Re: [osgi-dev] Time to move

2020-12-09 Thread Peter Kriens via osgi-dev
Wow, the end of an era ...

Thanks for all the work BJ. 

Kind regards,

Peter Kriens

> On 9 Dec 2020, at 21:41, BJ Hargrave via osgi-dev  
> wrote:
> 
> As part of the mission transfer to the Eclipse Foundation, the osgi.org and 
> mail.osgi.org servers will be soon shutting down.
>  
> So I have asked Eclipse [1] to provision a new osgi-users mail list to 
> replace this osgi-dev mail list. Once provisioned, the new list [2] should be 
> osgi-us...@eclipse.org.
>  
> Please look for this new list shortly and subscribe.
>  
> This is likely the last message from this list as the server is being 
> decommissioned.
>  
> See you on the other side...
>  
> [1]: https://bugs.eclipse.org/bugs/show_bug.cgi?id=569608 
> 
> [2]: https://accounts.eclipse.org/mailing-list/osgi-users 
> 
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Can I rely on service properties to glue together many combinations of services?

2020-09-18 Thread Peter Kriens via osgi-dev
If you want to do this without code, it is easy to set up using a model similar 
to the one I gave you, maybe even the same. However, it will be a horror to use 
for coders since it is inevitable that these traits will interact, and often in 
unintended ways. They will likely also have to work together, and then type 
safety helps a lot.

However, there was a saying in the '80s that if someone wanted a language where 
you could just say what you wanted, you should give him a lollipop. 
The problem you're trying to solve is ancient and would be worth a lot of money 
if you could pull it off. So far these systems generally tend to turn into 
bloated monsters that are hard to use for users as well as developers.

Unless your user trait integration is just selecting them on an object, you 
could take a look at Javascript. It is quite a nice integration language and 
there are zillions of tutorials. The trait model works very well for Javascript 
since you can easily add fields and functions and everything is visible to 
everybody. However, that means giving up type safety. 

Good luck, 

Peter Kriens

> On 18 Sep 2020, at 07:49, Zyle Moore  wrote:
> 
> Thanks so much for looking at this. I'll be playing around with the code you 
> provided and seeing where that gets me. An additional complexity I didn't 
> mention in the initial question was that I also want users, not developers, 
> of this game to have nearly as much flexibility with the "traits" as the 
> developers themselves. Meaning that developers would provide the traits, and 
> the users could configure them into whatever combination they want, without 
> the need to write code to wire it together, and I figured I could leverage the 
> developer way more easily for the users, but it seems I still have a lot of 
> thinking to do. Thanks again.
> 
> On Thu, Sep 17, 2020 at 3:31 AM Peter Kriens  > wrote:
> You're moving in a direction that can be a siren song. You basically want to 
> add what I'd call 'traits' to actors dynamically. Now, this is not so 
> difficult to do. In Smalltalk and Javascript this is quite easy to do because 
> you can dynamically extend the fields of an instance. In Java, everything 
> must statically compile. The advantage is of course that then everything is 
> statically compiled and the compiler can help you tremendously to write 
> error-free code.
> 
> If you implement the trait model in a type safe way, it tends to become a lot 
> harder to write code for it. That is why it is very easy to implement such a 
> model in languages like Smalltalk, Javascript, and I recall Python. You just 
> add extra fields and methods. Since nothing is type checked in the compiler, 
> you do not have to provide any metadata that makes the compiler happy. 
> However, if you want to do this type-safely in Java then the APIs become a lot 
> uglier. And if you're willing to give up type safety, the API is a lot harder 
> to use than plain Java because you need to use some kind of property model.
> 
> That said, there is a trick I am very fond of. Actually it is two tricks. 
> First, instead of focusing how to implement something it often helps to view 
> it from the perspective how to use something. As we're all implementers, we 
> tend to want to provide it all to our customers, as easy as possible. 
> However, we often forget that the client just has more knowledge we as 
> providers have. A given client of these
> traits knows exactly which traits are needed. For type safety, that client 
> must depend on the actual trait types. Something that is impossible for the 
> provider, the whole idea of the trait provider is to be oblivious of the 
> actual traits. Unlike the client ...
> 
> The second trick is the magic of interface proxies. It is already 6 years ago 
> I came up with the idea to use the annotation interfaces as a front for the 
> OSGi configuration data. I still love it every day I program a component. 
> I've used this trick in lots of places. It allows you to write code almost as 
> simple as in Python, Javascript, or Smalltalk but you get much more type 
> safety.
> 
> So if we look from the perspective of the provider we do actually have normal 
> Java type dependencies. A client that uses Entity and Position knows exactly 
> that it uses these types and it will have import packages to prove it. Since 
> it has type dependencies, OSGi tends to be invisible. If your code runs with 
> these dependencies all things work ok. This is very important in a dynamic 
> system.
> 
> So let's say we have a Trait Manager service. How would a client that has 
> Entity & Position want to work?
> 
> interface Position { int x(); int y(); }
> interface Entity { String id(); }
> 
> So if the client wants to use both at the same time, they can create a new 
> interface:
> 
> interface ClientView extends Position, Entity {}
> 
This new interface is your binding; it binds 

Re: [osgi-dev] Can I rely on service properties to glue together many combinations of services?

2020-09-17 Thread Peter Kriens via osgi-dev
You're moving in a direction that can be a siren song. You basically want to 
add what I'd call 'traits' to actors dynamically. Now, this is not so difficult 
to do. In Smalltalk and Javascript this is quite easy to do because you can 
dynamically extend the fields of an instance. In Java, everything must 
statically compile. The advantage is of course that then everything is 
statically compiled and the compiler can help you tremendously to write 
error-free code.

If you implement the trait model in a type safe way, it tends to become a lot 
harder to write code for it. That is why it is very easy to implement such a 
model in languages like Smalltalk, Javascript, and I recall Python. You just 
add extra fields and methods. Since nothing is type checked in the compiler, 
you do not have to provide any metadata that makes the compiler happy. However, 
if you want to do this type-safely in Java then the APIs become a lot uglier. 
And if you're willing 
to give up type safety, the API is a lot harder to use than plain Java because 
you need to use some kind of property model.

That said, there is a trick I am very fond of. Actually it is two tricks. 
First, instead of focusing on how to implement something it often helps to view 
it from the perspective of how to use something. As we're all implementers, we 
tend to want to provide it all to our customers, as easy as possible. However, 
we often forget that the client just has more knowledge than we as providers 
have. A given client of these
traits knows exactly which traits are needed. For type safety, that client must 
depend on the actual trait types. Something that is impossible for the 
provider, the whole idea of the trait provider is to be oblivious of the actual 
traits. Unlike the client ...

The second trick is the magic of interface proxies. It is already 6 years ago I 
came up with the idea to use the annotation interfaces as a front for the OSGi 
configuration data. I still love it every day I program a component. I've used 
this trick in lots of places. It allows you to write code almost as simple as 
in Python, Javascript, or Smalltalk but you get much more type safety.

So if we look from the perspective of the provider we do actually have normal 
Java type dependencies. A client that uses Entity and Position knows exactly 
that it uses these types and it will have import packages to prove it. Since it 
has type dependencies, OSGi tends to be invisible. If your code runs with these 
dependencies all things work ok. This is very important in a dynamic system.

So let's say we have a Trait Manager service. How would a client that has Entity 
& Position want to work?

interface Position { int x(); int y(); }
interface Entity { String id(); }

So if the client wants to use both at the same time, they can create a new 
interface:

interface ClientView extends Position, Entity {}

This new interface is your binding; it binds whatever types the client feels 
like. So assume we have a Trait Manager service:

interface TraitManager {
    <T> T create(Class<T> type);
}

The Trait Manager returns a proxy to an underlying host object. The proxy 
implements all the interfaces from the given type. Totally type safe and easy 
to use. 


void client() {
    ClientView cv = tm.create(ClientView.class);
    int x = cv.x();
    String id = cv.id();
}


How to implement this? Well, the proxy must dispatch the method calls to an 
object that knows the trait type, either Position or Entity. We want those 
types to come from a Trait service.

interface Trait {
    boolean handles(Class<?> c);
    void augment(TraitHost host);
}

The create method of the Trait Manager service could look like this:

public <T> T create(Class<T> view) {
    TraitHostImpl host = new TraitHostImpl();

    for (Class<?> c : view.getInterfaces()) {
        findTrait(c).augment(host);
    }
    return view.cast(Proxy.newProxyInstance(view.getClassLoader(),
        new Class<?>[] {
            view
        }, host));
}

The host object is a hidden object that contains the mapping from class -> 
instance. It acts as the Invocation Handler for the proxy
but it needs a public interface because the Trait services must be able to 
_add_ their trait to it. 

interface TraitHost {
    <T> void addInstance(Class<T> c, T instance);
}
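
A minimal sketch of what TraitHostImpl could look like (not shown in the 
original message; the assumption is that it doubles as the proxy's 
InvocationHandler, as described above):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

class TraitHostImpl implements TraitHost, InvocationHandler {
    // maps each trait interface to the instance a Trait service registered
    private final Map<Class<?>, Object> instances = new HashMap<>();

    @Override
    public <T> void addInstance(Class<T> c, T instance) {
        instances.put(c, instance);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // dispatch to the instance that owns the method's declaring interface;
        // Object methods such as toString/equals are not handled in this sketch
        Object target = instances.get(method.getDeclaringClass());
        if (target == null)
            throw new UnsupportedOperationException("no trait registered for " + method);
        return method.invoke(target, args);
    }
}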


So how would this look like if you want to provide a trait in a separate Bundle?

interface Entity { String id(); }

class EntityTrait implements TraitManager.Trait {
    @Override public void augment(TraitHost host) {
        String id = UUID.randomUUID().toString();
        host.addInstance(Entity.class, () -> id);
    }
    @Override public boolean 

Re: [osgi-dev] How granular should Capabilities be?

2020-08-24 Thread Peter Kriens via osgi-dev
Don't look for what you can do with capabilities, look what capabilities do for 
you :-)

The primary reason we have this model is that we tried to prevent runtime 
errors. We wanted to establish a dependency graph that was not just about 
depending on artifacts, but one that could specify more than that as well as 
handle some of the cases that do not fit well in a transitive graph like Maven's.

So never use a req/cap _unless you have a real problem_. They are a pain in the 
ass and should only be used to reduce a bigger pain. So I consider your musings 
about the granularity largely moot. What pain would be mitigated by 
describing a plus operator with a capability? It seems extremely unlikely that 
you would compose things on that level. The same goes for localized strings. Clearly a 
bundle that provides a German translation of a component would have a 
capability for this so that it can be selected in the runtime. But individual 
strings would be too low a level.

The vast majority of use cases of the req/cap model are the OSGi headers: 
Require-Bundle, Import-Package, Export-Package, Require-ExecutionEnvironment, 
etc. We needed something other than Maven's transitive dependency graph because 
OSGi tried to make a component model. The idea was that you assembled 
components that communicated via services. Using a transitive dependency model 
based on components quickly kills reusability because the transitive 
dependency tree explodes. I.e. Maven doesn't download the internet, people 
download the internet ... And a large transitive graph kills the fun of reuse, 
as anybody knows who has to keep dragging more and more dependencies to the 
installation. Worse, large graphs inevitably have version issues that most 
people actually paper over. So instead, we required a service and its package, 
and allowed different components to provide an implementation. That model works 
very well for reusability but it sucks for dependencies since there no 
longer is a singular dependency graph. At every dependency on a service you can 
have many implementations. To create a runtime, you need to make lots of 
decisions about the implementations. Since each implementation has its own 
dependency graph, this is complicated, and a big part of why the model can be 
painful.

One thing that drove the model for me personally was my addiction to the Mac. 
This was the early 2000s and Windows was king. More and more applications went full 
Windows because it was too hard to support them on Mac or the nascent Linux. 
I was hoping that a reusable market for components with specified interfaces 
would make it easy to support many platforms, or at least allow others to fill 
in the gaps. I also really disliked the idea that you would provide groups of 
bundles, aka features. These bundles have to live together in one framework & 
VM and there are very tight dependencies that need to be managed. The problem 
with features is that they don't solve the problem of combining bundles; they 
just move the problem up one level and then make the problem even harder.

Therefore the idea of a resolver was born that would take _all_ requirements 
into account for a given runtime. A resolver that would be able to verify that 
each component would have everything it needed when deployed, preventing 
dreaded 'Class Not Found' errors. At the core is basic software engineering 
that makes things work in all cases, not only the ones tested.

So what are the pains the model mitigates:

– Transitive dependencies based on identity (osgi.bundle). This is more or less 
what the Maven model supports
– Matching providers to consumers that do not have a direct dependency on each 
other (osgi.extender, osgi.service, osgi.package, osgi.ee, osgi.contract)
– Hardware dependencies. Since you can provide capabilities from the framework, 
an installation can provide unique capabilities. I.e. capabilities are 
explicitly not restricted to bundles alone, as you claim. This is very 
interesting when you do embedded development. For example, you could have a bundle 
that requires a high-resolution screen (a sketch follows this list).
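
For illustration, a hedged sketch of how such a hardware requirement could be 
expressed with the OSGi bundle annotations (the com.acme.display namespace and 
its attributes are invented for this example; on a device that actually has the 
screen, the matching capability would typically be supplied by the framework 
via its extra system capabilities rather than by a bundle):

import org.osgi.annotation.bundle.Requirement;
import org.osgi.service.component.annotations.Component;

// The annotation ends up as a Require-Capability header in this bundle's
// manifest, so a resolver will only place the bundle in installations
// that provide a matching com.acme.display capability.
@Requirement(namespace = "com.acme.display", filter = "(&(width>=1920)(height>=1080))")
@Component
public class HiResDashboard {
    // renders a UI that only makes sense on a large screen
}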

You seem to worry about incompatible capabilities and requirements setup. 
Don't. The idea is that they are used to make sure that _before_ you deploy 
your application all requirements and capabilities are checked. If you start to 
do weird things and you get weird results, they invariably end in resolving 
errors. As I said before, the basic idea is to minimize the use of this 
technique as much as possible. For normal OSGi use, you rarely need to make 
your own namespaces. When working on JPM I developed the annotations to create 
requirements and capabilities directly from the code. This is a great feature 
and I am happy it got standardized in R6 but you have to be extremely careful 
to not go ballistic because it is so easy to use. Don't start your own 
namespace until you're absolutely sure what you're doing and there is no Java 
way to do it. As you seem to sense, using too many different custom 

Re: [osgi-dev] Can separate OSGi Deployment Packages use the same types of Resources?

2020-08-21 Thread Peter Kriens via osgi-dev
You might want to read https://www.aqute.biz/2020/03/17/req-caps.html 


It is a bit of work to get your build to the level where you can use that 
model; it does require a high level on the Modularity Maturity Model [1]. 
However, you won't believe the benefits once you get there. On the highest 
level, it takes most of the pain out of development and allows you to focus on 
developing new functionality, the fun part. It is sadly our industry's best 
kept secret :-(

Kind regards,

Peter Kriens

[1]: https://www.aqute.biz/appnotes/modularity-maturity-model.html 




> On 21 Aug 2020, at 00:32, Zyle Moore  wrote:
> 
> Hi Peter, thanks for the clarification.
> 
> The spec did make it clear that what is contained in the DP is treated as a 
> unit, and that unit is very tightly contained. My thinking was that I could 
> have a tightly contained unit for the framework, and a tightly contained unit 
> for the resources, but this doesn't seem to be the right tool for the job.
> 
> You mentioned a resolver; what were you talking about when you referenced 
> this?
> 
> I'll ask another question, under a different subject, that gets to why I 
> wanted to use DPs in the first place, instead of trying to un-XY-problem it.
> 
> On Thu, Aug 20, 2020 at 1:53 AM Peter Kriens  > wrote:
> Yes, Deployment Packages may not overlap. As I recall we were never shy about 
> this in the spec since it was controversial? 
> 
> The problems you need to solve when you have the same bundle in different 
> DP's with different versions were deemed too much for the hardware we were 
> considering in 2003. The problem with DPs is that it is too easy to create 
> incompatible packages, the model has a lot of error state. If you have plenty 
> of disk space you and you can have plenty of memory and CPU you can isolate 
> many of those problems. However, in a VM on an embedded device there is no 
> such luxury. 
> 
> I also think Deployment Packages are superseded by what you can do with the 
> resolver. A resolver takes a set of initial requirements and calculates a 
> solution that takes all these problems like versions of the same bundle into 
> account. You then get a resolution for your platform. 
> 
> For my customers, I've helped develop several management systems. The one 
> that works best uses the initial requirements as the configuration. When a 
> remote gateway needs to be updated, we calculate the actual bundles. The 
> gateway then calculates the differences and installs/removes/updates the 
> bundles accordingly. 
> 
> This model is a lot less error prone.
> 
> Kind regards,
> 
> Peter Kriens
> 
> 
> 
> 
> > On 20 Aug 2020, at 03:30, Zyle Moore via osgi-dev  > > wrote:
> > 
> > Posted this at StackOverflow, but thought it might be better to ask here.
> > ---
> > The Deployment Admin Specification defines a way of creating Deployment 
> > Packages, which can contain Bundles and Resources.
> > 
> > I wanted to use the Deployment Admin to install two types of Deployment 
> > Packages. The first Deployment Package would contain the "framework" of the 
> > program; including all infrastructure level services and bundles, like the 
> > Configuration Admin, Declarative Services, etc. The second Deployment 
> > Package would contain the "data" of the program; almost exclusively 
> > Resources to be processed by the Resource Processors in the framework 
> > package.
> > 
> > In this situation, the Resource Processors would be in Customizer bundles 
> > in the framework package.
> > 
> > When trying to do it this way, though, the second package is recognized as 
> > a foreign deployment package, and therefore can't be installed. The 
> > Specification, (114.5, Customizer) mentions
> > 
> > > As a Customizer bundle is started, it should register one or more 
> > > Resource Processor services. These
> > Resource Processor services must only be used by resources originating from 
> > the same Deployment
> > Package. Customizer bundles must never process a resource from another 
> > Deployment Package,
> > which must be ensured by the Deployment Admin service.
> > 
> > To check if this was true, I looked to the Auto Configuration Specification 
> > (115), which specifies an Autoconf Resource Processor, which is an 
> > extension of the Deployment Package Specification. It specifies a Resource 
> > Processor implementation, and processes Resources from the Deployment 
> > Package.
> > 
> > Based on this, if I wanted to use the Auto Configuration Specification, 
> > then I would seemingly need to include the Auto Configuration Bundle, and 
> > the Autoconf Resources within the same Deployment Package. This seems to 
> > result in another problem though. The Auto Configuration Bundle wouldn't be 
> > able to be used by another package, since a Resource 

Re: [osgi-dev] Can separate OSGi Deployment Packages use the same types of Resources?

2020-08-20 Thread Peter Kriens via osgi-dev
Yes, Deployment Packages may not overlap. As I recall we were never shy about 
this in the spec since it was controversial? 

The problems you need to solve when you have the same bundle in different DP's 
with different versions were deemed too much for the hardware we were 
considering in 2003. The problem with DPs is that it is too easy to create 
incompatible packages; the model has a lot of error state. If you have plenty 
of disk space and you can have plenty of memory and CPU, you can isolate 
many of those problems. However, in a VM on an embedded device there is no such 
luxury. 

I also think Deployment Packages are superseded by what you can do with the 
resolver. A resolver takes a set of initial requirements and calculates a 
solution that takes all these problems like versions of the same bundle into 
account. You then get a resolution for your platform. 

For my customers, I've helped develop several management systems. The one 
that works best uses the initial requirements as the configuration. When a 
remote gateway needs to be updated, we calculate the actual bundles. The 
gateway then calculates the differences and installs/removes/updates the 
bundles accordingly. 

This model is a lot less error prone.

Kind regards,

Peter Kriens




> On 20 Aug 2020, at 03:30, Zyle Moore via osgi-dev  
> wrote:
> 
> Posted this at StackOverflow, but thought it might be better to ask here.
> ---
> The Deployment Admin Specification defines a way of creating Deployment 
> Packages, which can contain Bundles and Resources.
> 
> I wanted to use the Deployment Admin to install two types of Deployment 
> Packages. The first Deployment Package would contain the "framework" of the 
> program; including all infrastructure level services and bundles, like the 
> Configuration Admin, Declarative Services, etc. The second Deployment Package 
> would contain the "data" of the program; almost exclusively Resources to be 
> processed by the Resource Processors in the framework package.
> 
> In this situation, the Resource Processors would be in Customizer bundles in 
> the framework package.
> 
> When trying to do it this way, though, the second package is recognized as a 
> foreign deployment package, and therefore can't be installed. The 
> Specification, (114.5, Customizer) mentions
> 
> > As a Customizer bundle is started, it should register one or more Resource 
> > Processor services. These
> Resource Processor services must only be used by resources originating from 
> the same Deployment
> Package. Customizer bundles must never process a resource from another 
> Deployment Package,
> which must be ensured by the Deployment Admin service.
> 
> To check if this was true, I looked to the Auto Configuration Specification 
> (115), which specifies an Autoconf Resource Processor, which is an extension 
> of the Deployment Package Specification. It specifies a Resource Processor 
> implementation, and processes Resources from the Deployment Package.
> 
> Based on this, if I wanted to use the Auto Configuration Specification, then 
> I would seemingly need to include the Auto Configuration Bundle, and the 
> Autoconf Resources within the same Deployment Package. This seems to result 
> in another problem though. The Auto Configuration Bundle wouldn't be able to 
> be used by another package, since a Resource Processor can only be used by 
> Resources in the same package; additionally that Bundle couldn't be used by a 
> separate, unrelated deployment package, because the Autoconf Bundle is 
> already registered by the first package.
> 
> It seems like if two Deployment Packages wanted to use Autoconf Resources, we 
> would be blocked because the Resource Processor is either installed as a 
> Bundle, and therefore unusable by the Deployment Admin because it is not in a 
> Deployment Package, or as part of a single Deployment Package, and no other 
> Deployment Package can ever use that version of that Bundle.
> 
> Is my understanding correct then, that two Deployment Packages not only have 
> to be separate, but also completely unrelated, to the point of not reusing 
> even a single Bundle? If this is the case, then does that mean that it is 
> impossible to have two Deployment Packages with Autoconf Resources; or at 
> least, two packages with the same type of Resource?
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] OSGi Compendium vs Enterprise specification

2020-05-06 Thread Peter Kriens via osgi-dev
OSGi consists of the Core specification, which defines the framework, and 
the Compendium specification, which is the aggregate of _all_ current OSGi 
_service_ specifications. Each service specification has a unique chapter 
number.

The Enterprise, Residential, and IoT specifications are a subset of the 
Compendium and use the same chapter numbers. The purpose of these aggregates is 
to select the specifications that are useful for a specific audience. However, 
the same chapter number and version in different aggregate specifications is 
exactly the same service specification. 

Each specification has a PDF that contains the textual specification, a test 
suite, a designated reference implementation and a JAR containing all the Java 
APIs in that specification. 

The aggregate JARs, compendium, enterprise, etc, are not a severe problem to 
compile against but turned out to be problematic in the runtime assembly. 
Because they combine a set of service specification versions it becomes very 
hard to use an older or newer version of a specific service. You're basically 
locked into that set of versions. This can severely restrict your choice in the 
set of bundles in your runtime. There you'd like to pick and choose and not be 
artificially constrained.

Therefore, a few years ago, the OSGi Alliance started to publish a JAR per service 
specification as well. This allows you to compile against any version of a 
service and it allows the runtime assembly a lot of freedom in mixing and 
matching versions.

Hope this helps, kind regards,

Peter Kriens


> On 6 May 2020, at 09:35, CHINCHOLE, SHANTESHWAR ASHOKRAO via osgi-dev 
>  wrote:
> 
> Hi,
>  
> What exactly is the difference between compendium and enterprise 
> specification of OSGi. What was the intent of introducing these 2 different 
> specifications?
>  
> I came across a note on the forum saying – “The compendium jar was never 
> meant to be used at runtime but people abused it anyway”. Does it mean that 
> we can drop usage of the compendium jar going forward?
>  
> Thanks & Regards,
> Shanteshwar C
>  
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Correct processor types for ARM?

2020-04-14 Thread Peter Kriens via osgi-dev
I agree but I am not sure any member has enough interest to drive that through 
a spec. It would still require the VMs to report it properly.

Kind regards,

Peter Kriens

> On 13 Apr 2020, at 19:57, chris.g...@kiffer.be wrote:
> 
> Hi Peter,
> 
> I would argue that it is "os.arch" which is a bit of a mess, because it
> attempts to represent too much in a single name. Compare this with the set
> of "triples" here :
> http://llvm.org/doxygen/classllvm_1_1Triple.html
> 
> I would argue that these definitions would be a more useful way to specify
> the target for native code, as they correspond to the way that code is
> compiled for the various targets.
> 
> Kind Regards
> 
> Chris
> 
> 
>> That is my experience on the ARM processor, there are so many variations,
>> 32/64, le/be, floating point/no floating point, etc. that it is a bit of a
>> mess.
>> 
>> In general, on ARM I see people define the properties themselves to
>> whatever the VM they use reports.
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>> 
>>> On 13 Apr 2020, at 18:22, Markus Rathgeb via osgi-dev
>>>  wrote:
>>> 
>>> This has been already done by someone here:
>>> https://stackoverflow.com/a/57893125
>>> It seems os.arch is not really "stable" at all:
>>> https://bugs.openjdk.java.net/browse/JDK-8167584
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
>> 
> 
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] Correct processor types for ARM?

2020-04-13 Thread Peter Kriens via osgi-dev
That is my experience on the ARM processor: there are so many variations, 
32/64, le/be, floating point/no floating point, etc., that it is a bit of a mess.

In general, on ARM I see people define the properties themselves to whatever 
the VM they use reports.

Kind regards,

Peter Kriens


> On 13 Apr 2020, at 18:22, Markus Rathgeb via osgi-dev 
>  wrote:
> 
> This has been already done by someone here: 
> https://stackoverflow.com/a/57893125
> It seems os.arch is not really "stable" at all:
> https://bugs.openjdk.java.net/browse/JDK-8167584
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] Intermittent failure to getService

2020-03-11 Thread Peter Kriens via osgi-dev
It all depends ... In general, state can be managed in many different ways. 
Prototype scope is too much in your face, too much boilerplate; that is why I 
avoid it. It works nicely behind the scenes though, as in DS and CDI. I do not 
think I've ever used them so far. (Which is a self-perpetuating truth, I know.)

PK



> On 11 Mar 2020, at 13:26, Alain Picard  wrote:
> 
> Peter and Tim,
> 
> Thanks for the pointers. The error was caused by some invalid use of a 
> disposed object. This was using factory components and I switched all of it 
> to use prototype components instead which IMHO are easier to manage.
> 
> And Peter to your question about using prototype scope, those objects contain 
> state and it is my understanding that prototype scope is required in those 
> cases.
> 
> Thanks
> Alain
> 
> 
> On Mon, Mar 2, 2020 at 9:39 AM Peter Kriens  > wrote:
> Some remarks:
> 
> * Yes, it is thread safe. In OSGi we mark all thread safe types with the 
> @ThreadSafe annotation.
> * The error message is not in the log you listed. Since the log contains a 
> deactivation message, I hope you're correctly handling the case that you're 
> being called after deactivation? Seems too simple, but anyway ... :-)
> 
> * And for something completely different, is there any reason you use the 
> prototype scope? Your real code might need it, but for this code it just looks 
> like it makes things accidentally complex?
> * And last but not least, you seem to be using slf4j? Did you wire up the 
> OSGi log to it? I've seen cases where the information was in the OSGi log but 
> those messages were discarded.
> 
> Kind regards,
> 
>   Peter Kriens
> 
> 
>> On 2 Mar 2020, at 12:03, Alain Picard via osgi-dev > > wrote:
>> 
>> Question: The method getDiagnosticForEObject can be called by different 
>> threads. Can this be the source of the issue? I see that 
>> ComponentServiceObject is tagged as ThreadSafe, but?
>> 
>> Alain
>> 
>> 
>> On Mon, Mar 2, 2020 at 5:47 AM Alain Picard > > wrote:
>> Tim,
>> 
>> I don't think so. BaValidationManagerExt is used in only 1 place and it is 
>> instantiated in activate and released in deactivate:
>> @Component(
>> factory = ValidationManager.CONFIG_FACTORY, 
>> service = ValidationManager.class
>> )
>> public final class CoreValidationManager extends 
>> CDODefaultTransactionHandler1 implements ValidationManager, 
>> CDOTransactionHandler2 {
>> ...
>> @Reference(scope=ReferenceScope.PROTOTYPE_REQUIRED)
>> private ComponentServiceObjects<ValidationManagerExt> extenderFactory;
>> private ValidationManagerExt extender;
>> 
>> @Activate
>> private void activate() {
>> log.trace("Activating {}", getClass()); //$NON-NLS-1$
>> 
>> extender = extenderFactory.getService();
>> }
>> 
>> @Deactivate
>> private void deactivate() {
>> log.trace("Deactivating {}", getClass()); //$NON-NLS-1$
>> extenderFactory.ungetService(extender);
>> }
>> 
>> Cheers,
>> Alain
>> 
>> Alain Picard
>> Chief Strategy Officer
>> Castor Technologies Inc
>> o:514-360-7208
>> m:813-787-3424
>> 
>> pic...@castortech.com 
>> www.castortech.com 
>> 
>> On Mon, Mar 2, 2020 at 3:40 AM Tim Ward > > wrote:
>> Hi Alain,
>> 
>> Is it possible that someone has a reference to a BaValidationManagerExt 
>> service instance that they aren’t releasing after ungetting it (or that 
>> they’re holding onto after it has been unregistered)? It might be an SCR 
>> bug, but it’s more likely to be some code holding onto a component instance 
>> that it shouldn’t.
>> 
>> Best Regards,
>> 
>> Tim
>> 
>>> On 29 Feb 2020, at 13:29, Alain Picard via osgi-dev >> > wrote:
>>> 
>>> Hi
>>> 
>>> I am having a very intermittent issue with getService on a prototype 
>>> component. This is called hundreds of times and I put a breakpoint a few 
>>> weeks ago and have now gotten the error.
>>> 
>>> I have this class:
>>> @Component(scope=ServiceScope.PROTOTYPE,
>>> property= org.osgi.framework.Constants.SERVICE_RANKING + ":Integer=10"
>>> )
>>> public final class BaValidationManagerExt implements ValidationManagerExt {
>>> private final Logger log = LoggerFactory.getLogger(getClass());
>>> 
>>> @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
>>> private ComponentServiceObjects<Validator> validatorFactory;
>>> 
>>> @Activate
>>> private void activate() {
>>> log.trace("Activating {}/{}", getClass(), System.identityHashCode(this)); 
>>> //$NON-NLS-1$
>>> }
>>> 
>>> @Deactivate
>>> private void deactivate() {
>>> log.trace("Deactivating {}/{}", getClass(), System.identityHashCode(this)); 
>>> //$NON-NLS-1$
>>> }
>>> 
>>> @Override
>>> public Diagnostic getDiagnosticForEObject(EObject eObj) {
>>> log.trace("Getting diagnostic for {}", eObj); //$NON-NLS-1$
>>> Validator validator = validatorFactory.getService();
>>> 
>>> if (validator != null) {
>>> try {
>>> return 

[osgi-dev] Android, OSGi Connect, & bnd

2020-03-03 Thread Peter Kriens via osgi-dev
I had to do a Proof of Concept of using OSGi applications on an Android 
embedded platform. This turned out to need OSGi Connect, an upcoming OSGi Spec. 
If you're interested:

https://www.aqute.biz/2020/03/02/osgi-on-android.html 


Kind regards,

Peter Kriens

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Intermittent failure to getService

2020-03-02 Thread Peter Kriens via osgi-dev
Some remarks:

* Yes, it is thread safe. In OSGi we mark all thread safe types with the 
@ThreadSafe annotation.
* The error message is not in the log you listed. Since the log contains a 
deactivation message, I hope you're correctly handling the case that you're 
being called after deactivation? Seems too simple, but anyway ... :-)

* And for something completely different, is there any reason you use the 
prototype scope? Your real code might need it, but for this code it just looks 
like it makes things accidentally complex?
* And last but not least, you seem to be using slf4j? Did you wire up the OSGi 
log to it? I've seen cases where the information was in the OSGi log but those 
messages were discarded.

Kind regards,

Peter Kriens


> On 2 Mar 2020, at 12:03, Alain Picard via osgi-dev  
> wrote:
> 
> Question: The method getDiagnosticForEObject can be called by different 
> threads. Can this be the source of the issue? I see that 
> ComponentServiceObject is tagged as ThreadSafe, but?
> 
> Alain
> 
> 
> On Mon, Mar 2, 2020 at 5:47 AM Alain Picard  > wrote:
> Tim,
> 
> I don't think so. BaValidationManagerExt is used in only 1 place and it is 
> instantiated in activate and released in deactivate:
> @Component(
> factory = ValidationManager.CONFIG_FACTORY, 
> service = ValidationManager.class
> )
> public final class CoreValidationManager extends 
> CDODefaultTransactionHandler1 implements ValidationManager, 
> CDOTransactionHandler2 {
> ...
> @Reference(scope=ReferenceScope.PROTOTYPE_REQUIRED)
> private ComponentServiceObjects<ValidationManagerExt> extenderFactory;
> private ValidationManagerExt extender;
> 
> @Activate
> private void activate() {
> log.trace("Activating {}", getClass()); //$NON-NLS-1$
> 
> extender = extenderFactory.getService();
> }
> 
> @Deactivate
> private void deactivate() {
> log.trace("Deactivating {}", getClass()); //$NON-NLS-1$
> extenderFactory.ungetService(extender);
> }
> 
> Cheers,
> Alain
> 
> Alain Picard
> Chief Strategy Officer
> Castor Technologies Inc
> o:514-360-7208
> m:813-787-3424
> 
> pic...@castortech.com 
> www.castortech.com 
> 
> On Mon, Mar 2, 2020 at 3:40 AM Tim Ward  > wrote:
> Hi Alain,
> 
> Is it possible that someone has a reference to a BaValidationManagerExt 
> service instance that they aren’t releasing after ungetting it (or that 
> they’re holding onto after it has been unregistered)? It might be an SCR bug, 
> but it’s more likely to be some code holding onto a component instance that 
> it shouldn’t.
> 
> Best Regards,
> 
> Tim
> 
>> On 29 Feb 2020, at 13:29, Alain Picard via osgi-dev > > wrote:
>> 
>> Hi
>> 
>> I am having a very intermittent issue with getService on a prototype 
>> component. This is called hundreds of times and I put a breakpoint a few 
>> weeks ago and have now gotten the error.
>> 
>> I have this class:
>> @Component(scope=ServiceScope.PROTOTYPE,
>> property= org.osgi.framework.Constants.SERVICE_RANKING + ":Integer=10"
>> )
>> public final class BaValidationManagerExt implements ValidationManagerExt {
>> private final Logger log = LoggerFactory.getLogger(getClass());
>> 
>> @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
>> private ComponentServiceObjects<Validator> validatorFactory;
>> 
>> @Activate
>> private void activate() {
>> log.trace("Activating {}/{}", getClass(), System.identityHashCode(this)); 
>> //$NON-NLS-1$
>> }
>> 
>> @Deactivate
>> private void deactivate() {
>> log.trace("Deactivating {}/{}", getClass(), System.identityHashCode(this)); 
>> //$NON-NLS-1$
>> }
>> 
>> @Override
>> public Diagnostic getDiagnosticForEObject(EObject eObj) {
>> log.trace("Getting diagnostic for {}", eObj); //$NON-NLS-1$
>> Validator validator = validatorFactory.getService();
>> 
>> if (validator != null) {
>> try {
>> return validator.runValidation(false, Collections.singletonMap(eObj, new 
>> HashSet<>()),
>> new NullProgressMonitor()).getB();
>> }
>> finally {
>> validatorFactory.ungetService(validator);
>> }
>> }
>> else {
>> log.error("Validator Service not found for {}", eObj, new Throwable()); 
>> //$NON-NLS-1$
>> return Diagnostic.CANCEL_INSTANCE;
>> }
>> }
>> }
>> 
>> and the validator:
>> @Component(
>> scope = ServiceScope.PROTOTYPE,
>> property= org.osgi.framework.Constants.SERVICE_RANKING + ":Integer=10"
>> )
>> public final class BaValidator implements Validator {
>> private final Logger log = LoggerFactory.getLogger(getClass());
>> 
>> private Map<EObject, Set<EObject>> elementsToValidate;
>> private Set<EObject> validated = Sets.newHashSet();
>> private boolean batch;
>> 
>> private EditingDomain domain;
>> private AdapterFactory adapterFactory;
>> 
>> @Reference
>> private volatile List validationProviders;  //NOSONAR as 
>> per OSGi 112.3.9.1 
>> 
>> @Reference
>> private ValidationUtils validationUtils;
>> 
>> @Activate
>> private void activate() {
>> log.trace("Activating {}/{}", getClass(), 

[osgi-dev] Bndtools starter

2020-03-02 Thread Peter Kriens via osgi-dev
I've written a little booklet and created a number of videos to make it easier 
to get started with bndtools. 

https://bndtools.org/workspace.html

If you have any feedback, let me know. I heard from a lot of people that they 
had a hard time starting with Bndtools, which is a real shame. When properly 
set up, it is hard to beat, with the Gradle build for GitHub Actions and the 
almost script-language-like fluidity of Bndtools.

Again, let me know if I can make improvements. And if you're interested in 
providing an audio track for the videos as a native speaker with a better 
voice, don't hesitate to contact me. It would be appreciated.

Kind regards,

Peter Kriens

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] @ConsumerType vs @ProviderType

2019-10-17 Thread Peter Kriens via osgi-dev
It is surprisingly simple. :-)

Let's assume Oracle adds a new method to `java.nio.file.Path`. Would you care? 
Unless you work for Azul, you likely couldn't give a rat's ass. Once you use 
that new method you care, but before that moment it is irrelevant to you. That 
makes you a _consumer_ of the `java.nio.file` package. Azul and Oracle are, 
however, _providers_ of this package. Oracle and Azul do care when this method 
gets added because you would be very upset if the method is not there. However, 
if Oracle added a method to `java.io.Serializable` you would likely be pretty 
upset because you are a _consumer_ of the `java.io` package.

When you make a change to an API package you might break users of that package. 
To distinguish between the Azuls and Oracles of this world and mere mortals, 
OSGi/bnd allows you to mark a type as **only** affecting _providers_. I.e. 
the Path interface would be a _provider type_. (@ConsumerType is the default 
and just breaks everybody when you make a binary incompatible change.)

The archetypical example is Event Admin. If we add a method to the EventAdmin 
interface we only break the 5 or 6 implementations of Event Admin. And we 
should break those implementations because they need to implement this new 
method. However, the 50 million bundles that depend on Event Admin are not 
affected since this change is fully backward compatible for them. However, if 
we add a method to Event Handler then we break all those 50 million bundles 
that implement Event Handler. That is why the `EventHandler` interface is a 
@ConsumerType.
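
To make this concrete, this is roughly how the real Event Admin API is 
annotated (paraphrased from org.osgi.service.event; each interface lives in its 
own source file):

import org.osgi.annotation.versioning.ConsumerType;
import org.osgi.annotation.versioning.ProviderType;
import org.osgi.service.event.Event;

// Implemented only by the handful of Event Admin providers, so adding a
// method here only breaks those providers.
@ProviderType
public interface EventAdmin {
    void postEvent(Event event);
    void sendEvent(Event event);
}

// Implemented by every bundle that wants to receive events, so adding a
// method here would break all of those consumers.
@ConsumerType
public interface EventHandler {
    void handleEvent(Event event);
}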

So the sole purpose of the @ProviderType annotation is to make a breaking 
change but indicate that only, the usually smaller number of, bundles that 
_provide_ the API should be broken instead of the majority of bundles that only 
_consume_ this API.

The key thing to realize is that provider/consumer is about the API **package** 
and not an interface. If you make a binary incompatible change, the key 
question is: does it affect everybody (consumers and providers) or can I limit it 
to a smaller number of bundles (providers)?

PK

> On 16 Oct 2019, at 21:15, Leschke, Scott via osgi-dev 
>  wrote:
> 
> I’m trying to wrap my head around these two annotations and I’m struggling a 
> bit. Is the perspective of the provider and consumer roles from a bundle 
> perspective or an application perspective?
> I’ve read the Semantic Versioning whitepaper a number of times and it doesn’t 
> really clear things up for me definitively.
>  
> If an application has an API bundle that only exposes Interfaces and Abstract 
> classes, and those interfaces are implemented by other bundles in the app, 
> are those bundles providers or consumers? My inclination is that they’re 
> providers but then when does a bundle become a consumer? Given that API 
> bundles are compile only (this is the rule right?), would a good rule be that 
> if you implement the interface and export the package it’s in, that type 
> would be @ProviderType, and if you don’t implement it it’s @ConsumerType?
>  
> It would seem to me that @ProviderType would be the default using this logic, 
> as opposed to @ConsumerType, which leads me to believe that I’m thinking 
> about this wrong.
>  
> Any help appreciated as always.
>  
> Regards,
>  
> Scott Leschke
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Best way to move an OSGi application to Docker plus K8 ?

2019-08-07 Thread Peter Kriens via osgi-dev
Christian,

Yes, for JPM, which feels very long ago, I had a base image and then added only 
an executable JAR as a new layer.

However, since I've seen the stats for porn sites on the Internet, the size of 
images worries me a lot less :-) I've spent a lot of my working life trying 
to make things work in small spaces and trying to make them efficient. However, 
in the last decade I've been bypassed by several people that just could not 
care less about these issues and thereby gained speed. So I am trying to also 
become less obsessed with efficiency because it does remove a lot of 
complexity. Today, quite often keeping it simple and stupid is actually good 
enough.

Kind regards,

Peter Kriens



> On 7 Aug 2019, at 09:49, Christian Schneider  wrote:
> 
> Hi Peter,
> 
> I also thought a bit about how to possibly make the deployments smaller.
> 
> I see three possible solutions:
> 1. Put the main version of a system in a docker layer and upgrades in a layer 
> on top.
> 
> An example where this might make sense is Adobe Experience Manager. You put 
> the bundles of AEM 6.5 in one docker layer. Each file name must uniquely map 
> to a bundle symbolic name + version (or maven coordinates).
> Then for any updates you overwrite or add the changed bundles in a layer on 
> top and overwrite the runbundles list.
> 
> Even if you write out the full list of bundles again for any update docker 
> will be able to create an effective diff that makes your new docker image 
> pretty small.
> 
> 2. Put your dependencies in one docker layer and your own bundles in another
> 
> Typically you have more changes to your own code than changes to 
> dependencies. Also typically your own code is much smaller than your 
> dependencies. 
> 
> So this approach can also make your docker images (needed for an upgrade) a 
> lot smaller.
> 
> 3. Pull bundles from maven repository (maybe with local caching)
> 
> In this case you only deploy the list of runbundles with maven coordinates 
> and a starter in the docker image. The starter then downloads the bundles 
> from a maven repo. To make this fast you would need a local cache of bundles 
> that survives restart of a pod. (Not sure how to best achieve this for k8s 
> but I am sure it can be done).
> 
> What do you think? What concept do you have in mind?
> 
> Christian
> 
> On Tue, 6 Aug 2019 at 16:58, Peter Kriens via osgi-dev 
> <osgi-dev@mail.osgi.org> wrote:
> I don't see any reason why upgrading your existing workflow with a static 
> Docker container and then updating the bundles would not work.
> 
> However ...
> 
> Just look at the number of moving parts that you then need in runtime. 
> Creating a Docker image in the build is trivial and deploying it to 
> Kubernetes is, well less than trivial. You need to have a lot of things going 
> right on different systems and much more configuration to make the dynamic 
> update work. All these moving parts can fail.
> 
> The only issue is size & time. If you need to deploy a full docker image to 
> hundreds of thousands of machines the update can be done more efficiently 
> using OSGi bundles. And actually that is a solution I'm working on at the 
> moment. However, if time/size is not a concern I find a full docker image 
> disturbingly hard to beat. About 5 years ago I ran the 'Java Package Manager' 
> web site on Kubernetes and it worked scarily easy and reliable.
> 
> When I am hired to help, one of the first things I look for is reducing as many 
> of the moving parts as possible. Yes, you can get anything to work, but 
> reducing the possible error cases really increases reliability imho.
> 
> Kind regards,
> 
> Peter Kriens
> 
> 
> > On 6 Aug 2019, at 16:28, Cristiano via osgi-dev <osgi-dev@mail.osgi.org> wrote:
> > 
> > Hello all,
> > 
> > I have a challenging POC to do in order to dockerize an existing OSGi
> > based application and then deploy it to a Kubernetes based cloud.
> > 
> > I'm not totally aware of k8 features yet, so I have some doubts that I
> > would like to discuss here.
> > 
> > The main doubt is related to our existing upgrading process, in which
> > we currently upload an R5 repository to a webserver and then a node
> > management agent bundle accesses it and upgrades the necessary app bundles.
> > 
> > Many examples I saw on the web create a docker image in their build
> > process and deliver an image at each dev cycle. I don't like much of
> > this idea, so initially I thought to mimic our existing process in a
> > docker container just setting up a Volume in order to upload the newer
> > repositories.
> > 
> >

Re: [osgi-dev] Best way to move an OSGi application to Docker plus K8 ?

2019-08-06 Thread Peter Kriens via osgi-dev
I don't see any reason why upgrading your existing workflow with a static 
Docker container and then updating the bundles would not work.

However ...

Just look at the number of moving parts that you then need in runtime. Creating 
a Docker image in the build is trivial and deploying it to Kubernetes is, well 
less than trivial. You need to have a lot of things going right on different 
systems and much more configuration to make the dynamic update work. All these 
moving parts can fail.

The only issue is size & time. If you need to deploy a full docker image to 
hundreds of thousands of machines the update can be done more efficiently using 
OSGi bundles. And actually that is a solution I'm working on at the moment. 
However, if time/size is not a concern I find a full docker image disturbingly 
hard to beat. About 5 years ago I ran the 'Java Package Manager' web site on 
Kubernetes and it worked scarily easy and reliable.

When I am hired to help, one of the first things I look for is reducing as many of 
the moving parts as possible. Yes, you can get anything to work, but reducing 
the possible error cases really increases reliability imho.

Kind regards,

Peter Kriens


> On 6 Aug 2019, at 16:28, Cristiano via osgi-dev  
> wrote:
> 
> Hello all,
> 
> I have a challenging POC to do in order to dockerize an existing OSGi
> based application and then deploy it to a Kubernetes based cloud.
> 
> I'm not totally aware of k8 features yet, so I have some doubts that I
> would like to discuss here.
> 
> The main doubt is related to our existing upgrading process, in which
> we currently upload an R5 repository to a webserver and then a node
> management agent bundle accesses it and upgrades the necessary app bundles.
> 
> Many examples I saw in the web creates a docker image in their building
> process and delivery an image at each dev cycle. I don't like much of
> this idea, so initially I thought to mimic our existent process in a
> docker container just setting up a Volume in order to upload the newer
> repositories.
> 
> So I have created two docker images for testing this locally. One image
> contains the OSGi container and framework bundles (which do not change
> much) and another image that extends the first with only the app bundles.
> It has worked well running locally.
> 
> Would this also work in K8 ?  What would happen if I need to scale and
> then create multiple PODS for this application?
> 
> thanks for any help.
> 
> best regards,
> 
> Cristiano
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] Conditional Target

2019-06-26 Thread Peter Kriens via osgi-dev
I was vaguely aware of this RFC, but that is an extension to DS components, 
isn't it? That will have more capabilities, I assume, since you're not restricted 
to the features of the current spec. 

This is basically a continuation of the Aggregate Service but with a bit smaller 
scope. The Aggregate Service had some basic limitations because not all 
components shared the same view at the same time.

It is a tricky area.

I'll read the RFC in detail. Kind regards,

Peter Kriens



> On 26 Jun 2019, at 14:29, BJ Hargrave  wrote:
> 
> This seems just like 
> https://github.com/osgi/design/blob/master/rfcs/rfc0242/rfc-0242-Condition-Service.pdf
>  
> Are you making an alternate design? Or did you not know of this RFC?
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
>  
>  
> ----- Original message -
> From: Peter Kriens via osgi-dev 
> Sent by: osgi-dev-boun...@mail.osgi.org
> To: via bndtools-users , 
> osgi-dev@mail.osgi.org
> Cc:
> Subject: [EXTERNAL] [osgi-dev] Conditional Target
> Date: Wed, Jun 26, 2019 08:26
>  
> I've developed a service that you can use to block the activation of a DS 
> component until a set of other services are ready. This is related to the 
> whiteboard pattern when the sender wants to be sure a certain set of 
> whiteboard services are present. Normally you can only assert the properties 
> of 1 service but this allows you to use a filter to select the aggregated 
> properties of a group of services. An example is when you need a set of at 
> least 3 remote services where there are at least 2 unique regions. 
>  
> For example, you want to block until the average of the `foo` properties on 
> the registered Foo services is higher than 2:
>  
> @Reference( target="([avg]foo>=2)" )
> ConditionalTarget<Foo> foos;
>  
> The Conditional Target object provides a direct reference to the services and 
> service references being tracked as well as the aggregated properties. 
>  
> It is described in 
>  
> https://github.com/aQute-os/biz.aQute.osgi.util/tree/master/biz.aQute.osgi.conditionaltarget
>  
> And you can find the binary artifact in:
>  
> https://oss.sonatype.org/content/repositories/snapshots/biz/aQute/biz.aQute.osgi.conditionaltarget/
>  
> Feedback appreciated on the idea and execution. If people like this, I can 
> submit it to Apache Felix if they're interested.
>  
> Kind regards,
>  
> Peter Kriens
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>  
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

[osgi-dev] Conditional Target

2019-06-26 Thread Peter Kriens via osgi-dev
I've developed a service that you can use to block the activation of a DS 
component until a set of other services are ready. This is related to the 
whiteboard pattern when the sender wants to be sure a certain set of whiteboard 
services are present. Normally you can only assert the properties of 1 service 
but this allows you to use a filter to select the aggregated properties of a 
group of services. An example is when you need a set of at least 3 remote 
services where there are at least 2 unique regions. 

For example, you want to block until the average of the `foo` properties on the 
registered Foo services is higher than 2:

@Reference( target="([avg]foo>=2)" )
ConditionalTarget<Foo> foos;

The Conditional Target object provides a direct reference to the services and 
service references being tracked as well as the aggregated properties. 

It is described in 


https://github.com/aQute-os/biz.aQute.osgi.util/tree/master/biz.aQute.osgi.conditionaltarget
 


And you can find the binary artifact in:


https://oss.sonatype.org/content/repositories/snapshots/biz/aQute/biz.aQute.osgi.conditionaltarget/
 


Feedback appreciated on the idea and execution. If people like this, I can 
submit it to Apache Felix if they're interested.

Kind regards,

Peter Kriens___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-18 Thread Peter Kriens via osgi-dev
> Considering this, lowering a lower bound of an Import-Package statement when 
> resolving should be acknowledged as a bug. 
> 
I beg to differ ...

As said, you can set the consumer/provider policy to your desired strategy.


Kind regards,

Peter Kriens

> On 18 Jun 2019, at 10:33, Michael Lipp  wrote:
> 
> 
>> 
>> I expect there are two things at play. First, OSGi specifies things as you 
>> indicate. An import of [1.2.3.qualifier,2) must not select anything lower 
>> than 1.2.3.qualifier. Second, bnd does have heuristics that do drop the 
>> qualifier and micro part in calculating the import ranges from the exports 
>> on the class path.
> Thanks for the clarification, I think this explains things.
> 
>> [...]
>> 
>> Conclusion, the spec is perfect but the implementations apply heuristics and 
>> may have bugs.
> The specification says (or defines, if you like): "micro - A change that does 
> not affect the API, for example, a typo in a comment or a bug fix in an 
> implementation." It explicitly invites the developer to indicate a bug fix by 
> incrementing the micro part. There's no hint or requirement that he should 
> increment the minor part to reflect a bug fix. I do not find your statement 
> "The definition of the micro version is that it should not make a difference 
> in runtime" to be supported by the spec or the Semantic Versioning 
> Whitepaper. Actually, this interpretation would restrict the usage of the 
> micro part to documentation changes because every bug fix changes the runtime 
> behavior. This is, after all, what it is intended to do.
> 
> Considering this, lowering a lower bound of an Import-Package statement when 
> resolving should be acknowledged as a bug. 
> 
>  - Michael
> 
> 
> 
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>>> On 17 Jun 2019, at 12:14, Michael Lipp via osgi-dev wrote:
>>> 
>>> Hi,
>>> 
>>> I have in my repository a bundle A-2.0.1 that exports packages with
>>> version 2.0.1 and a bundle A-2.0.3 that exports these packages with
>>> version 2.0.3. Version A-2.0.3 fixes a bug.
>>> 
>>> I have a bundle B that imports the packages from A with import
>>> statements "... version=[2.0.3,3)" because the bug fix is crucial for
>>> the proper working of B.
>>> 
>>> Clicking on "Resolve" in bndtools, I get a resolution with bundle
>>> A-2.0.1. I understand that this complies with the specification ("It is
>>> recommended to ignore the micro part of the version because systems tend
>>> to become very rigid if they require the latest bug fix to be deployed
>>> all the time.").
>>> 
>>> What I don't understand is the rationale. I don't see any drawbacks in
>>> deploying the latest bug fix. Of course, there's always the risk of
>>> introducing a new bug with a new version, even if it is supposed to only
>>> fix a bug in the previous version. But if you're afraid of this, you may
>>> also not allow imports with version ranges such as "[1.0,2)" (for
>>> consumers).
>>> 
>>> In my case, I now have to distribute bundle B with a release note to
>>> configure the resolution in such a way that only A 2.0.3 and up is used.
>>> Something that you would expect to happen automatically looking at the
>>> import statement. And if I want to make sure that the release note is
>>> not overlooked, the only way seems to be to check the version of "A" at
>>> run-time in the activation of "B". This is downright ugly.
>>> 
>>>  - Michael
>>> 
>>> 
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org 
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> 
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-18 Thread Peter Kriens via osgi-dev
As Tim indicates we need more information. 

I expect there are two things at play. First, OSGi specifies things as you 
indicate. An import of [1.2.3.qualifier,2) must not select anything lower than 
1.2.3.qualifier. Second, bnd does have heuristics that do drop the qualifier 
and micro part in calculating the import ranges from the exports on the class 
path.

When bnd builds a bundle it calculates the import range based on the package it 
was compiled against. Bnd finds the version looking at packageinfo, 
package-info.class, and the manifest. 

It then checks if that package is 'provided' or 'consumed' by the bundle, and 
from this information it calculates the range. The base version does indeed 
drop the qualifier and the micro version. I hope dropping the qualifier sounds 
logical to you? If you do not drop the qualifier you always need a fresh bundle 
with whatever you do. This is hell.

Since the micro version in a semantic version cannot make a difference it is 
logically safe to drop that one as well. The definition of the micro version is 
that it should not make a difference in runtime. Having a bug fix in a micro 
version is just plain wrong. Why have a spec when a tool cannot rely on its 
semantics?

At the time we were starting with this, the heuristic made things a lot 
easier to work with. Although the micro version is less volatile than the 
qualifier, our experience was that you ended up in a similar hell where the most 
minute change required everything to change. Especially since we did not have a 
resolver at that time. We've got almost 18 years of experience with this model 
and I think it has worked quite well so far. However, with the resolver going 
mainstream in the last few years maybe we need to revisit it. 

If you do not agree with the heuristic you can set the policy for the provider 
and consumer import range yourself. See 
https://bnd.bndtools.org/instructions/provider_policy.html 
 Personally I would 
not do this because you're then trying to fix the original error (a bug fix in a 
micro version) in the wrong place. Although this can be a quick fix, in my 
experience these hacks tend to exponentially increase the complexity of the 
build over time since you can no longer rely on the established rules, forcing 
you to make special cases everywhere over time.

In your case you or someone in your team did not apply the rules for semantic 
versioning. That happens, especially when you have to rely on external 
software. In that case you can manually apply the import range in the 
manifest/bnd.bnd file. This exact import range must be obeyed by the resolver. 

That said, if 2.0.1 and 2.0.3 are available then it would be nice if the 
resolver would prefer the highest possible version as a heuristic. In bnd we 
compile against the lowest version to keep the base as low as possible but in 
runtime we prefer the highest allowed version. I assumed that the bnd resolver 
had this behavior since we order the resources. Maybe there were other 
constraints that made 2.0.3 less attractive than 2.0.1. To know that we need to 
know more about the exact setup.

Conclusion, the spec is perfect but the implementations apply heuristics and 
may have bugs.

Kind regards,

Peter Kriens

> On 17 Jun 2019, at 12:14, Michael Lipp via osgi-dev  
> wrote:
> 
> Hi,
> 
> I have in my repository a bundle A-2.0.1 that exports packages with
> version 2.0.1 and a bundle A-2.0.3 that exports these packages with
> version 2.0.3. Version A-2.0.3 fixes a bug.
> 
> I have a bundle B that imports the packages from A with import
> statements "... version=[2.0.3,3)" because the bug fix is crucial for
> the proper working of B.
> 
> Clicking on "Resolve" in bndtools, I get a resolution with bundle
> A-2.0.1. I understand that this complies with the specification ("It is
> recommended to ignore the micro part of the version because systems tend
> to become very rigid if they require the latest bug fix to be deployed
> all the time.").
> 
> What I don't understand is the rationale. I don't see any drawbacks in
> deploying the latest bug fix. Of course, there's always the risk of
> introducing a new bug with a new version, even if it is supposed to only
> fix a bug in the previous version. But if you're afraid of this, you may
> also not allow imports with version ranges such as "[1.0,2)" (for
> consumers).
> 
> In my case, I now have to distribute bundle B with a release note to
> configure the resolution in such a way that only A 2.0.3 and up is used.
> Something that you would expect to happen automatically looking at the
> import statement. And if I want to make sure that the release note is
> not overlooked, the only way seems to be to check the version of "A" at
> run-time in the activation of "B". This is downright ugly.
> 
>  - Michael
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> 

Re: [osgi-dev] Migrating from OSGI to Microservices

2019-05-14 Thread Peter Kriens via osgi-dev
> On 14 May 2019, at 10:23, Christian Schneider via osgi-dev 
>  wrote:
> I agree that switching from local services to remote services is usually not 
> just a configuration change (even with RSA). Remote services have to be 
> designed with a completely different level of granularity and more 
> consideration to version compatibility. 
True. But since RSA I find that my service designs are much more DTO driven 
than before. Just make sure you do not exchange objects and you're usually fine.

That said, even if you don't, creating a facade is usually a lot easier than 
changing highly coupled domain code.

Kind regards,

Peter Kriens

> 
> Can you go into a bit more detail about how to create smarter services using 
> OSGi service dynamics? I think we might be able to extract some interesting 
> patterns there.
> 
> Christian
> 
> On Sun, 12 May 2019 at 11:07, Toni Menzel via osgi-dev wrote:
> It's not necessarily about one or the other. 
> What you mention is the k8s control plane that can take over your 
> traffic/scaling needs.
> However, the beauty of dynamic services on OSGi can be that you reflect the 
> "as-is" service topology (controlled and managed by k8s).
> OSGi services can understand and adapt to the availability (or 
> non-availability) dynamically. It gives you fine-grained control which leads 
> to smarter services.
> They are not black and white anymore (on or off) but can help re-route, 
> cache or otherwise mitigate short term changes in the service topology.
> 
> Note that this is complementary to remote services mentioned above. RSA help 
> you partition your (micro-)services easier. Though i think the story is not 
> as easy because.. you know.. the network is not reliable, etc. 
> 
> There is a lot more to this, and it's a great discussion you are having here. 
> And it's up to your needs if you want smarter services or can live with cheap 
> & dumb instances (that ultimately leads you towards FaaS) - which can be 
> totally fine.
> 
> Toni
> 
> Toni Menzel / rebaze consultancy
> Alleestrasse 25 / 30167 Hannover / Germany
> M +49 171 6520284
> www.rebaze.com 
> 
> Software Engineering Therapy
> rebaze GmbH, Zollstrasse 6, 39114 Hannover, Germany 
> Managing Director: Toni Menzel
> Phone: +49  171 6520284 / E-Mail: h...@rebaze.com 
> 
> Registration Court: District Court of Stendal
> Registration Number: HRB 17993 
> Sales Tax (VAT) Registration Number: DE282233792
> 
> 
> 
> 
> 
> On Sat, May 11, 2019 at 10:05 PM Jürgen Albert via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Depends on how you build your services. If you go the classic way, with a 
> normal HTTP API with e.g. docker compose or swarm, you would get load 
> balancing out of the box. They do a DNS round robin, the moment you scale a 
> service there. I believe Kubernetes does the same. The remote service Admin 
> does not really specify something like this. Here you can do the load 
> balancing on the application side. if you have e.g. 3 instances of the 
> billing container, you would find 3 service instances on your consumer side. 
> 
> On 11/05/2019 at 10:36, Andrei Dulvac wrote:
>> Hi Jürgen.
>> 
>> Ok, I missed the context of your previous mail - how modularity helps even 
>> if you have multiple containers. 
>> 
>> I get it... It's fast to develop and easy to switch boundaries. One other 
>> thing that I have on my mind: in your example, how easy would it be to scale 
>> the billing container? I assume it would be easy to route calls at least 
>> through a load balancer (I am not at all familiar with e. g. JaxRS) 
>> 
>> I might start to consider again the whole idea of microservices and using 
>> OSGi to modularize code inside them. 
>> 
>> 
>> On Fri, May 10, 2019, 22:26 Jürgen Albert > > wrote:
>> Hi Andrej,
>> 
>> I think you got us wrong. Nobody suggested to use OSGi or Http MIcroservices 
>> as you have called them. The Concepts behind both are modularity. The 
>> following Definition would fit both:
>> 
>> The smallest unit is a module that defines Service APIs as Interaction 
>> points. 
>> 
>> Thus OSGi is the best tool to create Http Microservices. I would prefer 
>> Distributed Microservices because you don't necessarily need HTTP for the 
>> Communication between your containers/processes. You can split your Bundles 
>> over as many Containers as you like and I will guarantee that is it easier 
>> to achieve and maintain then with anything else.
>> 
>> A short and every much simplyfied example to show what Todor was describing:
>> 
>> We have a small application that provides a REST API for an shop order 
>> process. We have written the following bundles:
>> 
>> rest.api -> contains the REST Resources; uses the order.api interfaces
>> order.api -> contains the order service interfaces
>> order.impl -> implementation of the order service, needs the billing.api 

Re: [osgi-dev] Launchpad

2019-03-15 Thread Peter Kriens via osgi-dev
There is something else in the pipeline as well that might help: 
biz.aQute.bnd.runtime.snapshot. It is not being built yet but you can build it 
yourself in the bnd workspace. If you place that bundle in your runtime it will 
make a snapshot in the current directory just before the framework stops. You 
can then drag this file from the file system and drop it on 
https://bnd.bndtools.org/snapshot.html

It is still under development but it provides a wealth of information. There 
are also gogo commands to make intermediate snapshots. I find this an 
invaluable tool for debugging OSGi issues. 

Glad you like it! Looking forward to your PR's! :-)

Kind regards,

Peter Kriens



> On 15 Mar 2019, at 08:43, Bram Pouwelse  wrote:
> 
> Hi Peter, 
> 
> I actually started to play with this cool new toy yesterday and I like it.  
> It gives a lot more control over the framework(s) launched than the test 
> support that has always been there. Not that long ago I was looking for 
> something like this but had to hand craft some code to get more than one 
> framework in a test but with Launchpad I can just use a bndrun again to 
> bootstrap multiple framework instances. So it was a perfect fit for my tests 
> and I can get rid of some framework boostrap boilerplate :).   
> 
> I did run into the "Version Sensitivity" issue when trying to obtain 
> ConfigurationAdmin but that's well explained on the page you just shared, if 
> only I'd read that page yesterday that would've saved me some time  
> 
> So this is definitely a useful tool, thanks Peter!
> 
> Kind regards, 
> Bram
> 
> 
> Op vr 15 mrt. 2019 om 08:28 schreef Peter Kriens via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>>:
> I've recently been working on a bndtools testing framework based on some Pax 
> Exam envy. The result is _Launchpad_. It is a builder for a framework setup 
> using all the information from a bnd workspace and its projects. For each 
> test method you can then execute your tests without having to generate a test 
> bundle. Due to some deep class loading magic, the class space of the test 
> code (for example JUnit) is properly exported via the framework. Although 
> there are some pitfalls, sharing the classes this way works quite well.
> 
> Launchpad also contains an injector that you can use to inject services and 
> some key OSGi objects like BundleContext in your test object. A large number 
> of utility methods on Launchpad provide conveniences for testing. For 
> example, you can also hide services with one call.
> 
> Launchpad is agnostic of a testing framework. It has been tested with JUnit 
> but TestNG or other frameworks should be no problem.
> 
> This is all documented: 
> https://bnd.bndtools.org/chapters/315-launchpad-testing.html 
> <https://bnd.bndtools.org/chapters/315-launchpad-testing.html>
> 
> This is an ambitious test environment. There is now experience at one of my 
> customers but it clearly needs to go through a learning period. Almost all of what 
> is documented is in 4.2.0 which is just released, if you want the absolute 
> latest get the 4.3.0 snapshot. 
> 
> Launchpad is developed for the bnd Workspace model. I think it can be adapted 
> to the Maven model with the bndrun files since this mimics a Workspace 
> beneath the covers. However, that might require some work and surely some 
> documentation. Volunteers welcome.
> 
> Let me know if this is useful and file issues on 
> https://github.com/bndtools/bnd/issues 
> <https://github.com/bndtools/bnd/issues> when there are issues or really good 
> ideas.
> 
> Kind regards,
> 
> Peter Kriens
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> <https://mail.osgi.org/mailman/listinfo/osgi-dev>

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

[osgi-dev] Launchpad

2019-03-15 Thread Peter Kriens via osgi-dev
I've recently been working on a bndtools testing framework based on some Pax 
Exam envy. The result is _Launchpad_. It is a builder for a framework setup 
using all the information from a bnd workspace and its projects. For each test 
method you can then execute your tests without having to generate a test 
bundle. Due to some deep class loading magic, the class space of the test code 
(for example JUnit) is properly exported via the framework. Although there are 
some pitfalls, sharing the classes this way works quite well.

Launchpad also contains an injector that you can use to inject services and 
some key OSGi objects like BundleContext in your test object. A large number of 
utility methods on Launchpad provide conveniences for testing. For example, you 
can also hide services with one call.

Launchpad is agnostic of a testing framework. It has been tested with JUnit but 
TestNG or other frameworks should be no problem.

This is all documented: 
https://bnd.bndtools.org/chapters/315-launchpad-testing.html

This is an ambitious test environment. There is now experience at one of my 
customers but it clearly needs to go through a learning period. Almost all of what 
is documented is in 4.2.0 which is just released, if you want the absolute 
latest get the 4.3.0 snapshot. 

Launchpad is developed for the bnd Workspace model. I think it can be adapted 
to the Maven model with the bndrun files since this mimics a Workspace beneath 
the covers. However, that might require some work and surely some 
documentation. Volunteers welcome.

Let me know if this is useful and file issues on 
https://github.com/bndtools/bnd/issues when there are issues or really good 
ideas.

Kind regards,

Peter Kriens
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


[osgi-dev] Wrapping: Do not use the same bsn

2019-03-04 Thread Peter Kriens via osgi-dev
I just wasted an hour trying to figure out why my test was complaining about 
missing the `org.osgi.service.component` package. The Apache Felix SCR bundle I 
was using was clearly exporting and importing it. Since I was testing launchpad 
I blamed myself so I dug deep into things but could not figure it out.

Until I realized I had SCR in my repository twice. 



Reasonable people would assume that 2.0.10.v20170501-2007 is the same bundle as 
2.0.10 so I only looked at the first one, the original Apache Felix. Until I 
finally became desperate enough and looked at the Eclipse version, which indeed 
turned out to be a variation.

Apache Felix original:

Export-Package
  org.apache.felix.scr.component {version=1.1.0, imported-as=[1.1,1.2)}
  org.apache.felix.scr.info  {version=1.0.0, imported-as=[1.0,1.1)}
  org.osgi.service.component {version=1.3, imported-as=[1.3,1.4)}
  org.osgi.service.component.runtime {version=1.3, imported-as=[1.3,1.4)}
  org.osgi.service.component.runtime.dto {version=1.3, imported-as=[1.3,1.4)}
  org.osgi.util.function {version=1.0, imported-as=[1.0,2)}
  org.osgi.util.promise  {version=1.0, imported-as=[1.0,2)}

The Eclipse variation:

Export-Package
  org.apache.felix.scr.component {version=1.1.0, imported-as=[1.1,1.2)}
  org.apache.felix.scr.info  {version=1.0.0, imported-as=[1.0,1.1)}

This is just wrong. If you wrap a bundle and change its signature you **must** 
give it a new name because you fundamentally modify the public API. You do not 
own that other person's namespace! Just changing the micro/patch part of the 
version is just wrong. These kinds of self-inflicted wounds cause, imho, the most 
pain in software development.

That all said, it does make another case for the resolver. If I'd used that one 
I'd have seen it immediately.

Kind regards,

Peter Kriens


___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Removing queued events in Push Streams

2019-02-27 Thread Peter Kriens via osgi-dev
I probably would use a (static?) priority set with a weak reference to the 
event object. (Or some key that uniquely identifies that object). The processor 
can then consult this set to see if the event has higher priority. A weak 
reference is needed to make sure that no events remain in this priority set 
without locking.  

PK

> On 27 Feb 2019, at 12:49, Alain Picard via osgi-dev  
> wrote:
> 
> Anyone has any insight here?
> Alain
> 
> On Tue, Feb 5, 2019 at 1:28 PM Alain Picard  > wrote:
> Hi,
> 
> We have cases where we need to process events with different priorities, and 
> such priority can change after the initial event having been queued, but not 
> yet processed.
> 
> For example, when there is an event that some content has changed, we 
> subscribe to this event and based on some conditions this might trigger the 
> need to update some diagrams in our case. This is considered a "background 
> priority" event, since we simply want to get it updated when we have some 
> cycles so as not being stuck doing it whenever someone requests such diagram 
> to view/edit it.
> 
> We also have events when someone for example requests to open such a diagram, 
> where we need to determine if it is up to date, and if it needs to be 
> updated, to get this pushed and processed as quickly as possible, as the user 
> is waiting.
> 
> So far we have setup 2 different push streams to support this. 
> 
> The issue here is that while this is high-priority event comes in, we need to 
> make sure that we can cancel any similar queued events from the low priority 
> stream, and possibly let it proceed if it is already being processed.
> 
> What is the best approach here ? Are we on the right track to start with?
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Service component resolution time

2018-10-23 Thread Peter Kriens via osgi-dev
If your code spends its time in the resolver it generally means that the 
resolver has too much choice. This can happen, for example, when the same package is 
exported under different versions.

Certain configurations can turn pathological since resolution is an NP-complete 
problem. In my experience taking a good look at the package 
imports/exports and the service requirements tends to show that the resolver 
has too many choices to make.

Kind regards,

Peter Kriens





> On 22 Oct 2018, at 19:43, Alain Picard via osgi-dev  
> wrote:
> 
> We are experiencing some long startup time in our reafactored application 
> that is now heavily using SCR.
> 
> We have 125 projects with over 1200 Service Components and it takes about 2 
> minutes to get any output in the console. Some quick analysis shows that its 
> running the Felix ResolverImpl with something like 5-6 thread (that is with 
> Equinox).
> 
> I looked in that class but there doesn't seem to be any tracing code, only 
> what appears to be some debugging code.
> 
> Is this expected? If not what have others used to resolve this issue?
> 
> Thanks
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Is there already a standard for aggregating repositories?

2018-10-21 Thread Peter Kriens via osgi-dev
The bnd code contains an AggregateRepository class that aggregates a number of 
repositories defined by the OSGi Repository standard. It is actually used in 
the bnd resolver API.

Kind regards,

Peter Kriens



> On 21 Oct 2018, at 12:26, Mark Raynsford via osgi-dev 
>  wrote:
> 
> Hello!
> 
> As the subject says: Is there already a standard for aggregating
> repositories?
> 
> I'm putting together a small proof of concept API and application that
> allows a user to browse a set of bundle repositories [0], select bundles
> from those repositories, get all of the bundle dependencies resolved
> automatically, download all of the bundles, and finally start up an
> OSGi container with all of the selected and downloaded bundles.
> 
> I have all of that working (using the Bnd resolver), but the main pain
> point is the need to initially specify a (possibly rather large) set of
> repositories. I could very easily come up with an API or an XML schema
> that allows a server to deliver a set of repository URLs, but before I
> do that: Is there already a standard for this?
> 
> [0] A Compendium chapter 132 repository.
> 
> -- 
> Mark Raynsford | http://www.io7m.com
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Logger at startup

2018-08-28 Thread Peter Kriens via osgi-dev
>> If you buffer you always have temporal inconsistency, no matter what.
> That I do not understand?
> I don't see what is hard to understand. The log record arrives at the 
> appender long after it was reported. Which you may not care too much about. 
> But sometimes it's a pain in the backside.  
But there is no guarantee when it arrives in the appender anyway? And as long 
as it is properly serialized what would the problem be?


>> The solution I like is DON'T buffer. 
> Then you have ordering problems.
> There's no ordering issue when logging is setup with the framework.
Nope, because you took care of it. It is still there lurking to bite you though 
:-)

>  The only way NOT to buffer is make sure the log service starts with an 
> appender immediately. That's what the "immediate" logging approach provides. 
> It puts the log service setup and configuration immediately at the framework 
> level so that no buffering is used or required.
> That is a good solution but it makes logging extremely 'special'. Agree this 
> is a very foundational service but losing the possibility to have appenders 
> in bundles is a pretty big thing.
> Logging IS special. We all know that and there's no use pretending, 
> otherwise, we WOULD have solved this long ago.
Yes, and I am special too ... :-) And so is every developer's bundle I've seen 
in my life ...

>> Logging is an infrastructure detail unlike any other. It really must be 
>> bootstrapped with the runtime as early as possible. It's not ideal to handle 
>> it at the module layer, so I propose not to (thought it still integrates 
>> nicely with any bundle based logging APIs as demonstrated in the Felix 
>> logback integration tests).
> The problem with this approach is that it is also true for configuration 
> admin, and DS, and event admin, etc. etc. They are all infrastructure and 
> they all have ordering issues. Going down your route is imho a slippery 
> slope. Before you know it you're back to a completely static model. And maybe 
> that is not bad, I just prefer a solution where I can debug all day on the 
> same framework that is never restarted. I also dislike hybrids since they 
> tend to become quite complex to handle on other levels.
> There's nothing in the immediate model that prevents from debugging all day 
> without restarts, that's just hyperbole. Logback supports configuration file 
> updates (if you like), so you can tweak and flex those loggers/appenders any 
> way you like at runtime (if you like).
Yes, of course you can, don't expect anything else. But each of those actions 
requires _completely_ different handling than the unified OSGi solution to 
those problems ...

Don't get me wrong, I understand that you have to live in the existing mess of 
Java and life is tough there. I probably would do the same in your shoes. 
However, I think it would be a pity that out of pragmatism we forget that an 
insane amount of problems are caused by our strenuous desire for backward 
compatibility.

Kind regards,

Peter Kriens


> 
> Sincerely,
> - Ray
>  
> 
> Kind regards,
> 
>   Peter Kriens
>  
> 
>> 
>> Sincerely,
>> - Ray
>>  
>> 
>> This issue, the problem of startup dependencies and configuration… Maybe 
>> there is something missing in the spec in terms of some kind of "boot 
>> service” that would handle these “common” problems. If the problems are so 
>> common, then maybe it is a sign of a gap in the specs… Just a thought.
>> 
>> 
>> Anyway, thanks!
>> =David
>> 
>> 
>> 
>>> On Aug 27, 2018, at 22:19, Raymond Auge >> > wrote:
>>> 
>>> There's setup details in the integration tests [1]
>>> 
>>> HTH
>>> - Ray
>>> 
>>> [1] https://github.com/apache/felix/tree/trunk/logback/itests 
>>> 
>>> 
>>> On Mon, Aug 27, 2018 at 9:15 AM, Raymond Auge wrote:
>>> My personal favourite is the Apache Felix Logback [1] integration which 
>>> supports immediate logging when following the correct steps. I feel it's the 
>>> best logging solution available.
>>> 
>>> There are a couple of prerequisites as outlined in the documentation. But 
>>> it's very simple to achieve your goal (NO BUFFERING OR MISSED LOG RECORDS)!
>>> 
>>> [1] 
>>> http://felix.apache.org/documentation/subprojects/apache-felix-logback.html 
>>> 
>>> 
>>> - Ray
>>> 
>>> On Mon, Aug 27, 2018 at 7:50 AM, BJ Hargrave via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> Equinox has the LogService implementation built into the framework, so it 
>>> starts logging very early.
>>>  
>>> In the alternate, for framework related information, you can write your own 
>>> launcher and it can add listeners for the framework event types.
>>> --
>>> 
>>> BJ Hargrave
>>> Senior Technical Staff Member, IBM // office: +1 386 848 1781
>>> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 

Re: [osgi-dev] Logger at startup

2018-08-28 Thread Peter Kriens via osgi-dev
> On 27 Aug 2018, at 22:40, Raymond Auge via osgi-dev  
> wrote:
> On Mon, Aug 27, 2018 at 4:19 PM, David Leangen via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Hi Peter and Ray,
> Thank you very much for the suggestions!
> I’ll take a look at the code. It is my hope that I don’t need to pull in any 
> additional dependencies. Maybe I’ll get some hints in the code as to how not 
> to lose any logs.
> If I am reading the Felix Logback page correctly, It’s a bit odd that the 
> OSGi log service, when started up normally, misses some information during 
> startup. In my mind, that seems like a problem with the spec.
> It's not a problem with the spec. The problem is with the buffering which the 
> spec cannot avoid.
> You can't accurately predict how much buffer is needed
> If you buffer too much you _may_ have memory issues
Yes, but if you limit the buffering to the window between framework start and the 
log service being up and running, you can make a reasonable guess how long the 
buffer needs to be. Only when out-of-the-ordinary stuff happens do you need a large 
buffer. However, then almost invariably the early records contain sufficient 
information to diagnose wtf happened. Second, once the log service is up and 
running the buffer can be flushed.

> If you buffer too little you _may_ lose records
> If you buffer you always have temporal inconsistency, no matter what.
That I do not understand?


> The solution I like is DON'T buffer. 
Then you have ordering problems.

> The only way NOT to buffer is make sure the log service starts with an 
> appender immediately. That's what the "immediate" logging approach provides. 
> It puts the log service setup and configuration immediately at the framework 
> level so that no buffering is used or required.
That is a good solution but it makes logging extremely 'special'. Agree this is 
a very foundational service but losing the possibility to have appenders in 
bundles is a pretty big thing.

> Logging is an infrastructure detail unlike any other. It really must be 
> bootstrapped with the runtime as early as possible. It's not ideal to handle 
> it at the module layer, so I propose not to (thought it still integrates 
> nicely with any bundle based logging APIs as demonstrated in the Felix 
> logback integration tests).
The problem with this approach is that it is also true for configuration admin, 
and DS, and event admin, etc. etc. They are all infrastructure and they all 
have ordering issues. Going down your route is imho a slippery slope. Before 
you know it you're back to a completely static model. And maybe that is not bad, I 
just prefer a solution where I can debug all day on the same framework that is 
never restarted. I also dislike hybrids since they tend to become quite complex 
to handle on other levels.

Kind regards,

Peter Kriens
 

> 
> Sincerely,
> - Ray
>  
> 
> This issue, the problem of startup dependencies and configuration… Maybe 
> there is something missing in the spec in terms of some kind of "boot 
> service” that would handle these “common” problems. If the problems are so 
> common, then maybe it is a sign of a gap in the specs… Just a thought.
> 
> 
> Anyway, thanks!
> =David
> 
> 
> 
>> On Aug 27, 2018, at 22:19, Raymond Auge > > wrote:
>> 
>> There's setup details in the integration tests [1]
>> 
>> HTH
>> - Ray
>> 
>> [1] https://github.com/apache/felix/tree/trunk/logback/itests 
>> 
>> 
>> On Mon, Aug 27, 2018 at 9:15 AM, Raymond Auge wrote:
>> My personal favourite is the Apache Felix Logback [1] integration which 
>> supports immediate logging when following the correct steps. I feel it's the 
>> best logging solution available.
>> 
>> There are a couple of prerequisites as outlined in the documentation. But 
>> it's very simple to achieve your goal (NO BUFFERING OR MISSED LOG RECORDS)!
>> 
>> [1] 
>> http://felix.apache.org/documentation/subprojects/apache-felix-logback.html 
>> 
>> 
>> - Ray
>> 
>> On Mon, Aug 27, 2018 at 7:50 AM, BJ Hargrave via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> Equinox has the LogService implementation built into the framework, so it 
>> starts logging very early.
>>  
>> In the alternate, for framework related information, you can write your own 
>> launcher and it can add listeners for the framework event types.
>> --
>> 
>> BJ Hargrave
>> Senior Technical Staff Member, IBM // office: +1 386 848 1781
>> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
>> hargr...@us.ibm.com 
>>  
>>  
>> - Original message -
>> From: David Leangen via osgi-dev > >
>> Sent by: osgi-dev-boun...@mail.osgi.org 
>> 
>> To: osgi-dev@mail.osgi.org 
>> Cc:
>> Subject: 

Re: [osgi-dev] Logger at startup

2018-08-27 Thread Peter Kriens via osgi-dev
The v2Archive contains a logger that might fit your needs. It exports slf4j.api 
and has a custom implementation that records everything in a static queue (oh 
horror!) until the bundle is actually started. I.e. other bundles use the 
static slf4j API and the first caller will create the implementation. Since this 
works at the class-loading level you should get output from every slf4j logging 
bundle. 
(You won't get output from the Framework itself.)


https://github.com/osgi/v2archive.osgi.enroute/blob/master/osgi.enroute.logger.simple.provider


There are Gogo commands but I recall they had some issue.

Kind regards,

Peter Kriens



> On 26 Aug 2018, at 21:05, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> I’m sure that this question has been asked before, but I did not successfully 
> find an answer anywhere. It applies to both R6 and R7 logging.
> 
> I would like to set up diagnostics so I can figure out what is happening 
> during system startup. however, by the time the logger starts, I have already 
> missed most of the messages that I needed to receive and there is no record 
> of the things I want to see. Another oddity is that even after the logger has 
> started, some messages are not getting logged. I can only assume that there 
> is some concurrency/dynamics issue at play.
> 
> In any case, other than using start levels, is there a way of ensuring that 
> the LogService (or Logger) is available when I need it?
> 
> 
> Thanks!
> =David
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Docker configuration via environment variables

2018-08-21 Thread Peter Kriens via osgi-dev
I think you get the point …

A simple macro processor is ~10k and would go a long way to address the vast 
majority of the common requirements. And there is more than 6 years of experience 
with the model already :-)

Kind regards,

Peter Kriens

> On 21 Aug 2018, at 12:09, Christian Schneider  wrote:
> 
> I agree this easily can cause ordering issues.
> In the best case these can cause components to be started with wrong 
> configuration first and with the correct one later. This can at least cause 
> some error messages but might also cause more serious issues.
> 
> For a cloud deployment were config is mainly static I hope we can provide a 
> solution that avoids restarts in most cases. Env variable substitution could 
> be a core feature of the configurator as it is very common.
> 
> Christian
> 
> 
> On Tue, 21 Aug 2018 at 10:22, Peter Kriens via osgi-dev wrote:
>> On 21 Aug 2018, at 10:11, Tim Ward via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>> Just another vote in favour of the ConfigurationPlugin model - you can use 
>> this to post-process configurations wherever they come from (meaning it 
>> isn’t tied to the Configurer or Configurator).
>> A configuration plugin that does this sort of work is easy to write, and if 
>> using DS could be done in a lot less than 100 LoC. It can also look at 
>> things other than environment variables if you want, and if/else logic is 
>> much easier to write/maintain in Java code than it is in macros in a JSON 
>> file!
> 
> It is only easier after you solved the ordering problem of the 
> 
> * Configurator, 
> * Configuration bundle, and 
> * Configuration plugin bundle … 
> 
> It also adds quite a bit of complexity by:
> 
> * Separating the rules from the actual configuration, and 
> * Adding extra bundles.
> 
> This additional complexity is only worth it if you can reuse the rules in 
> many different places. Hmm. Maybe a configuration plugin with a macro 
> processor? :-)
> 
>   Peter Kriens
> 
>> 
>> Best Regards,
>> 
>> Tim
>> 
>>> On 20 Aug 2018, at 17:08, Mark Hoffmann via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> Hi all,
>>> 
>>> Carsten Ziegeler pointed us to the Configuration Plugin Services, that are 
>>> part of the ConfigurationAdmin specification. Together with the 
>>> Configurator specification, it could be possible to do that substitution in 
>>> such an plugin.
>>> Regards,
>>> 
>>> Mark
>>> 
>>> On 20.08.2018 at 17:56, Christian Schneider via osgi-dev wrote:
>>>> I think this would be a good extension to the configurator to also allow 
>>>> env variable replacement.
>>>> Actually I hoped it would already do this...
>>>> WDYT?
>>>> 
>>>> Christian
>>>> 
>>>> On Mon, 20 Aug 2018 at 17:05, Peter Kriens via osgi-dev wrote:
>>>> Are you using v2Archive enRoute or the new one?
>>>> 
>>>> The v2Archive OSGi enRoute has the simple Configurer (the predecessor of 
>>>> the OSGi R7 Configurator but with, according to some, a better name :-). 
>>>> It runs things through the macro processor; you could therefore use 
>>>> environment variables to make the difference. 
>>>> 
>>>> E.g. ${env;XUZ} in the json files. Since it also supports ${if} you can 
>>>> eat your heart out! You can set environment variables in docker with -e in 
>>>> the command line when you start the container. You can also use @{ instead 
>>>> of ${ to not run afoul of the bnd processing that can happen at build 
>>>> time. I.e. the Configurer replaces all @{…} with ${…}.
>>>> 
>>>> If you are using the new R7 Configurator then you are on your own ...
>>>> 
>>>> Kind regards,
>>>> 
>>>> Peter Kriens
>>>> 
>>>> 
>>>> 
>>>> 
>>>> > On 18 Aug 2018, at 18:51, Randy Leonard via osgi-dev 
>>>> > mailto:osgi-dev@mail.osgi.org>> wrote:
>>>> > 
>>>> > To all:
>>>> > 
>>>> > We are at the point where we are deploying our OSGI enRoute applications 
>>>> > via Docker.
>>>> > 
>>>> > - A key sticking point is the syntax for embedding environment variables 
>>>> > within our configuration.json files.  

Re: [osgi-dev] Docker configuration via environment variables

2018-08-21 Thread Peter Kriens via osgi-dev
> On 21 Aug 2018, at 10:11, Tim Ward via osgi-dev  
> wrote:
> Just another vote in favour of the ConfigurationPlugin model - you can use 
> this to post-process configurations wherever they come from (meaning it isn’t 
> tied to the Configurer or Configurator).
> A configuration plugin that does this sort of work is easy to write, and if 
> using DS could be done in a lot less than 100 LoC. It can also look at things 
> other than environment variables if you want, and if/else logic is much 
> easier to write/maintain in Java code than it is in macros in a JSON file!

It is only easier after you solved the ordering problem of the 

* Configurator, 
* Configuration bundle, and 
* Configuration plugin bundle … 

It also adds quite a bit of complexity by:

* Separating the rules from the actual configuration, and 
* Adding extra bundles.

This additional complexity is only worth it if you can reuse the rules in many 
different places. Hmm. Maybe a configuration plugin with a macro processor? :-)

Peter Kriens

> 
> Best Regards,
> 
> Tim
> 
>> On 20 Aug 2018, at 17:08, Mark Hoffmann via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>> 
>> Hi all,
>> 
>> Carsten Ziegeler pointed us to the Configuration Plugin Services, that are 
>> part of the ConfigurationAdmin specification. Together with the Configurator 
>> specification, it could be possible to do that substitution in such an 
>> plugin.
>> Regards,
>> 
>> Mark
>> 
>> On 20.08.2018 at 17:56, Christian Schneider via osgi-dev wrote:
>>> I think this would be a good extension to the configurator to also allow 
>>> env variable replacement.
>>> Actually I hoped it would already do this...
>>> WDYT?
>>> 
>>> Christian
>>> 
>>> On Mon, 20 Aug 2018 at 17:05, Peter Kriens via osgi-dev wrote:
>>> Are you using v2Archive enRoute or the new one?
>>> 
>>> The v2Archive OSGi enRoute has the simple Configurer (the predecessor of 
>>> the OSGi R7 Configurator but with, according to some, a better name :-). It 
>>> runs things through the macro processor you could therefore use environment 
>>> variables to make the difference. 
>>> 
>>> E.g. ${env;XUZ} in the json files. Since it also supports ${if} you can eat 
>>> your heart out! You can set environment variables in docker with -e in the 
>>> command line when you start the container. You can also use @{ instead of 
>>> ${ to not run afoul of the bnd processing that can happen at build time. 
>>> I.e. the Configurer replaces all @{…} with ${…}.
>>> 
>>> If you are using the new R7 Configurator then you are on your own ...
>>> 
>>> Kind regards,
>>> 
>>> Peter Kriens
>>> 
>>> 
>>> 
>>> 
>>> > On 18 Aug 2018, at 18:51, Randy Leonard via osgi-dev 
>>> > mailto:osgi-dev@mail.osgi.org>> wrote:
>>> > 
>>> > To all:
>>> > 
>>> > We are at the point where we are deploying our OSGI enRoute applications 
>>> > via Docker.
>>> > 
>>> > - A key sticking point is the syntax for embedding environment variables 
>>> > within our configuration.json files.  
>>> > - For example, a developer would set a hostName to ‘localhost’ for 
>>> > development, but this same environment variable would be different for 
>>> > QA, UAT, and Production environments
>>> > - I presume this is the best way of allowing the same container to be 
>>> > deployed in different environments without modification?
>>> > - Suggestions and/or examples are appreciated.
>>> > 
>>> > 
>>> > 
>>> > Thanks,
>>> > Randy Leonard
>>> > 
>>> > ___
>>> > OSGi Developer Mail List
>>> > osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>>> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> > <https://mail.osgi.org/mailman/listinfo/osgi-dev>
>>> 
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
>>> 
>>> -- 
>>> -- 
>>> Christian Schneider
>>> http://www.liq

Re: [osgi-dev] Docker configuration via environment variables

2018-08-20 Thread Peter Kriens via osgi-dev
Are you using v2Archive enRoute or the new one?

The v2Archive OSGi enRoute has the simple Configurer (the predecessor of the 
OSGi R7 Configurator but with, according to some, a better name :-). It runs 
things through the macro processor; you could therefore use environment 
variables to make the difference. 

E.g. ${env;XUZ} in the json files. Since it also supports ${if} you can eat 
your heart out! You can set environment variables in docker with -e in the 
command line when you start the container. You can also use @{ instead of ${ to 
not run afoul of the bnd processing that can happen at build time. I.e. the 
Configurer replaces all @{…} with ${…}.

If you are using the new R7 Configurator then you are on your own ...

Kind regards,

Peter Kriens




> On 18 Aug 2018, at 18:51, Randy Leonard via osgi-dev  
> wrote:
> 
> To all:
> 
> We are at the point where we are deploying our OSGI enRoute applications via 
> Docker.
> 
> - A key sticking point is the syntax for embedding environment variables 
> within our configuration.json files.  
> - For example, a developer would set a hostName to ‘localhost’ for 
> development, but this same environment variable would be different for QA, 
> UAT, and Production environments
> - I presume this is the best way of allowing the same container to be 
> deployed in different environments without modification?
> - Suggestions and/or examples are appreciated.
> 
> 
> 
> Thanks,
> Randy Leonard
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Question about consistency and visibility

2018-08-14 Thread Peter Kriens via osgi-dev
Your understanding is correct with respect to the _service_. DS guarantees that 
activate is called before the service is registered and thus available to 
others. The service is unregistered before your deactivate method is called. Nulling 
fields is generally unnecessary unless you want to ensure an NPE if an object 
uses your service after unregistering.

Kind regards,

Peter Kriens

> On 14 Aug 2018, at 05:20, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> In a concurrent system, if a class is immutable, the problem is simplified 
> and the class can be used without fear by multiple threads because (i) it’s 
> state does not change, and (2) it’s state is guaranteed to be visible;
> 
> Example:
> 
> /**
>  * The class is immutable because the fields are both immutable types
>  * and are "private + final”. The fields are guaranteed to be visible
>  * to all threads after construction. In other words, there is a
>  * “happens-before” constraint on the fields.
>  */
> public class SimpleImmutableClass {
> private final String value1;
> private final int value2;
> 
> public SimpleImmutableClass( String aString, int anInt ) {
> value1 = aString;
> value2 = anInt;
> }
> 
> public String getValue1() {
> return value1;
> }
> 
> public int getvalue2() {
> return value2;
> }
> }
> 
> My understanding is that DS will provide the same happens-before constraint 
> to the fields in the following service, so presuming that there is no method 
> exposed to change the field values, the service is effectively immutable and 
> can be used without fear in a concurrent context. So in the following, value1 
> and value2 are guaranteed to be visible to all threads thanks to the 
> happens-before constraint placed on the fields during activation:
> 
> /**
>  * The LogService is basically just added to show that the component is used
>  * in a static way, as only a static component can be effectively immutable.
>  */
> @Component
> public class EffectivelyImmutableService {
> private String value1;
> private int value2;
> 
> @Reference private LogService logger;
> 
> @Activate
> void activate( Map properties ) {
> value1 = (String)properties.get( "value1" );
> value2 = (int)properties.get( "value2" );
> }
> 
> /**
>  * Hmm, but if an instance is never reused, then wouldn't it be 
> completely
>  * unnecessary to deactivate()??
>  */
> void deactivate() {
> value1 = null;
> value2 = -1;
> }
> 
> public String getValue1() {
> logger.log( LogService.LOG_INFO, String.format( "Value of String is 
> %s", value1 ) );
> return value1;
> }
> 
> public int getvalue2() {
> logger.log( LogService.LOG_INFO, String.format( "Value of int is %s", 
> value2 ) );
> return value2;
> }
> }
> 
> 
> Is somebody able to confirm my understanding?
> 
> Thanks!!
> =David
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Felix resolver takes forever to resolve dependencies

2018-08-02 Thread Peter Kriens via osgi-dev
This is a Felix issue :-( 

* Did you find the -runbundles with the Bndtools resolver? How long did that 
take?
* Did you upgrade to the latest Felix?
* Did you try an older version?
* Did you try Equinox?

You might want to post it on the Apache Felix list. They probably have some 
secret system properties you can set to get more diagnostic info.

Sorry I can't help you more … 

Kind regards,

Peter Kriens




> On 2 Aug 2018, at 16:41, Nhut Thai Le via osgi-dev  
> wrote:
> 
> Hello,
> 
> We are trying to integrate Keycloak admin-client and zk into our web app 
> running on felix using bndtool.
> 
> If we use our web app with Keycloak admin-client alone, it's fine.
> 
> If we use our web app with zk alone, it's also fine.
> 
> When adding both Keycloak admin-client and zk, we are not able to start the 
> container from bndrun. Looks like the framework resolver get stuck in 
> resolving dependencies. Here is the stack of the main thread:
> Thread [main] (Suspended) 
>   waiting for: AtomicInteger  (id=29) 
>   Object.wait(long) line: not available [native method]   
>   AtomicInteger(Object).wait() line: 502  
>   ResolverImpl$EnhancedExecutor.await() line: 2523
>   ResolverImpl.calculatePackageSpaces(ResolveSession, Candidates, 
> Collection) line: 1217
>   ResolverImpl.checkConsistency(ResolveSession, Candidates, 
> Map) line: 572  
>   ResolverImpl.findValidCandidates(ResolveSession, 
> Map) line: 532   
>   ResolverImpl.doResolve(ResolveSession) line: 395
>   ResolverImpl.resolve(ResolveContext, Executor) line: 377
>   ResolverImpl.resolve(ResolveContext) line: 331  
>   StatefulResolver.resolve(Set, Set) 
> line: 478
>   Felix.resolveBundles(Collection) line: 4108 
>   FrameworkWiringImpl.resolveBundles(Collection) line: 133
>   PackageAdminImpl.resolveBundles(Bundle[]) line: 267 
>   Launcher.startBundles(List) line: 489   
>   Launcher.activate() line: 423   
>   Launcher.run(String[]) line: 301
>   Launcher.main(String[]) line: 147   
> 
> A small note here is that Keycloak admin-client uses resteasy which is not an 
> osgi bundle so I wrap both admin-client, resteasy and its dependencies in an 
> osgi bundle and enable the Service Loader Mediator header so that the service 
> files provided in the resteasy dependencies can be loaded.
> 
> For debugging I downloaded the source of org.apache.felix.framework but it 
> has no resolver/RersolverImpl class
> 
> I'm not sure what to do to debug this.
> 
> Thai
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Life-cycle race condition

2018-08-02 Thread Peter Kriens via osgi-dev
Yup, it got a bit windy ;-) I put it on my website as a blog since I've no good 
other place at the moment.

http://aqute.biz/2018/08/02/the-service-window.html 


Let me know if things are unclear. Kind regards,

Peter Kriens


> On 2 Aug 2018, at 11:58, David Leangen via osgi-dev  
> wrote:
> 
> 
> Wow! That is a lot to digest.
> 
> I’ll need to get back to you in a few days/weeks/months/years. :-D
> 
> Thanks so much!!
> 
> 
> Cheers,
> =David
> 
> 
> 
> 
>> On Aug 2, 2018, at 18:38, Peter Kriens > > wrote:
>> 
>> 
>> ## Keep Passing the Open Windows
>> 
>> You did read the classic [v2Archive OSGi enRoute App note][5] about this 
>> topic? It has been archived by the OSGi to [v2Archive OSGi enRoute web 
>> site][3]. It handles a lot of similar cases. There is an accompanying 
>> workspace [v2Archive OSGi enRoute osgi.enroute.examples.concurrency 
>> ][7]
>> 
>> Anyway, I am not sure if you want to solve this pragmatic or pure?
>> 
>> ## Pragmatic 
>> 
>> Pragmatic means there is a tiny chance you hit the window where you check if 
>> the MyService is unregistered and then use it. If you're really unlucky you 
>> just hit the unregistration after you checked it but before you can use it. 
>> It works when the unregistration of MyService is rare and the work is long. 
>> Yes, it can fail but so can anything so you should be prepared for it. 
>> 
>> Pragmatic works best as follows:
>> 
>>@Component
>>public class MyClass extends Thread {   
>>   @Reference MyService myService;
>>
>>   @Activate void activate()  { start(); }
>>   @Deactivate void deactivate()  { interrupt(); }
>>
>>   public void run() {
>>  while (!isInterrupted()) {
>> try {
>> MyResult result = doHardWork();
>> if (!isInterrupted())
>> myService.setResult(result);
>> } catch (Exception e) { /* TODO */ }
>>  }
>>   }
>>}
>> 
>> Clearly there is a race condition. 
>> 
>> 
>> 
>> 
>> ## Pure 
>> 
>> I once had a use case with whiteboard listeners that received events at a 
>> high frequency, and some not-so-good event listeners took too much time in 
>> their callbacks. This created a quite long window where it could fail, so it 
>> often did. For that use case I created a special highly optimized 
>> class that could delay the removal of the listener while it was being 
>> dispatched. To make it have absolutely minimal overhead was tricky, I even 
>> made an Alloy model of it that found some design errors. Anyway, sometimes 
>> you have to pick one of the bad sides; this was one where delaying the 
>> deactivate was worth it.
>> 
>> So how would you make this 'purer' by delaying the deactivation until you 
>> stopped using it? Since the service is still supposed to be valid during 
>> deactivate we could make the setResult() and the deactivate() methods 
>> exclude each other. That is, we need to make sure that no interrupt can 
>> happen when we check for the isInterrupted() and call myService.setResult(). 
>> We could use heavy locks but synchronized works fine for me when you realize 
>> some of its caveats:
>> 
>> * Short blocks
>> * Ensure you cannot create deadlocks
>> 
>> So there must be an explicit contract that the MyService is not going to 
>> block for a long time nor call lots of other unknown code that could 
>> cause deadlocks. After all, we're blocking the deactivate() method which is 
>> very bad practice in general. So you will trade off one purity for another.
>> 
>>@Component
>>public class MyClass extends Thread {   
>>   @Reference MyService myService;
>>
>>   @Activate void activate()  { start(); }
>>   @Deactivate synchronized void deactivate() { interrupt(); }
>>
>>   public void run() {
>>  while (!isInterrupted()) {
>> try {
>> MyResult result = doHardWork();
>>  synchronized(this) {
>> if (!isInterrupted()) {
>> myService.setResult(result);
>>  }
>>  }
>> } catch (Exception e) { /* TODO */ }
>>  }
>>   }
>>}
>> 
>> This guarantees what you want … However (you knew this was coming!) there is 
>> a reason the service gets deactivated. Even though the _service_ is still 
>> valid at that point, there is a reason the _service object_ indicated its 
>> unwillingness to play. For example, if MyService was remoted then the 
>> connection might have been lost. In general, when you call a service you 
>> should be prepared that it fails. (That is why you should always take 
>> exceptions into account even if they're not checked.)
>> 
>> ## Better API
>> 
>> The best solution is usually to turn the problem around. This 

Re: [osgi-dev] Life-cycle race condition

2018-08-02 Thread Peter Kriens via osgi-dev

## Keep Passing the Open Windows

You did read the classic [v2Archive OSGi enRoute App note][5] about this topic? 
It has been archived by the OSGi to [v2Archive OSGi enRoute web site][3]. It 
handles a lot of similar cases. There is an accompanying workspace [v2Archive 
OSGi enRoute osgi.enroute.examples.concurrency 
][7]

Anyway, I am not sure if you want to solve this pragmatic or pure?

## Pragmatic 

Pragmatic means there is a tiny chance you hit the window where you check if 
the MyService is unregistered and then use it. If you're really unlucky you 
just hit the unregistration after you checked it but before you can use it. It 
works when the unregistration of MyService is rare and the work is long. Yes, 
it can fail but so can anything so you should be prepared for it. 

Pragmatic works best as follows:

   @Component
   public class MyClass extends Thread {   
  @Reference MyService myService;
   
  @Activate void activate() { start(); }
  @Deactivate void deactivate() { interrupt(); }
   
  public void run() {
 while (!isInterrupted()) {
try {
MyResult result = doHardWork();
if (!isInterrupted())
myService.setResult(result);
} catch (Exception e) { /* TODO */ }
 }
  }
   }

Clearly there is a race condition. 




## Pure 

I once had a use case with whiteboard listeners that received events at a high 
frequency, and some not-so-good event listeners took too much time in their 
callbacks. This created a quite long window where it could fail, so it often 
did. For that use case I created a special highly optimized class that 
could delay the removal of the listener while it was being dispatched. To make 
it have absolutely minimal overhead was tricky, I even made an Alloy model of 
it that found some design errors. Anyway, sometimes you have to pick one of the 
bad sides; this was one where delaying the deactivate was worth it.

So how would you make this 'purer' by delaying the deactivation until you 
stopped using it? Since the service is still supposed to be valid during 
deactivate we could make the setResult() and the deactivate() methods exclude 
each other. That is, we need to make sure that no interrupt can happen when we 
check for the isInterrupted() and call myService.setResult(). We could use 
heavy locks but synchronized works fine for me when you realize some of its 
caveats:

* Short blocks
* Ensure you cannot create deadlocks

So there must be an explicit contract that the MyService is not going to block 
for a long time nor call lots of other unknown code that could cause 
deadlocks. After all, we're blocking the deactivate() method which is very bad 
practice in general. So you will trade off one purity for another.

   @Component
   public class MyClass extends Thread {   
  @Reference MyService myService;
   
  @Activate void activate() { start(); }
  @Deactivate synchronized void deactivate(){ interrupt(); }
   
  public void run() {
 while (!isInterrupted()) {
try {
MyResult result = doHardWork();
synchronized(this) {
if (!isInterrupted()) {
myService.setResult(result);
}
}
} catch (Exception e) { /* TODO */ }
 }
  }
   }

This guarantees what you want … However (you knew this was coming!) there is a 
reason the service gets deactivated. Even though the _service_ is still valid 
at that point, there is a reason the _service object_ indicated its 
unwillingness to play. For example, if MyService was remoted then the 
connection might have been lost. In general, when you call a service you should 
be prepared that it fails. (That is why you should always take exceptions into 
account even if they're not checked.)

## Better API

The best solution is usually to turn the problem around. This clearly can only 
happen when you can influence the API so that is often not a choice. If you 
can, you can pass a Promise to the myService and calculate in the background. 
Clearly that means you keep churning doing the hard work. Unless the 
calculation is very expensive and the unregistration happens often, doing the 
calculation unnecessarily should normally be of no practical concern. If it is, 
you might want to consider CompletableFuture instead of Promise since it has a 
cancel() method. (We rejected a cancel since it makes the Promise mutable, but 
admittedly it is useful. However, it has the same race issues as we discuss 
here.)

   @Component
   public class MyClass {
   
  @Reference MyService myService;
  @Reference PromiseFactory promiseFactory;

  @Activate void activate() { 
Promise<MyResult> result = promiseFactory.submit( this::doHardWork );
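Filled out, such a component might look as follows. Note that the 
setResult(Promise<MyResult>) signature on MyService is an assumed, 
Promise-accepting API, not part of the original example:

    @Component
    public class MyClass {

       @Reference MyService myService;
       @Reference PromiseFactory promiseFactory;

       @Activate void activate() {
          // Submit the hard work to the PromiseFactory's executor and hand the
          // (possibly still unresolved) promise to the service immediately.
          Promise<MyResult> result = promiseFactory.submit( this::doHardWork );
          myService.setResult( result );   // assumed Promise-accepting method
       }

       MyResult doHardWork() {
          // long running calculation
          return new MyResult();
       }
    }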

Re: [osgi-dev] Dealing with bouncing

2018-07-23 Thread Peter Kriens via osgi-dev


Ok … on the top of my head …


public interface Bar {
void m1();
void m2();
}

@Component 
public class BarImpl implements Bar {
Deferred<Bar> delegate = new Deferred<>();

@Reference
void setExec( Executor e ) {
delegate.resolve( new BarImpl2(e) );
}


public void m1() {
try {
// wait until the delegate is resolved, then forward the call
delegate.getPromise().getValue().m1();
} catch (Exception e) {
throw new RuntimeException(e);
}
}   

public void m2() {
try {
delegate.getPromise().getValue().m2();
} catch (Exception e) {
throw new RuntimeException(e);
}
}   
}

This works for you?

Kind regards,

Peter Kriens



> On 22 Jul 2018, at 22:51, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi Peter,
> 
> Thanks for the tip.
> 
> I’m not quite getting it. Would you be able to direct me to an example?
> 
> Thanks!
> =David
> 
> 
> 
>> On Jul 22, 2018, at 21:49, Peter Kriens  wrote:
>> 
>> In some cases (when the extra complexity was warranted) I let the component 
>> class act as a proxy to a delegate. I then get the delegate from a  Promise. 
>> So you just forward every method in your service interface to the delegate. 
>> There is a function in Eclipse that will create the delegation methods.
>> 
>> In general you want to avoid this complexity and, for example, use a simple 
>> init() method that blocks until init is done. However, the delegate has some 
>> nice qualities if you switch more often than just at init.
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>>> On 22 Jul 2018, at 10:35, David Leangen via osgi-dev 
>>>  wrote:
>>> 
>>> 
>>> Hi,
>>> 
>>> This may be more of a basic Java question, but I’ll ask it anyway because 
>>> it relates to “bouncing” and the handling of dynamic behavior.
>>> 
>>> In my @Activate method, I configure my component. Since the configuration 
>>> may be long-running (data is retrieved remotely), I use a Promise. But, the 
>>> component is available before it is actually “ready”. So far, this has not 
>>> been a problem.
>>> 
>>> It looks something like this:
>>> 
>>> @Reference private Store dataStore;
>>> 
>>> @Activate
>>> void activate() {
>>> configure(dataStore);
>>> }
>>> 
>>> void configure(Store withDataStore) {
>>> // Configuration is set up via a Promise, using a data store to retrieve 
>>> the data
>>> }
>>> 
>>> However, because there is some bouncing occurring, I think what is 
>>> happening is that configure() starts running in a different thread, but in 
>>> the meantime the reference to the dataStore is changed. The error log shows 
>>> that the data store is in an impossible state. After following a hunch, I 
>>> could confirm that the configureData process is running on a data store 
>>> service that was deactivated during bouncing.
>>> 
>>> What would be a good (and simple) strategy to handle this type of 
>>> long-running configuration, where the configuration is in a different 
>>> thread and depends on services that may come and go?
>>> 
>>> 
>>> Note: in the end, the component gets configured and the application runs, 
>>> but I would still like to be able to handle this situation properly.
>>> 
>>> 
>>> Thanks!
>>> =David
>>> 
>>> 
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev
>> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Dealing with bouncing

2018-07-22 Thread Peter Kriens via osgi-dev
In some cases (when the extra complexity was warranted) I let the component 
class act as a proxy to a delegate. I then get the delegate from a  Promise. So 
you just forward every method in your service interface to the delegate. There 
is a function in Eclipse that will create the delegation methods.

In general you want to avoid this complexity and, for example, use a simple 
init() method that blocks until init is done. However, the delegate has some 
nice qualities if you switch more often than just at init.
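A minimal sketch of that simpler blocking-init alternative (the class name, 
BarImpl2 and its init() method are illustrative, following the delegate 
example earlier in this thread):

    @Component
    public class BlockingBarImpl implements Bar {

       BarImpl2 delegate;

       @Reference
       Executor executor;

       @Activate
       void activate() {
          // Blocks the activating thread until initialisation is done.
          // Acceptable when init is reasonably short; otherwise the
          // Promise-based delegate is the better trade-off.
          delegate = new BarImpl2(executor);
          delegate.init();   // hypothetical long-ish initialisation
       }

       public void m1() { delegate.m1(); }
       public void m2() { delegate.m2(); }
    }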

Kind regards,

Peter Kriens

> On 22 Jul 2018, at 10:35, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi,
> 
> This may be more of a basic Java question, but I’ll ask it anyway because it 
> relates to “bouncing” and the handling of dynamic behavior.
> 
> In my @Activate method, I configure my component. Since the configuration may 
> be long-running (data is retrieved remotely), I use a Promise. But, the 
> component is available before it is actually “ready”. So far, this has not 
> been a problem.
> 
> It looks something like this:
> 
> @Reference private Store dataStore;
> 
> @Activate
> void activate() {
>  configure(dataStore);
> }
> 
> void configure(Store withDataStore) {
>  // Configuration is set up via a Promise, using a data store to retrieve the 
> data
> }
> 
> However, because there is some bouncing occurring, I think what is happening 
> is that configure() starts running in a different thread, but in the meantime 
> the reference to the dataStore is changed. The error log shows that the data 
> store is in an impossible state. After following a hunch, I could confirm 
> that the configureData process is running on a data store service that was 
> deactivated during bouncing.
> 
> What would be a good (and simple) strategy to handle this type of 
> long-running configuration, where the configuration is in a different thread 
> and depends on services that may come and go?
> 
> 
> Note: in the end, the component gets configured and the application runs, but 
> I would still like to be able to handle this situation properly.
> 
> 
> Thanks!
> =David
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Service binding order

2018-07-17 Thread Peter Kriens via osgi-dev
A really elegant solution to these problems is to use a Promise …

1) Create a Deferred
2) Execute your item code through the promise of the deferred
3) When the Executor reference is set, you resolve the deferred


@Component
public class Foo {
Deferred<Executor> deferred = new Deferred<>();

@Reference
void setExecutor( Executor e) { deferred.resolve(e); }

@Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
void addItem( Item item) {
deferred.getPromise().thenAccept( executor -> … );
}
}

This will automatically process your items after the executor is set. I think 
it also easily extends to multiple dependencies but I would have to puzzle a bit. 
If you’re unfamiliar with Promises, I’ve written an app note, ehh blog, 
recently about 1.1 Promises  http://aqute.biz/2018/06/28/Promises.html 
. They really shine in these 
ordering issues.

Kind regards,

Peter Kriens



> On 18 Jul 2018, at 00:16, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> I have a component that acts a bit like a whiteboard provider. It looks 
> something like this:
> 
> public class MyWhiteboard
> {
>  boolean isActive;
> 
>  @Reference MyExecutor executor; // Required service to execute on an Item
> 
>  @Reference(multiple/dynamic)
>  void bindItem( Item item )
>  {
>if (isActivated)
>  // add the Item
>else
>  // Store the item to be added once this component is activated
>  }
> 
>  void unbindItem( Item item )
>  {
>// Remove the item
>  }
> 
>  @Activate
>  void activate()
>  {
>// execute non-processed Items
>isActivate = true;
>  }
> }
> 
> The MyExecutor must be present before an Item can be processed, but there is 
> no guarantee as to the binding order. All I can think of doing is ensuring 
> that the Component is Activated before processing.
> 
> My question is: is there a more elegant / simpler / less error prone way of 
> accomplishing this?
> 
> 
> Thanks!
> =David
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Functions as configuration

2018-07-16 Thread Peter Kriens via osgi-dev
> On 16 Jul 2018, at 22:47, David Leangen via osgi-dev  
> wrote:
> Thanks Peter.
> Could you please expand on what you mean by this?
>> Notice that you can easily share (non managed service) configurations 
>> between components.
You can get the configuration from different PIDs in your component. The 
`configurationPid` method in the @Component annotation takes multiple 
arguments. I recall that the first is a factory pid, the others are normal 
managed service pids. (Only 1 pid can control the life cycle.)

This allows you to create shared configuration between different components.
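As a rough sketch (the PID names are made up), two components could each list 
a shared PID after their own:

    @Component(configurationPid = { "com.example.worker", "com.example.shared" })
    public class Worker {
       @Activate
       void activate(Map<String, Object> properties) {
          // properties holds the merged values of both PIDs;
          // properties from later PIDs in the list override earlier ones
       }
    }

    @Component(configurationPid = { "com.example.reporter", "com.example.shared" })
    public class Reporter {
       @Activate
       void activate(Map<String, Object> properties) {
          // sees the same com.example.shared values as Worker
       }
    }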

You probably need to read up on the spec about the details …

Kind regards,

Peter Kriens

> Also, thanks a lot for this:
>> BTW, I updated the v2archive.osgi.enroute to build again. So the snapshots 
>> are also available now again. I tried to release this version but I did not 
>> have authority. Will try to get that.



> 
> 
> Cheers,
> =David
> 
> 
>> On Jul 16, 2018, at 16:07, Peter Kriens > > wrote:
>> 
>> Just be careful with these things. I call it the C++ effect. You get really 
>> thrilled about all the things you could do and how easy life is once you have 
>> that function. Then after a few years you see that the 
>> complexity went to your lambdas. I did that a lot in C++, hence the name 
>> :-)
>> 
>> In my experience you want your configuration to be simple and any processing 
>> should be done by the component. Notice that you can easily share (non 
>> managed service) configurations between components.
>> 
>> BTW, I updated the v2archive.osgi.enroute to build again. So the snapshots 
>> are also available now again. I tried to release this version but I did not 
>> have authority. Will try to get that.
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>>> On 16 Jul 2018, at 03:09, David Leangen via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> 
>>> Thanks, Peter. That could actually be useful in some cases.
>>> 
>>> I guess what I’m really looking for right now though is a way to inject a 
>>> lambda function.
>>> 
>>> I suppose what I could do it have a 2-step configuration: the first step 
>>> would have constant configuration values being read as per usual via the 
>>> Configurator, and my lambdas provided from a Java class (or perhaps parsed 
>>> from a configuration file if I can find a way to do that). Then for the 
>>> second step the two could be combined, put into a Dictionary, and the 
>>> actual service would be instantiated based on the combined constant+lambda 
>>> configuration.
>>> 
>>> I would ideally like to keep the constants and the lambdas together in a 
>>> configuration file, but maybe that is just not possible right now.
>>> 
>>> In any case, this sounds very frameworky to me, so I was hoping that 
>>> something like this already exists…
>>> 
>>> 
>>> Cheers,
>>> =David
>>> 
>>> 
>>> 
 On Jul 15, 2018, at 1:01, Peter Kriens >>> > wrote:
 
 The v2Archive OSGi enRoute has a Configurer that uses a subset of the bnd 
 Macro language. This supports ${system;..} and ${system_allow_fail}. These 
 take shell command lines.
 
 P
 
 
 
> On 14 Jul 2018, at 09:07, David Leangen via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> 
> 
> Thanks, BJ.
> 
> Yeah, right now I am using a Dictionary exactly how you mentioned, but I 
> am wondering if there is a way to maintain it the same way I do as for my 
> configurations.
> 
> Has there ever been a discussion about possibly including this type of 
> thing in the spec? For instance, a spec could include a script (saved in 
> a configuration file), and the script could be parsed and included in a 
> Configuration.
> 
> Has nobody ever encountered this use case? If you have, how did you solve 
> it?
> 
> 
> Cheers,
> =David
> 
> 
>> On Jul 14, 2018, at 5:04, BJ Hargrave > > wrote:
>> 
>> Component properties are basically service properties which are 
>> basically meant to be things that can go in a Configuration: 
>> https://osgi.org/specification/osgi.core/7.0.0/framework.module.html#i3217016
>>  
>> .
>>  Complex objects including objects implementing functional interfaces 
>> are not in scope for a Configuration.
>>  
>> That said, I imagine you could pass any value object in the Dictionary 
>> supplied to ComponentFactory.newInstance since they are not stored in 
>> Configuration Admin and SCR would not police the value object types :-)
>> --
>> 
>> BJ Hargrave
>> Senior Technical Staff Member, IBM // office: +1 386 848 1781
>> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
>> hargr...@us.ibm.com 

Re: [osgi-dev] Functions as configuration

2018-07-16 Thread Peter Kriens via osgi-dev
Just be careful with these things. I call it the C++ effect. You get really 
thrilled about all the things you could do and how easy life is once you have 
that function. Then after a few years you see that the 
complexity went to your lambdas. I did that a lot in C++, hence the name :-)

In my experience you want your configuration to be simple and any processing 
should be done by the component. Notice that you can easily share (non managed 
service) configurations between components.

BTW, I updated the v2archive.osgi.enroute to build again. So the snapshots are 
also available now again. I tried to release this version but I did not have 
authority. Will try to get that.

Kind regards,

Peter Kriens

> On 16 Jul 2018, at 03:09, David Leangen via osgi-dev  
> wrote:
> 
> 
> Thanks, Peter. That could actually be useful in some cases.
> 
> I guess what I’m really looking for right now though is a way to inject a 
> lambda function.
> 
> I suppose what I could do it have a 2-step configuration: the first step 
> would have constant configuration values being read as per usual via the 
> Configurator, and my lambdas provided from a Java class (or perhaps parsed 
> from a configuration file if I can find a way to do that). Then for the 
> second step the two could be combined, put into a Dictionary, and the actual 
> service would be instantiated based on the combined constant+lambda 
> configuration.
> 
> I would ideally like to keep the constants and the lambdas together in a 
> configuration file, but maybe that is just not possible right now.
> 
> In any case, this sounds very frameworky to me, so I was hoping that 
> something like this already exists…
> 
> 
> Cheers,
> =David
> 
> 
> 
>> On Jul 15, 2018, at 1:01, Peter Kriens > > wrote:
>> 
>> The v2Archive OSGi enRoute has a Configurer that uses a subset of the bnd 
>> Macro language. This supports ${system;..} and ${system_allow_fail}. These 
>> take shell command lines.
>> 
>> P
>> 
>> 
>> 
>>> On 14 Jul 2018, at 09:07, David Leangen via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> 
>>> Thanks, BJ.
>>> 
>>> Yeah, right now I am using a Dictionary exactly how you mentioned, but I am 
>>> wondering if there is a way to maintain it the same way I do as for my 
>>> configurations.
>>> 
>>> Has there ever been a discussion about possibly including this type of 
>>> thing in the spec? For instance, a spec could include a script (saved in a 
>>> configuration file), and the script could be parsed and included in a 
>>> Configuration.
>>> 
>>> Has nobody ever encountered this use case? If you have, how did you solve 
>>> it?
>>> 
>>> 
>>> Cheers,
>>> =David
>>> 
>>> 
 On Jul 14, 2018, at 5:04, BJ Hargrave >>> > wrote:
 
 Component properties are basically service properties which are basically 
 meant to be things that can go in a Configuration: 
 https://osgi.org/specification/osgi.core/7.0.0/framework.module.html#i3217016
  
 .
  Complex objects including objects implementing functional interfaces are 
 not in scope for a Configuration.
  
 That said, I imagine you could pass any value object in the Dictionary 
 supplied to ComponentFactory.newInstance since they are not stored in 
 Configuration Admin and SCR would not police the value object types :-)
 --
 
 BJ Hargrave
 Senior Technical Staff Member, IBM // office: +1 386 848 1781
 OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
 hargr...@us.ibm.com 
  
  
 - Original message -
 From: David Leangen via osgi-dev >>> >
 Sent by: osgi-dev-boun...@mail.osgi.org 
 
 To: David Leangen via osgi-dev >>> >
 Cc:
 Subject: [osgi-dev] Functions as configuration
 Date: Fri, Jul 13, 2018 3:32 PM
  
 Hi!
 
 Is there any way to include functions as part of a component configuration?
 
 
 Cheers,
 =David
 
 ___
 OSGi Developer Mail List
 osgi-dev@mail.osgi.org 
 https://mail.osgi.org/mailman/listinfo/osgi-dev 
 
  
  
 
>>> 
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org 
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer 

Re: [osgi-dev] Functions as configuration

2018-07-14 Thread Peter Kriens via osgi-dev
The v2Archive OSGi enRoute has a Configurer that uses a subset of the bnd Macro 
language. This supports ${system;..} and ${system_allow_fail}. These take shell 
command lines.

P



> On 14 Jul 2018, at 09:07, David Leangen via osgi-dev  
> wrote:
> 
> 
> Thanks, BJ.
> 
> Yeah, right now I am using a Dictionary exactly how you mentioned, but I am 
> wondering if there is a way to maintain it the same way I do as for my 
> configurations.
> 
> Has there ever been a discussion about possibly including this type of thing 
> in the spec? For instance, a spec could include a script (saved in a 
> configuration file), and the script could be parsed and included in a 
> Configuration.
> 
> Has nobody ever encountered this use case? If you have, how did you solve it?
> 
> 
> Cheers,
> =David
> 
> 
>> On Jul 14, 2018, at 5:04, BJ Hargrave > > wrote:
>> 
>> Component properties are basically service properties which are basically 
>> meant to be things that can go in a Configuration: 
>> https://osgi.org/specification/osgi.core/7.0.0/framework.module.html#i3217016
>>  
>> .
>>  Complex objects including objects implementing functional interfaces are 
>> not in scope for a Configuration.
>>  
>> That said, I imagine you could pass any value object in the Dictionary 
>> supplied to ComponentFactory.newInstance since they are not stored in 
>> Configuration Admin and SCR would not police the value object types :-)
>> --
>> 
>> BJ Hargrave
>> Senior Technical Staff Member, IBM // office: +1 386 848 1781
>> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
>> hargr...@us.ibm.com 
>>  
>>  
>> - Original message -
>> From: David Leangen via osgi-dev > >
>> Sent by: osgi-dev-boun...@mail.osgi.org 
>> 
>> To: David Leangen via osgi-dev > >
>> Cc:
>> Subject: [osgi-dev] Functions as configuration
>> Date: Fri, Jul 13, 2018 3:32 PM
>>  
>> Hi!
>> 
>> Is there any way to include functions as part of a component configuration?
>> 
>> 
>> Cheers,
>> =David
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
>>  
>>  
>> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Double config

2018-07-12 Thread Peter Kriens via osgi-dev
 assuming that 
>> it is not a Configurator problem.
>>  
>> From the rest of your email it is clear that the configured component 
>> provides a service. This will make it lazy by default. You have also stated 
>> that you’re using has a required configuration policy. Therefore it will 
>> only be activated when:
>>  
>> There is a configuration present
>> All of the mandatory services are satisfied
>> Someone is actually using the service
>>  
>> The component will then be deactivated when any of these things are 
>> no longer true, and so this means that your component may be being bounced 
>> for several reasons, only one of which is to do with configuration. 
>>  
>> While it seems unlikely to me, you seem to be fairly convinced that the 
>> configuration is at fault so lets start there.
>>  
>> Does the generated XML file in your bundle actually say that the 
>> configuration is required. This is the configuration used by DS (not the 
>> annotations) at runtime. It’s unlikely that this has gone wrong, but it’s an 
>> easy check
>> Do you have more than one way of putting configuration into the runtime? If 
>> you are also using File Install, or some other configuration management 
>> agent, then it’s entirely possible that you’re seeing a configuration update 
>> occurring.
>> Do you have multiple configuration bundles which both contain a 
>> configuration for this component? The configurator will process these one at 
>> a time, and it will result in configuration bouncing
>> Is it possible that something is forcing a re-resolve of your configuration 
>> bundle or the configurator? This could easily trigger the configurations to 
>> be reprocessed.
>>  
>> Now in my view the most likely reason for this behaviour is that the 
>> configured component is not being bounced due to a configuration change. The 
>> most likely suspect is that the component is simply not being used at that 
>> time, and so it is being disposed (DS lazy behaviour). This could easily 
>> happen if one of the dependent services that you mention starts using your 
>> component, and then is bounced (by a configuration update or whatever) which 
>> causes your component to be released. If nobody else is using your component 
>> at the time then it will be deactivated and released. The easiest way to 
>> verify this is to make your component immediate. This will remove the 
>> laziness, and you will get a good idea as to whether the bounce is caused by 
>> things that you depend on, or by things that depend on you. If making your 
>> component immediate removes the “problem” then it proves that this isn’t a 
>> problem at all (and you can then remove the immediate behaviour again).
>>  
>> If making your component immediate doesn’t stop the bouncing then the third 
>> set of things to check is the list of services that your component depends 
>> on. Is it possible that one of them is being bounced due to a configuration 
>> update, or perhaps one of their service dependencies being 
>> unregistered/re-registered? As I mentioned before, bouncing of DS components 
>> is simply the way that updates propagate through the system when services 
>> use a static policy. It isn’t inherently a bad thing, but if you want to 
>> avoid it you have to be dynamic all the way down the dependency graph. 
>> Usually this is a lot more effort than it’s worth!
>>  
>> I hope this helps,
>>  
>> Tim
>>  
>> 
>> 
>> On 12 Jul 2018, at 08:39, Peter Kriens via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>>  
>> This is a gift! :-) It means your code is not handling the dynamics 
>> correctly and now you know it! 
>> 
>> The cause is that DS starts the components before the Configurator 
>> has done its work. The easiest solution seems to be to use start levels. If 
>> your code CAN handle the dynamics, then this is one of the few legitimate 
>> places where startlevels are useful. I usually oppose it because people do 
>> not handle the dynamics correctly and want a short cut. This is fine until 
>> it is not. And the ‘not’ happens guaranteed one day. So first fix the 
>> dynamics, and then think of solutions that improve the experience.
>> 
>> For this purpose, enRoute Classic had a 
>> ‘osgi.enroute.configurer.api.ConfigurationDone’ service. If you made an 
>> @Reference to ConfigurationDone then you were guaranteed to not start before 
>> the Configurer had done its magic. Since you did not want to depend on such 
>> a specific service for rea

Re: [osgi-dev] Double config

2018-07-12 Thread Peter Kriens via osgi-dev
This is a gift! :-) It means your code is not handling the dynamics correctly 
and now you know it! 

The cause is that DS starts the components before the Configurator has 
done its work. The easiest solution seems to be to use start levels. If your 
code CAN handle the dynamics, then this is one of the few legitimate places 
where startlevels are useful. I usually oppose it because people do not handle 
the dynamics correctly and want a short cut. This is fine until it is not. And 
the ‘not’ happens guaranteed one day. So first fix the dynamics, and then think 
of solutions that improve the experience.

For this purpose, enRoute Classic had a 
‘osgi.enroute.configurer.api.ConfigurationDone’ service. If you made an 
@Reference to ConfigurationDone then you were guaranteed to not start before 
the Configurer had done its magic. Since you did not want to depend on such a 
specific service for reasons of cohesion, I developed AggregateState. One of 
the aggregated states was then the presence of the ConfigurationDone service. 
Although this is also not perfectly cohesive it at least aggregates all the 
uncohesive things in one place and it is configurable.
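A minimal sketch of that pattern, using the enRoute-classic marker service 
mentioned above:

    @Component
    public class NeedsConfiguration {

       // osgi.enroute.configurer.api.ConfigurationDone: SCR will not activate
       // this component until the Configurer has registered the marker service,
       // i.e. until the configurations have been written.
       @Reference
       ConfigurationDone configurationDone;

       @Activate
       void activate() {
          // safe to assume the required configuration has been processed here
       }
    }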

Although this works the customer still is not completely happy since also the 
Aggregate State feels uncohesive. So we’ve been discussing a new angle. I want 
to try to make the Configuration Records _transient_. In Felix Config Admin you 
can provide a persistence service (and it was recently made useful). I was 
thinking of setting a special property in the configuration (something like 
‘:persistence:transient’). The persistence layer would then _not_ persist it. 
I.e. after a restart there would be no configuration until the Configurer sets 
it. This will (I expect) seriously diminish the bouncing caused for these kind 
of components.

And if you’re asking why I am still on the enRoute classic Configurer. Well, it 
has ‘precious’ fields and they solved a nasty problem. We needed to use a well 
defined value but if the user set one of those values, we wanted to keep the 
user’s value. Quite a common scenario. With `precious` fields you rely on 
default values (so no value for a precious field in the configurer’s input) but 
copy the previous value to the newer configuration, if present. Works quite 
well.

I think `transient` and `precious` could be nice extensions to the new 
Configurator for R8.

Hope this helps. Kind regards,

Peter Kriens

> On 12 Jul 2018, at 00:48, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> A question about component configuration.
> 
> I have a component that has a required configuration policy. Using a (pre R7) 
> Configurator to configure the component. For some reason, the Component gets 
> activated, deactivated, then activated again, which is not desirable.
> 
> Questions:
> 
> 1. How can I figure out why this is happening. I have tried many approaches, 
> but can’t seem to get a clue as to why this is happening. Since it generally 
> doesn’t seem to happen for other configured components, I am assuming that it 
> is not a Configurator problem.
> 
> 2. Is there a way to prohibit this from happening?
> 
> 
> In the meantime, I will make the dependent services more dynamic so they are 
> not thrown off by this change, but their behavior is actually correct: the 
> expectation is that the configured service should only get instantiated once, 
> so a static @Reference is correct.
> 
> 
> Thanks!
> =David
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] OSGi Specification Question

2018-06-29 Thread Peter Kriens via osgi-dev
LOL. One of the first ‘memory tricks’ I had was that it looked like a butler 
serving a tray. I.e. the publisher was ‘offering’ the service to the world.

Strange that it is still hard to remember :-)

Kind regards,

Peter Kriens

> On 29 Jun 2018, at 12:53, Fauth Dirk (AA-AS/EIS2-EU) via osgi-dev 
>  wrote:
> 
> Another way to look at the picture and remind about the triangle direction 
> would be to see it as a megaphone. The provider shouts out to the “world” 
> that there is a new service available. J
>  
> Mit freundlichen Grüßen / Best regards 
> 
> Dirk Fauth
> 
> Automotive Service Solutions, ESI application (AA-AS/EIS2-EU) 
> Robert Bosch GmbH | Postfach 11 29 | 73201 Plochingen | GERMANY | 
> www.bosch.com <http://www.bosch.com/> 
> Tel. +49 7153 666-1155 | dirk.fa...@de.bosch.com 
> <mailto:dirk.fa...@de.bosch.com> 
> 
> Sitz: Stuttgart, Registergericht: Amtsgericht Stuttgart, HRB 14000;
> Aufsichtsratsvorsitzender: Franz Fehrenbach; Geschäftsführung: Dr. Volkmar 
> Denner,
> Prof. Dr. Stefan Asenkerschbaumer, Dr. Rolf Bulander, Dr. Stefan Hartung, Dr. 
> Markus Heyn, Dr. Dirk Hoheisel,
> Christoph Kübel, Uwe Raschke, Peter Tyroller 
> 
> 
> Von: osgi-dev-boun...@mail.osgi.org <mailto:osgi-dev-boun...@mail.osgi.org> 
> [mailto:osgi-dev-boun...@mail.osgi.org 
> <mailto:osgi-dev-boun...@mail.osgi.org>] Im Auftrag von Peter Kriens via 
> osgi-dev
> Gesendet: Donnerstag, 28. Juni 2018 17:45
> An: Dirk Fauth mailto:dirk.fa...@gmail.com>>
> Cc: OSGi Developer Mail List  <mailto:osgi-dev@mail.osgi.org>>
> Betreff: Re: [osgi-dev] OSGi Specification Question
>  
> I think this is the same confusion that exists for the UML interface symbol. 
>  
> Lots of problems that have a client-publisher relation have a hard time with 
> a good symbol since the relation is symmetric but not really.
>  
> Imho once you take a bit of time to see that the arrow points in the 
> dependency direction you tend to never forget it.
>  
> I’d love to change it for another symbol but never found a better one. UML 
> interfaces are not services (and probably even more confusing) and I’ve never 
> so far seen a symbol for micro-services, where I guess they have the same 
> need for a symbol. Most symbols tend to draw something on the publisher.
>  
> 
> However, in OSGi that does not make sense since we have an independent 
> publisher. In OSGi, the service is its own entity. Nobody else but OSGi seems 
> to make that distinction. We reified the service and the service object(s) 
> because they are independent of the provider and the consumer. Our dependency 
> versioning is based on the version of the API, NOT the provider nor the 
> consumer. (At the time I tried to get the Semver people to understand that 
> they should add support for the compatibility rule differences between 
> providers and consumers and failed.)
>  
> The service broker model in OSGi is very innovative but unfortunately badly 
> understood since it is so outlandish. Ah well, story of my life.
>  
> Kind regards,
>  
> Peter Kriens
>  
>  
>  
>  
>  
>  
> 
> 
> On 28 Jun 2018, at 16:56, Dirk Fauth  <mailto:dirk.fa...@gmail.com>> wrote:
>  
> Thanks a lot for the answers. Then I updated my slides last year correctly 
> after the feedback from Tim. I just didn't remember. :) 
>  
> The confusion seems to be quite big. I need to update my getting started with 
> DS tutorial. And the incorrect picture is also posted on the Concierge 
> website https://www.eclipse.org/concierge/ 
> <https://www.eclipse.org/concierge/>
>  
>  
>  
> Peter Kriens via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> schrieb am Do., 28. Juni 2018, 16:23:
> Not sure it is a good idea to repeat this picture for future confusion on a 
> mailing list?
>  
> Peter Kriens
>  
> 
> 
> On 28 Jun 2018, at 16:10, Tim Ward via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> wrote:
>  
> I think it is this picture that causes the confusion:
>  
> 
>  
>  
> In this picture the “register” action is between A and S. This appears to 
> suggest that the service S is registered by bundle A. If that is the case 
> then the pointy-end of the triangle needs to point at A. Similarly the “get” 
> and “listen” actions are coming from bundle B, which would appear to make it 
> the consumer of S. The consumer should have the fat end of the triangle.
>  
> Note that almost all OSGi diagrams put the consumer on the left and the 
> provider on the right.
>  
> Best Regards,
>  
> Tim
> 
> 
> On 28 Jun 2018, at 15:04, Neil Bartlett via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> wro

Re: [osgi-dev] OSGi Specification Question

2018-06-28 Thread Peter Kriens via osgi-dev
I think this is the same confusion that exists for the UML interface symbol. 

Lots of problems that have a client-publisher relation have a hard time with a 
good symbol since the relation is symmetric but not really.

Imho once you take a bit of time to see that the arrow points in the dependency 
direction you tend to never forget it.

I’d love to change it for another symbol but never found a better one. UML 
interfaces are not services (and probably even more confusing) and I’ve never 
so far seen a symbol for micro-services, where I guess they have the same need 
for a symbol. Most symbols tend to draw something on the publisher.


However, in OSGi that does not make sense since we have an independent publisher. 
In OSGi, the service is its own entity. Nobody else but OSGi seems to make that 
distinction. We reified the service and the service object(s) because they are 
independent of the provider and the consumer. Our dependency versioning is 
based on the version of the API, NOT the provider nor the consumer. (At the 
time I tried to get the Semver people to understand that they should add 
support for the compatibility rule differences between providers and consumers 
and failed.)

The service broker model in OSGi is very innovative but unfortunately badly 
understood since it is so outlandish. Ah well, story of my life.

Kind regards,

Peter Kriens







> On 28 Jun 2018, at 16:56, Dirk Fauth  wrote:
> 
> Thanks a lot for the answers. Then I updated my slides last year correctly 
> after the feedback from Tim. I just didn't remember. :) 
> 
> The confusion seems to be quite big. I need to update my getting started with 
> DS tutorial. And the incorrect picture is also posted on the Concierge 
> website https://www.eclipse.org/concierge/ 
> <https://www.eclipse.org/concierge/>
> 
> 
> 
> Peter Kriens via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> schrieb am Do., 28. Juni 2018, 16:23:
> Not sure it is a good idea to repeat this picture for future confusion on a 
> mailing list?
> 
>   Peter Kriens
> 
> 
>> On 28 Jun 2018, at 16:10, Tim Ward via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>> 
>> I think it is this picture that causes the confusion:
>> 
>> <https://jaxenter.de/wp-content/uploads/2016/05/enRoute1.png>
>> 
>> 
>> In this picture the “register” action is between A and S. This appears to 
>> suggest that the service S is registered by bundle A. If that is the case 
>> then the pointy-end of the triangle needs to point at A. Similarly the “get” 
>> and “listen” actions are coming from bundle B, which would appear to make it 
>> the consumer of S. The consumer should have the fat end of the triangle.
>> 
>> Note that almost all OSGi diagrams put the consumer on the left and the 
>> provider on the right.
>> 
>> Best Regards,
>> 
>> Tim
>> 
>>> On 28 Jun 2018, at 15:04, Neil Bartlett via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> The spec is correct, and either Tim misspoke or you misheard him.
>>> 
>>> The service should look like a big arrow pointing from the consumer to the 
>>> provider.
>>> 
>>> Neil
>>> 
>>> 
>>> 
>>> On Thu, Jun 28, 2018 at 2:57 PM, Fauth Dirk (AA-AS/EIS2-EU) via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> Hi,
>>> 
>>>  
>>> 
>>> maybe a stupid question, but I am preparing my slides for the Java Forum 
>>> Stuttgart about Remote Services, and remembered that Tim told me that my 
>>> diagrams are incorrect, as the triangle is directing into the wrong 
>>> direction.
>>> 
>>>  
>>> 
>>> The big end should be on the producer side, while the cone end points to 
>>> the consumer bundle.
>>> 
>>> https://enrouteclassic.github.io/doc/215-sos.html 
>>> <https://enrouteclassic.github.io/doc/215-sos.html>
>>> https://jaxenter.de/osgi-enroute-1-0-hintergruende-architektur-best-practices-39709
>>>  
>>> <https://jaxenter.de/osgi-enroute-1-0-hintergruende-architektur-best-practices-39709>
>>>  
>>> 
>>> The architecture picture in the Remote Services chapter show the triangles 
>>> differently.
>>> 
>>> https://osgi.org/specification/osgi.cmpn/7.0.0/service.remoteservices.html 
>>> <https://osgi.org/specification/osgi.cmpn/7.0.0/service.remoteservices.html>
>>>  
>>> 
>>> Where is my misunderstanding? Is the picture incorrect, or does

Re: [osgi-dev] OSGi Specification Question

2018-06-28 Thread Peter Kriens via osgi-dev
Not sure it is a good idea to repeat this picture for future confusion on a 
mailing list?

Peter Kriens


> On 28 Jun 2018, at 16:10, Tim Ward via osgi-dev  
> wrote:
> 
> I think it is this picture that causes the confusion:
> 
>   
> 
> 
> 
> In this picture the “register” action is between A and S. This appears to 
> suggest that the service S is registered by bundle A. If that is the case 
> then the pointy-end of the triangle needs to point at A. Similarly the “get” 
> and “listen” actions are coming from bundle B, which would appear to make it 
> the consumer of S. The consumer should have the fat end of the triangle.
> 
> Note that almost all OSGi diagrams put the consumer on the left and the 
> provider on the right.
> 
> Best Regards,
> 
> Tim
> 
>> On 28 Jun 2018, at 15:04, Neil Bartlett via osgi-dev > > wrote:
>> 
>> The spec is correct, and either Tim misspoke or you misheard him.
>> 
>> The service should look like a big arrow pointing from the consumer to the 
>> provider.
>> 
>> Neil
>> 
>> 
>> 
>> On Thu, Jun 28, 2018 at 2:57 PM, Fauth Dirk (AA-AS/EIS2-EU) via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> Hi,
>> 
>>  
>> 
>> maybe a stupid question, but I am preparing my slides for the Java Forum 
>> Stuttgart about Remote Services, and remembered that Tim told me that my 
>> diagrams are incorrect, as the triangle is directing into the wrong 
>> direction.
>> 
>>  
>> 
>> The big end should be on the producer side, while the cone end points to the 
>> consumer bundle.
>> 
>> https://enrouteclassic.github.io/doc/215-sos.html 
>> 
>> https://jaxenter.de/osgi-enroute-1-0-hintergruende-architektur-best-practices-39709
>>  
>> 
>>  
>> 
>> The architecture picture in the Remote Services chapter show the triangles 
>> differently.
>> 
>> https://osgi.org/specification/osgi.cmpn/7.0.0/service.remoteservices.html 
>> 
>>  
>> 
>> Where is my misunderstanding? Is the picture incorrect, or does the picture 
>> show something different?
>> 
>>  
>> 
>> Mit freundlichen Grüßen / Best regards 
>> 
>> Dirk Fauth
>> 
>> Automotive Service Solutions, ESI application (AA-AS/EIS2-EU) 
>> Robert Bosch GmbH | Postfach 11 29 | 73201 Plochingen | GERMANY | 
>> www.bosch.com  
>> Tel. +49 7153 666-1155 | dirk.fa...@de.bosch.com 
>>  
>> 
>> Sitz: Stuttgart, Registergericht: Amtsgericht Stuttgart, HRB 14000;
>> Aufsichtsratsvorsitzender: Franz Fehrenbach; Geschäftsführung: Dr. Volkmar 
>> Denner,
>> Prof. Dr. Stefan Asenkerschbaumer, Dr. Rolf Bulander, Dr. Stefan Hartung, 
>> Dr. Markus Heyn, Dr. Dirk Hoheisel,
>> Christoph Kübel, Uwe Raschke, Peter Tyroller 
>> 
>> 
>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

[osgi-dev] enRoute classic

2018-05-21 Thread Peter Kriens via osgi-dev
I got requests from several people what happened with the OSGi enRoute articles 
and tutorials. OSGi seems to have broken all the links. I do think it contained 
some valuable material, especially for bndtools users, so I reconstructed the 
web site as it was:

https://enrouteclassic.github.io/ 

I would prefer to move the articles and tutorials to the bndtools side when I 
find time.

Kind regards,

Peter Kriens



___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Bndtools 3.3 enRoute distro update Jetty

2018-05-15 Thread Peter Kriens via osgi-dev
That is a pretty harsh thing to do, gratuitously renaming this file?

Peter Kriens

> On 15 May 2018, at 10:43, Tim Ward via osgi-dev  
> wrote:
> 
> The correct link is 
> https://github.com/osgi/osgi.enroute/blob/deprecated/osgi.enroute.pom.distro/distro-pom.xml
>  
> 
> 
> Sent from my iPhone
> 
> On 15 May 2018, at 09:14, Paul F Fraser via osgi-dev  > wrote:
> 
>> This does not work for me and the link to the distro pom is dead.
>> Is it still possible to alter the distro now that we have the new enRoute?
>> If so how would I do this as I am tied to bndtools 3.3 for now?
>> The latest jetty bundle is now at 4.0.0
>> 
>> Paul Fraser
>> 
>> On 1/12/2017 8:18 PM, Peter Kriens wrote:
>>> You can add the newer Jetty in your pom.xml in cnf/central.xml in your 
>>> workspace. Need to touch build.bnd so Eclipse rebuilds. The latest distro 
>>> pom on Github is actually 3.2 and should drag in the proper Jetty 
>>> transitively? 
>>> (https://github.com/osgi/osgi.enroute/blob/master/osgi.enroute.pom.distro/distro-pom.xml
>>>  
>>> )
>>> 
>>> Kind regards,
>>> 
>>> Peter Kriens
>>> 
 On 1 Dec 2017, at 07:32, Paul F Fraser via osgi-dev 
 > wrote:
 
 Hi,
 
 The jetty bundles in the current distro (the one I have)  are version 3.1 
 which imports jetty 9.2.10.
 
 The latest felix bundle is 3.4.6 which imports jetty 9.3.9.
 
 What needs to be done to upgrade the distro or how can I force the 
 resolver to ignore the old version and work from a version in the local 
 repo?
 
 Paul Fraser
>>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Competing module systems

2018-04-14 Thread Peter Kriens via osgi-dev
Very nice example of something I see too often: choosing a bad solution that 
just appears to be  ’simpler’ to use. I think the Java world suffers from this 
terribly.

We also had big discussions about circularity in the OSGi for the DTOs. 
Initially they were circular but in the end we agreed to not make them have inner 
references. It is slightly more work for the receiver but it makes life so much 
simpler overall …

Kind regards,

Peter Kriens


> On 14 Apr 2018, at 05:35, Peter via osgi-dev  wrote:
> 
> On 13/04/2018 6:32 PM, Neil Bartlett via osgi-dev wrote:
>> 
>> 
>> On Thu, Apr 12, 2018 at 10:12 PM, Mark Raynsford via osgi-dev 
>>  
>> >> wrote:
>> 
>>On 2018-04-12T20:32:13 +0200
>>Peter Kriens 
>>>> wrote:
>> 
>>> Caught between a rock and a hard place with only one way forward …
>> 
>>I should make the point that I don't hate the JPMS. I do think that
>>it's just barely the minimum viable product, though.
>> 
>>The JVM really did need a module system, both for the maintenance of
>>the JDK itself and the future features that the system enables.
>> 
>>> Oracle’s strategy is a mystery to me.
>> 
>>I think their strategy is fairly explicable, but I think they did make
>>some mistakes with some of the specifics (filename-based
>>automodules!).
>>There's a pattern that Oracle tend to follow: They solicit
>>opinions from
>>everyone vigorously, and then they implement the smallest possible
>>subset such that the fewest possible people are pissed off by it. If
>>there's a possibility of doing something wrong, nothing is done
>>instead.
>> 
>> 
>> While I've seen that principle operate at other times (remember how 
>> controversial erasure was in Java 5?), I'm not sure it's worked that way in 
>> the JPMS case. In fact JPMS does far more than it needed to.
>> 
>> The key feature of JPMS that could not be achieved before, even with 
>> ClassLoaders, was strictly enforced isolation via the accessibility 
>> mechanism, as opposed to the visibility mechanism that is employed by OSGi. 
>> That strict isolation was needed primarily to allow Oracle to close off JVM 
>> internals from application code and thereby prevent a whole class of 
>> security vulnerabilities. Remember that Oracle was being absolutely 
>> slaughtered in the press around 2011-12 over the insecurity of Java, and 
>> most corporates uninstalled it from user desktops.
> 
> Java deserialization vulnerabilties.
> 
> Ironically, Java serialization was an exception, rather than a minimalist 
> approach, it was given advanced, if not excessive functionality, including 
> the ability to serialize circular object graphs.
> 
> Circular relationships generally tend to be difficult to manage.
> 
> Due to the support for circular object graphs, it wasn't possible to use a 
> serialization constructor, so all invariant checks had to be made after 
> construction, when it was too late.   Making matters worse, an attacker can 
> create any serializable object they want, and because of the way deserialized 
> objects are created, child class domains aren't on the call stack during 
> super class deserialization.   An attacker can take advantage of the circular 
> object graph support, and caching to obtain a reference to any object in a 
> deserialized graph.
> 
> In essence, they needed to have an alternative locked down implementation of 
> serialization.
> 
> There's nothing wrong with the java serialization protocol.   I wrote a 
> hardened implementation of java serialization, refactored from Apache 
> Harmony's implementation, implementing classes use a serialization 
> constructor, which ensures an object cannot be created unless its invariants 
> were satisfied, this includes the ability to check inter class invariants as 
> well.   It doesn't support circular object graphs and has limits on how much 
> data could be cached, limits on array size etc.
> 
> I submitted the api to the OpenJDK development mail list, there was interest 
> there, but they decided they needed to support circular object graphs.
> 
> In the end Oracle decided to use white listing.
> 
> Cheers,
> 
> Peter.
> 
>> 
>> But they could have achieved this with a thin API, comparable to 
>> ProtectionDomain. If they had done that then OSGi (and other module systems 
>> like JBoss) could have chosen to leverage the API to enforce strict 
>> separation between OSGi bundles.
>> 
>> But they didn't do that. Instead they implemented a whole new, incompatible 
>> module system with its own metadata format, including changes to the Java 
>> language spec. Then they restricted the ability to apply strict isolation to 
>> artifacts that are JPMS modules. 

Re: [osgi-dev] Go Package versioning support proposal

2018-04-13 Thread Peter Kriens via osgi-dev
I am extremely interested in Go and an OSGi-like thing for it … 

Oracle’s stewardship is becoming painful.

Kind regards,

Peter Kriens




> On 13 Apr 2018, at 13:22, Balázs Zsoldos via osgi-dev 
>  wrote:
> 
> Hi,
> 
> Sorry if too offtopic.
> 
> I have just found the Proposal for Package Versioning in Go 
> . I only had a quick look so 
> far, but I have the feeling that this is a first step to support a module 
> system natively in Go that is very similar to OSGi (even more similar to OSGi 
> than JPMS).
> 
> Seeing what is happening with Java JPMS, this is the first time I have got 
> interested in another language in the last 14 years. It would make sense to 
> share experience between OSGi experts and GoLang implementors.
> 
> Regards,
> -- 
> Balázs Zsoldos
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Competing module systems

2018-04-12 Thread Peter Kriens via osgi-dev
Caught between a rock and a hard place with only one way forward …

Oracle’s strategy is a mystery to me.

Kind regards,

Peter Kriens

> On 12 Apr 2018, at 20:06, Mark Raynsford via osgi-dev 
>  wrote:
> 
> Thought this might be of mild interest to people on this list:
> 
> https://blog.io7m.com/2018/04/12/competing-module-systems.xhtml
> 
> -- 
> Mark Raynsford | http://www.io7m.com
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Api compile-only and remote services

2018-02-09 Thread Peter Kriens via osgi-dev
Remote OSGi is a good example, but then the API can be viewed as the provider.

Kind regards,

Peter Kriens

> On 9 Feb 2018, at 01:12, Scott Lewis via osgi-dev <osgi-dev@mail.osgi.org> 
> wrote:
> 
> I think cases where the service consumer doesn't need or want the provider 
> code (e.g. small or embedded environments) is one such example.   In such 
> cases a separate API can keep things smaller and less complicated on the 
> consumer.
> 
> Scott
> 
> On 2/8/2018 12:54 PM, Peter Kriens via osgi-dev wrote:
>> I think there are only a few good cases where you really need an API bundle. 
>> Virtually cases I’ve seen in the wild were incorrect because they fudged the 
>> versions to make the API more backward compatible than it really was so more 
>> providers could leverage it. In very few cases can you have different 
>> providers for the same API version.
>> 
>> Exporting the API from the provider is usually the safest way imho and it 
>> works very well without extra metadata with resolving.
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>> 
>>> On 8 Feb 2018, at 21:01, Chris Gray via osgi-dev <osgi-dev@mail.osgi.org> 
>>> wrote:
>>> 
>>> In my experience it is quite often handy to have a separate API bundle -
>>> yours is one use case, another is where the system may run on different
>>> platforms which require different implementations for some services (cloud
>>> vs development machine, embedded vs dev, self-contained demo vs real
>>> distributed system, ...). In fact in a way this is the "normal"
>>> configuration, provider-exports is a convenience.
>>> 
>>> If the packages can be exported by both api bundle and provider then you
>>> have to watch out for "uses" constraint violations if versions diverge,
>>> Keep It Simple is the key here.
>>> 
>>> Have fun
>>> 
>>> Chris
>>> 
>>>> I understand that with api compile only is not possible to run a remote
>>>> instance without the related provider bundle.
>>>> 
>>>> With a provider with conditional package I have to resolve both api and
>>>> provider bundle because api packages are only copied inside provider but
>>>> not exported.
>>>> So I don’t think that is better than having api and provider separated.
>>>> 
>>>> Now I’ve tried to modify my bundles in this way:
>>>> - api without compile only that export packages
>>>> - provider like before that export api packages
>>>> 
>>>> With this configuration I can:
>>>> - run an osgi instance resolving only provider bundle like before
>>>> - run another instance without the provider resolving only api bundle
>>>> 
>>>> Daniele
>>>> 
>>>> 
>>>> From: David Daniel
>>>> Sent: Saturday, 3 February 2018 21:24
>>>> To: Dominik Przybysz; OSGi Developer Mail List
>>>> Cc: Daniele Pirola
>>>> Subject: Re: [osgi-dev] Api compile-only and remote services
>>>> 
>>>> I think different projects handle it differently.  The way I do it is if
>>>> only one provider loaded will implement the interface then I include the
>>>> interface in the provider with a conditional package and leave the api
>>>> compile only
>>>> http://enroute.osgi.org/tutorial_wrap/212-conditional-package.html  If
>>>> multiple providers are going to implement the interface then I will change
>>>> the api bundle to a regular bundle to inclusion.
>>>> David
>>>> 
>>>> On Sat, Feb 3, 2018 at 2:35 PM, Dominik Przybysz via osgi-dev
>>>> <osgi-dev@mail.osgi.org> wrote:
>>>> Hi,
>>>> if you know that you may run your bundles in distributed environment and
>>>> want to use Remote Services, your API bundles must be normal bundles.
>>>> 
>>>> 2018-02-03 19:12 GMT+01:00 Daniele Pirola via osgi-dev
>>>> <osgi-dev@mail.osgi.org>:
>>>> Hi,
>>>> I have an Osgi workspace with many bundles with different "types": api,
>>>> provider and application. I follow enroute tutorials and my api bundles
>>>> are "compile only" and providers export api packages.
>>>> Now I would like to use osgi remote services but how can I use api
>>>> packages in different osgi instances without importing also the provider
>>>> that export these packages? I hav

Re: [osgi-dev] Api compile-only and remote services

2018-02-08 Thread Peter Kriens via osgi-dev
I think there are only a few good cases where you really need an API bundle. 
Virtually all cases I’ve seen in the wild were incorrect because they fudged the 
versions to make the API more backward compatible than it really was so more 
providers could leverage it. In very few cases can you have different providers 
for the same API version.

Exporting the API from the provider is usually the safest way imho, and it works 
very well with resolving, without extra metadata.
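
For illustration, a minimal sketch of that setup in a provider's bnd file 
(bundle and package names are hypothetical):

    # The provider contains and exports the API package it implements,
    # so consumers only need this one bundle at runtime.
    Bundle-SymbolicName: com.example.foo.provider
    Export-Package: com.example.foo.api;version=1.0.0
    Private-Package: com.example.foo.provider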

Kind regards,

Peter Kriens


> On 8 Feb 2018, at 21:01, Chris Gray via osgi-dev  
> wrote:
> 
> In my experience it is quite often handy to have a separate API bundle -
> yours is one use case, another is where the system may run on different
> platforms which require different implementations for some services (cloud
> vs development machine, embedded vs dev, self-contained demo vs real
> distributed system, ...). In fact in a way this is the "normal"
> configuration, provider-exports is a convenience.
> 
> If the packages can be exported by both api bundle and provider then you
> have to watch out for "uses" constraint violations if versions diverge,
> Keep It Simple is the key here.
> 
> Have fun
> 
> Chris
> 
>> I understand that with api compile only is not possible to run a remote
>> instance without the related provider bundle.
>> 
>> With a provider with conditional package I have to resolve both api and
>> provider bundle because api packages are only copied inside provider but
>> not exported.
>> So I don’t think that is better than having api and provider separated.
>> 
>> Now I’ve tried to modify my bundles in this way:
>> - api without compile only that export packages
>> - provider like before that export api packages
>> 
>> With this configuration I can:
>> - run an osgi instance resolving only provider bundle like before
>> - run another instance without the provider resolving only api bundle
>> 
>> Daniele
>> 
>> 
>> From: David Daniel
>> Sent: Saturday, 3 February 2018 21:24
>> To: Dominik Przybysz; OSGi Developer Mail List
>> Cc: Daniele Pirola
>> Subject: Re: [osgi-dev] Api compile-only and remote services
>> 
>> I think different projects handle it differently.  The way I do it is if
>> only one provider loaded will implement the interface then I include the
>> interface in the provider with a conditional package and leave the api
>> compile only 
>> http://enroute.osgi.org/tutorial_wrap/212-conditional-package.html  If
>> multiple providers are going to implement the interface then I will change
>> the api bundle to a regular bundle to inclusion.
>> David
>> 
>> On Sat, Feb 3, 2018 at 2:35 PM, Dominik Przybysz via osgi-dev
>>  wrote:
>> Hi,
>> if you know that you may run your bundles in distributed environment and
>> want to use Remote Services, your API bundles must be normal bundles.
>> 
>> 2018-02-03 19:12 GMT+01:00 Daniele Pirola via osgi-dev
>> :
>> Hi,
>> I have an Osgi workspace with many bundles with different "types": api,
>> provider and application. I follow enroute tutorials and my api bundles
>> are "compile only" and providers export api packages. 
>> Now I would like to use osgi remote services but how can I use api
>> packages in different osgi instances without importing also the provider
>> that export these packages? I have to build another api project that only
>> export packages? Or api "compile only" is not the right thing for remote?
>> 
>> Kind regards
>> Daniele
>> 
>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
>> 
>> 
>> 
>> 
>> --
>> Pozdrawiam / Regards,
>> Dominik Przybysz
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
>> 
>> 
>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] DS Reference injection order

2018-02-02 Thread Peter Kriens via osgi-dev
In this case you do not have such a guarantee about injection. And it won’t be 
necessary, since you could just make them mandatory references. 

You need to distinguish two things:

1) The existence of or the state of the system
2) Injection

If you have an optional dependency then all bets are off, because there can 
always be a delay between the events. Only mandatory references give you any 
guarantee here. For example, to handle cyclic dependencies DS can delay 
injection of optional dependencies even though the service is present. 

What you could do is look up the services through the component context; they 
should be there.
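
For illustration only, a sketch of that lookup (X, Y and AggregateState are the 
hypothetical types from Thomas's example; it assumes, as Peter suggests, that 
the services are resolvable through the ComponentContext at activation time):

    import org.osgi.service.component.ComponentContext;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.component.annotations.ReferenceCardinality;
    import org.osgi.service.component.annotations.ReferencePolicy;

    @Component
    public class MyComp {

        // Satisfied only when services matching both filters are registered,
        // so it gates activation of this component.
        @Reference(target = "(&(x.prop=x)(y.prop=y))")
        private AggregateState state;

        // Optional dynamic references: injection order is not guaranteed,
        // so these fields may still be null when activate() runs.
        @Reference(name = "x", target = "(x.prop=x)",
                cardinality = ReferenceCardinality.OPTIONAL,
                policy = ReferencePolicy.DYNAMIC)
        private volatile X x;

        @Reference(name = "y", target = "(y.prop=y)",
                cardinality = ReferenceCardinality.OPTIONAL,
                policy = ReferencePolicy.DYNAMIC)
        private volatile Y y;

        @Activate
        void activate(ComponentContext ctx) {
            // Look the services up by reference name rather than relying on
            // the optional fields having been injected already.
            X xNow = (X) ctx.locateService("x");
            Y yNow = (Y) ctx.locateService("y");
            doSomethingWithXandY(xNow, yNow);
        }

        private void doSomethingWithXandY(X x, Y y) { /* ... */ }
    }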

So what you can observe through the AggregateState service is not by definition 
observable through optional services. A bit like relativity theory.

Kind regards,

Peter Kriens




> On 2 Feb 2018, at 10:35, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> thanks for all those answers and sorry for my late response.
> 
> Maybe I have been a little bit too imprecise.
> The reason I asked, was that I found this article by Peter Kriens about an 
> AggregateState: http://aqute.biz/2017/04/24/aggregate-state.html 
> 
> 
> I wrote such a service and only was wondering afterwards if this really works 
> all the times or if I just hit a lucky punch and it worked the few times I 
> tested it. Or if I got Peter's intentions completely wrong and did something 
> AggregateState was not meant for.
> 
> So what I do is writing a component as described in the article like this:
> 
> @Component
> public class MyComp{
> 
> @Reference(target="(&(x.prop=x)(y.prop=y))")
> private AggregateState state;
> 
> @Reference(target="(x.prop=x)")
> private volatile X x;
> 
> @Reference(target="(y.prop=y)")
> private volatile Y y;
> 
> @Activate
> private void activate(){
> doSomethingWithXandY();
> }
> 
> }
> 
> So even if my AggregateState is satisfied, which means that there are 
> fitting, activated services for X and Y, can I rest assured that X and Y are 
> injected BEFORE activate() is called? Because my understanding was that X and 
> Y might still be null and MyComp is nevertheless eligible for activation. 
> 
> Kind regards,
> Thomas
> 
> 
> 
> -- Original message --
> From: "BJ Hargrave"
> To: tim.w...@paremus.com; osgi-dev@mail.osgi.org
> Cc: osgi-dev@mail.osgi.org; thomas.driessen...@gmail.com
> Sent: 31.01.2018 15:40:53
> Subject: Re: [osgi-dev] DS Reference injection order
> 
>> Tim correctly cites the spec regarding the order SCR must process the 
>> reference elements in the XML. Since you are using annotations, there is 
>> one more thing to know. The javadoc for the Reference annotation states:
>>  
>> In the generated Component Description for a component, the references must 
>> be ordered in ascending lexicographical order (using String.compareTo) of 
>> the reference names. 
>> 
>>  
>> So you can control the order of the reference elements in the xml by the 
>> names of the references.
>>  
>> The order of processing references can be useful if you are using method 
>> injection and a method needs to use a previously injected reference. But 
>> since your examples show field injection, your component cannot really 
>> observe the order of injection.
>>  
>> With 'volatile' you have a dynamic reference which can be updated at any 
>> time, so there is no order with respect to other references.
>>  
>> --
>> 
>> BJ Hargrave
>> Senior Technical Staff Member, IBM // office: +1 386 848 1781
>> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
>> hargr...@us.ibm.com 
>>  
>>  
>> - Original message -
>> From: Tim Ward via osgi-dev
>> Sent by: osgi-dev-boun...@mail.osgi.org
>> To: Thomas Driessen, OSGi Developer Mail List
>> Cc:
>> Subject: Re: [osgi-dev] DS Reference injection order
>> Date: Wed, Jan 31, 2018 6:43 AM
>>  
>> Firstly - why do you need to rely on this? It sounds like very fragile code 
>> to me and you should probably consider rewriting so that you don’t need to 
>> care. However...
>>  
>> Section 112.5.7 of the compendium says that:
>> 
>> When binding services, the references are processed in the order in which 
>> they are specified in the component description. That is, target services 
>> from the first specified reference are bound before services from the next 
>> specified reference.
>>  
>> A static optional service will not be set if it is not satisfied, 

Re: [osgi-dev] making an existing interface method default causes MINOR baseline change

2017-12-05 Thread Peter Kriens via osgi-dev
Great minds think alike (and it helped we were both in this discussion) :-)

P

> On 5 Dec 2017, at 09:03, Timothy Ward via osgi-dev  
> wrote:
> 
> Ray - I assume that you’re asking why this is a MINOR change, rather than a 
> MICRO change? It’s obviously not a major change because the method exists 
> with the same signature everywhere both before and after.
> 
> The reason that it’s a MINOR change is to do with the forward (rather than 
> backward) guarantees that the semantic versioning rules must make.
> 
> In your example you end up deleting the original doFoo() implementation from 
> the Bar class. From this point on the Bar class has no knowledge of this 
> method, and the implementation *must* come from either a super type (there 
> aren’t any) or as a default method on the implemented interface. If this 
> doesn’t happen then the whole type hierarchy of Bar is broken - the concrete 
> types which subclass Bar simply don’t have an implementation of the interface 
> method that the contract says they must have!
> 
> The only way to enforce this is to ensure that the updated Bar class imports 
> a version of Foo which is guaranteed to have the “default doFoo() feature”. 
> In semantic versioning new features always require at least a MINOR bump so 
> that people can reliably depend on them (depending on a MICRO is not a good 
> idea). That is what is happening here.
> 
> Tim
> 
> PS - I have just seen Peter’s email come in, which thankfully agrees with 
> what I’m saying!
> 
>> On 5 Dec 2017, at 06:43, Fauth Dirk (AA-AS/EIS2-EU) via osgi-dev 
>> > wrote:
>> 
>> Hi,
>>  
>> IMHO it is a MINOR change because it is not a breaking change. :-)
>>  
>> With that change neither implementations of the Foo interface, nor classes 
>> that extend the abstract Bar class will break.
>>  
>> Implementations of the Foo interface can still implement the doFoo() method 
>> and by doing this override the default behavior. Overriding a default is not 
>> a breaking change, as you add neither a new public method nor a new field; 
>> you just give a default implementation.
>>  
>> Classes that extend Bar did not need to implement doFoo() before, as it was 
>> implemented in Bar. Removing that method would be typically a breaking 
>> change. But you are moving it as default method to the Foo interface. 
>> Therefore Bar still has the doFoo() method implemented, as it is provided by 
>> the Foo interface.
>>  
>> I have to admit that I am not 100% sure about the byte code in the end and 
>> if that matters. But as a user of the interface and abstract class, nothing 
>> breaks. 
>>  
>> Mit freundlichen Grüßen / Best regards 
>> 
>> Dirk Fauth
>> 
>> Automotive Service Solutions, ESI application (AA-AS/EIS2-EU) 
>> Robert Bosch GmbH | Postfach 11 29 | 73201 Plochingen | GERMANY | 
>> www.bosch.com  
>> Tel. +49 7153 666-1155 | dirk.fa...@de.bosch.com 
>>  
>> 
>> Registered office: Stuttgart; Registration court: Stuttgart Local Court, HRB 14000;
>> Chairman of the Supervisory Board: Franz Fehrenbach; Board of Management: Dr. Volkmar 
>> Denner,
>> Prof. Dr. Stefan Asenkerschbaumer, Dr. Rolf Bulander, Dr. Stefan Hartung, 
>> Dr. Markus Heyn, Dr. Dirk Hoheisel,
>> Christoph Kübel, Uwe Raschke, Peter Tyroller 
>> 
>> 
>> From: osgi-dev-boun...@mail.osgi.org [mailto:osgi-dev-boun...@mail.osgi.org] 
>> On behalf of Raymond Auge via osgi-dev
>> Sent: Tuesday, 5 December 2017 00:26
>> To: OSGi Developer Mail List
>> Subject: [osgi-dev] making an existing interface method default causes MINOR 
>> baseline change
>>  
>> Hey All,
>> 
>> I think the answer is "Yes it's a MINOR change", but I wanted to clarify.
>>  
>> Assume I have the following interface in an exported package:
>>  
>> public interface Foo {
>>public void doFoo();
>> }
>>  
>> And in the same package I have abstract class Bar which implements Foo:
>>  
>> public abstract class Bar implements Foo {
>>public void doFoo() {...}
>>public abstract void doBar();
>> }
>>  
>> And I want to migrate to a default method because doFoo() logic rarely 
>> changes:
>>  
>> public interface Foo {
>>public default void doFoo() {...}
>> }
>>  
>> public abstract class Bar implements Foo {
>>//public void doFoo() {...}
>>public abstract void doBar();
>> }
>>  
>> Can someone explain why this is a MINOR change?
>>  
>>  
>> -- 
>> Raymond Augé  (@rotty3000)
>> Senior Software Architect Liferay, Inc.  (@Liferay)
>> Board Member & EEG Co-Chair, OSGi Alliance  (@OSGiAlliance)
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> 

Re: [osgi-dev] making an existing interface method default causes MINOR baseline change

2017-12-04 Thread Peter Kriens via osgi-dev
You’re right. Since the impact is so low we did consider making this a MICRO 
change. 

However, in bnd we strip the micro qualifier from import ranges to prevent the 
static transitive graph that Maven requires. I.e. the MICRO is the ‘oil’ between 
the modules; it may not signal incompatibility. This implies that a MICRO change 
must be not only backward compatible but also forward compatible. Turning an 
interface method into a default method is, however, not forward compatible: a 
bundle compiled against the later version really does require that version to be 
present at runtime (unlike with a bug fix or documentation fix), yet the 
generated import range would not express this requirement because the MICRO is 
stripped.
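
As a sketch with hypothetical package names and versions of why this matters, 
assume the exported package containing Foo starts at 1.2.0 and the default 
method is added:

    Released as a MICRO bump:  com.example.foo  1.2.0 -> 1.2.1
        consumer import range generated by bnd (micro stripped): [1.2,2)
        -> 1.2.0, which lacks the default method, still satisfies the range

    Released as a MINOR bump:  com.example.foo  1.2.0 -> 1.3.0
        consumer import range generated by bnd: [1.3,2)
        -> only versions guaranteed to contain the default method resolve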

Kind regards,

Peter Kriens







> On 5 Dec 2017, at 07:43, Fauth Dirk (AA-AS/EIS2-EU) via osgi-dev 
>  wrote:
> 
> Hi,
>  
> IMHO it is a MINOR change because it is not a breaking change. :-)
>  
> With that change neither implementations of the Foo interface, nor classes 
> that extend the abstract Bar class will break.
>  
> Implementations of the Foo interface can still implement the doFoo() method 
> and by doing this override the default behavior. Overriding a default is not 
> a breaking change, as you add neither a new public method nor a new field; 
> you just give a default implementation.
>  
> Classes that extend Bar did not need to implement doFoo() before, as it was 
> implemented in Bar. Removing that method would be typically a breaking 
> change. But you are moving it as default method to the Foo interface. 
> Therefore Bar still has the doFoo() method implemented, as it is provided by 
> the Foo interface.
>  
> I have to admit that I am not 100% sure about the byte code in the end and if 
> that matters. But as a user of the interface and abstract class, nothing 
> breaks. 
>  
> Mit freundlichen Grüßen / Best regards 
> 
> Dirk Fauth
> 
> Automotive Service Solutions, ESI application (AA-AS/EIS2-EU) 
> Robert Bosch GmbH | Postfach 11 29 | 73201 Plochingen | GERMANY | 
> www.bosch.com  
> Tel. +49 7153 666-1155 | dirk.fa...@de.bosch.com 
>  
> 
> Registered office: Stuttgart; Registration court: Stuttgart Local Court, HRB 14000;
> Chairman of the Supervisory Board: Franz Fehrenbach; Board of Management: Dr. Volkmar 
> Denner,
> Prof. Dr. Stefan Asenkerschbaumer, Dr. Rolf Bulander, Dr. Stefan Hartung, Dr. 
> Markus Heyn, Dr. Dirk Hoheisel,
> Christoph Kübel, Uwe Raschke, Peter Tyroller 
> 
> 
> From: osgi-dev-boun...@mail.osgi.org [mailto:osgi-dev-boun...@mail.osgi.org] 
> On behalf of Raymond Auge via osgi-dev
> Sent: Tuesday, 5 December 2017 00:26
> To: OSGi Developer Mail List 
> Subject: [osgi-dev] making an existing interface method default causes MINOR 
> baseline change
>  
> Hey All,
> 
> I think the answer is "Yes it's a MINOR change", but I wanted to clarify.
>  
> Assume I have the following interface in an exported package:
>  
> public interface Foo {
>public void doFoo();
> }
>  
> And in the same package I have abstract class Bar which implements Foo:
>  
> public abstract class Bar implements Foo {
>public void doFoo() {...}
>public abstract void doBar();
> }
>  
> And I want to migrate to a default method because doFoo() logic rarely 
> changes:
>  
> public interface Foo {
>public default void doFoo() {...}
> }
>  
> public abstract class Bar implements Foo {
>//public void doFoo() {...}
>public abstract void doBar();
> }
>  
> Can someone explain why this is a MINOR change?
>  
>  
> -- 
> Raymond Augé  (@rotty3000)
> Senior Software Architect Liferay, Inc.  (@Liferay)
> Board Member & EEG Co-Chair, OSGi Alliance  (@OSGiAlliance)
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Bndtools 3.3 enRoute distro update Jetty

2017-12-01 Thread Peter Kriens via osgi-dev
You can add the newer Jetty to the pom.xml at cnf/central.xml in your 
workspace. You need to touch build.bnd so Eclipse rebuilds. The latest distro 
pom on GitHub is actually 3.2 and should drag in the proper Jetty transitively? 
(https://github.com/osgi/osgi.enroute/blob/master/osgi.enroute.pom.distro/distro-pom.xml)
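
For illustration, the kind of entry this means in cnf/central.xml (the exact 
artifact and version should be checked; 3.4.6 is the Felix http.jetty version 
mentioned below):

    <dependency>
      <groupId>org.apache.felix</groupId>
      <artifactId>org.apache.felix.http.jetty</artifactId>
      <version>3.4.6</version>
    </dependency>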

Kind regards,

Peter Kriens








> On 1 Dec 2017, at 07:32, Paul F Fraser via osgi-dev  
> wrote:
> 
> Hi,
> 
> The jetty bundles in the current distro (the one I have)  are version 3.1 
> which imports jetty 9.2.10.
> 
> The latest felix bundle is 3.4.6 which imports jetty 9.3.9.
> 
> What needs to be done to upgrade the distro or how can I force the resolver 
> to ignore the old version and work from a version in the local repo?
> 
> Paul Fraser
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Upcoming Configurator Specification

2017-11-20 Thread Peter Kriens via osgi-dev
For a customer I extended the OSGi enRoute Configurer to do something like 
this. I allowed you to indicate that certain properties were ‘precious'. (See 
https://github.com/osgi/osgi.enroute/blob/310c16182abab915907b40803aa8e3377d7ae8ed/osgi.enroute.configurer.simple.provider/src/osgi/enroute/configurer/simple/provider/Configurer.java#L331-L349)

Kind regards,

Peter Kriens

> On 19 Nov 2017, at 21:12, elias vasylenko via osgi-dev 
>  wrote:
> 
> Brilliant thanks for the response, I will absolutely file this in the issue 
> tracker. I didn't expect it would be reasonable to slip into R7 at this point 
> if it wasn't already on the cards behind the scenes so I understand.
> 
> There are reasonable philosophical arguments against it ... but it's worth 
> considering.
> 
> Regards,
> 
> Eli
> 
> On Sun, 19 Nov 2017 at 09:27 Carsten Ziegeler  > wrote:
> Hi,
> 
> the Configurator specification (as well as the Configuration Admin one)
> does not support merging of configurations.
> 
> Unfortunately, a use case of merging has never been brought up to the
> expert group during the discussions and therefore we didn't consider
> this at all. That said, I think it would be great if you could create an
> issue in the public OSGi issue tracker for this. This will make it
> easier to consider for the expert group and it doesn't get lost.
> 
> I think it's too late for the R7 release as we're almost done with all
> the specification work and are currently finalizing the release. But
> it's something we can consider for R8.
> 
> Regards
> 
> Carsten
> 
> 
> Osgi Developer Mail List wrote
> > I have some thoughts/questions about the upcoming configurator
> > specification in OSGi R7.
> >
> > The spec talks about the concept of a configuration ranking, by which
> > the configurator service selects the most appropriate configuration to
> > apply. But what if we already have a working configuration deployed and
> > only wish to override certain values? Can we do something like the
> > following?
> >
> > [existing-baseline-configuration.json]
> > {
> > "my.pid": {
> > ":configurator:ranking": "0",
> > "key_one": "configured_value",
> > "key_two": "configured_value",
> > "key_three": "configured_value"
> > }
> > }
> >
> > [new-partial-configuration.json]
> > {
> > "my.pid": {
> > ":configurator:ranking": "1",
> > "key_three": "overriding_value"
> > }
> > }
> >
> > Normally in this scenario I assume [new-partial-configuration.json] will
> > be selected as the one to apply and the other will be completely
> > ignored. But I would hope there is a way to tell the configurator we
> > want to merge a configuration with the next-highest-ranked
> > configuration, such that the configurator will ultimately resolve
> > something like the following:
> >
> > [effective configuration]
> > {
> > "my.pid": {
> > "key_one": "configured_value",
> > "key_two": "configured_value",
> > "key_three": "overriding_value"
> > }
> > }
> >
> > I realise this behaviour isn't /always/ what we'd want, but perhaps we
> > could apply some PID-level configurator key to
> > [new-partial-configuration.json] to opt in, e.g.
> > "/:configurator:override-strategy": "complete/partial/whatever"/
> >
> > Since something like this doesn't appear to be mentioned in the latest
> > draft I assume it's not currently planned, but I'm hoping I've
> > misunderstood or misread some part of it because this seems like an easy
> > win to me. Perhaps there has been internal discussion of something like
> > this and it was dismissed for some reason? I realise it adds some
> > complexity to the implementation but I don't think it really adds any
> > confusion to the mental model for users.
> >
> > Any further comment or discussion would be welcome,
> >
> > Cheers,
> >
> > Eli
> >
> >
> > ___
> > OSGi Developer Mail List
> > osgi-dev@mail.osgi.org 
> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
> > 
> >
> --
> Carsten Ziegeler
> Adobe Research Switzerland
> cziege...@apache.org 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev