Re: [osgi-dev] Compendium services

2020-08-16 Thread Tim Ward via osgi-dev
Yes, it’s a requirement of the OSGi specification process that there be a 
reference implementation and test suite for every specification chapter. 

Note that the reference implementation may not always be open source (although 
it is very rare that it isn’t) and isn’t guaranteed to be particularly fast or 
scalable. It will, however, definitely pass the compliance tests. 

Sent from my iPhone

> On 14 Aug 2020, at 23:14, Leschke, Scott via osgi-dev 
>  wrote:
> 
> I’m thinking this must be yes but do all compendium services have a reference 
> implementation?
>  
> Scott
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] How to: File Upload with JAX-RS Whiteboard

2020-07-08 Thread Tim Ward via osgi-dev
I’m glad that I was able to help and blog posts are always welcome!

All the best,

Tim

> On 7 Jul 2020, at 13:10, Fauth Dirk (CAP-SST/ESM1) via osgi-dev 
>  wrote:
> 
> Hi Tim,
>  
> Thanks a lot! I would never have come to that solution by myself! Once 
> everything is set up I will write a blog post on that so more people can 
> benefit from this (and also use it as my “external memory”). :-)
>  
> For people that are interested, here the simple solution.
>  
> 1. I used the enRoute Maven archetypes, therefore Jetty is my HTTP 
> container.
> 2. As my service should be as minimal as possible, I decided to register 
> a custom whiteboard application instead of using ConfigAdmin.
>  
> @Component(service=Application.class)
> @JaxrsApplicationBase("app4mc")
> @JaxrsName("app4mcMigration")
> @HttpWhiteboardServletMultipart(enabled = true)
> public class MigrationApplication extends Application {
>   
> }
>  
> 3. Then I register my JAX-RS resource against that application and use 
> the HttpServletRequest as suggested.
>  
> @Component(service=Migration.class)
> @JaxrsResource
> @JaxrsApplicationSelect("(osgi.jaxrs.name=app4mcMigration)")
> public class Migration {
>  
> @Path("converter")
> @POST
> @Consumes(MediaType.MULTIPART_FORM_DATA)
> @Produces(MediaType.TEXT_HTML)
> public Response upload(@Context HttpServletRequest request) throws 
> IOException, ServletException {
> 
> Collection<Part> parts = request.getParts();
> 
> String filename = "";
> for (Part part : parts) {
> filename += part.getSubmittedFileName();
> System.out.printf("File %s, %s, %d%n", filename,
> part.getContentType(), part.getSize());
> }
> 
> return Response.ok("File uploaded = " + filename).build();
> }
> }
>  
>  
>  
> Mit freundlichen Grüßen / Best regards
> 
> Dirk Fauth 
> 
> Cross Automotive Platforms - Systems, Software and Tools, (CAP-SST/ESM1)
> Robert Bosch GmbH | Postfach 30 02 40 | 70442 Stuttgart | GERMANY | 
> www.bosch.com 
> Tel. +49 7153 666-1155 | dirk.fa...@de.bosch.com 
> 
> 
> Sitz: Stuttgart, Registergericht: Amtsgericht Stuttgart, HRB 14000;
> Aufsichtsratsvorsitzender: Franz Fehrenbach; Geschäftsführung: Dr. Volkmar 
> Denner, 
> Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, 
> Dr. Stefan Hartung,
> Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, 
> Peter Tyroller 
> ​
> Von: Tim Ward  
> Gesendet: Dienstag, 7. Juli 2020 12:37
> An: Fauth Dirk (CAP-SST/ESM1) ; OSGi Developer Mail 
> List 
> Betreff: Re: [osgi-dev] How to: File Upload with JAX-RS Whiteboard
>  
> Hi Dirk,
>  
> You’re absolutely correct in your statements that:
>  
> * I know that file upload is more complicated than it looks at first.
> * Aries is implemented using CXF, and a mixture of CXF and Jersey seems not to 
> be possible.
>  
> In fact the Aries JAX-RS whiteboard completely hides the underlying CXF 
> implementation, so unless you start attaching fragments to the Aries bundle 
> you can’t plug in CXF things either. You can only work with the JAX-RS 
> standard.
>  
> So to strip things back:
>  
> 1. Using Multipart file upload requires support from the underlying servlet 
> container - does the HTTP whiteboard implementation you’re using underneath 
> Aries JAX-RS support Multipart?
> 2. If Multipart is supported by your HTTP container then is it enabled? There 
> are several relevant service properties described at 
> https://docs.osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html#d0e120961
> 3. If you want to use Multipart inside a JAX-RS resource then you need to make 
> sure that your whiteboard application sets the multipart service properties 
> (so that the servlet hosting your application has them too). If you’re 
> registering the application yourself then that’s a simple matter of putting 
> an annotation on it (or however you’re doing your service properties). If 
> you’re using the default application then you will need to configure your 
> JAX-RS whiteboard using Config Admin and the 
> org.apache.aries.jax.rs.whiteboard.default pid (see 
> https://github.com/apache/aries-jax-rs-whiteboard#configuration)
> To use the 
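> For illustration, the Config Admin route can be expressed as an OSGi 
> Configurator resource. The property name below is the standard HTTP 
> Whiteboard multipart property; whether the Aries default application 
> forwards it to the servlet it registers is an assumption to verify against 
> the Aries documentation linked above.
> 
> {
>   ":configurator:resource-version": 1,
>   "org.apache.aries.jax.rs.whiteboard.default": {
>     "osgi.http.whiteboard.servlet.multipart.enabled": true
>   }
> }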

Re: [osgi-dev] Regarding Transaction Control

2020-04-07 Thread Tim Ward via osgi-dev
Hi Scott,



If you’re using JDBC you don’t really care about JPA. Therefore you should 
compile against the OSGi jar, but not worry about deploying it at runtime. 
Instead you can just deploy the Aries Transaction Control implementation (which 
includes a substitutable export of the Transaction Control API). I would 
suggest that you want to start by using tx-control-service-local and 
tx-control-provider-jdbc-local at runtime.
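For illustration, a minimal sketch of using those two bundles from a 
Declarative Services component; the repository class, table and SQL are 
invented for the example, while TransactionControl and JDBCConnectionProvider 
are the standard specification APIs:

import java.sql.Connection;
import java.sql.PreparedStatement;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.transaction.control.TransactionControl;
import org.osgi.service.transaction.control.jdbc.JDBCConnectionProvider;

@Component(service = MessageRepository.class)
public class MessageRepository {

    @Reference
    TransactionControl txControl;

    @Reference
    JDBCConnectionProvider provider;

    // A transaction-aware Connection proxy, scoped by txControl
    private Connection connection;

    @Activate
    void activate() {
        connection = provider.getResource(txControl);
    }

    public void save(String message) {
        // required() starts a transaction if none is active and commits
        // (or rolls back on an exception) when the scope completes
        txControl.required(() -> {
            try (PreparedStatement ps = connection
                    .prepareStatement("INSERT INTO MESSAGES (TEXT) VALUES (?)")) {
                ps.setString(1, message);
                return ps.executeUpdate();
            }
        });
    }
}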




Now, the JPA dependency. 

The JPA specification (much like the servlet container) has a versioning policy that is 
not done very well. Marketing versions have been used for the APIs despite the 
fact that backward compatibility has been maintained between versions. This 
breaks the cardinal rule of semantic versioning, which is that changes to the 
major version indicate breaking changes.

In any event, there is a part of the transaction control specification (in this 
case the JPA resource provider) which provides EntityManager instances, 
coupling it to the JPA API (there is a similar coupling for the JDBC resource 
provider and javax.sql). The reason that the version range starts at 1.0 is not 
because the specification is old, but because all versions (back to 1.0) are 
supported. Unfortunately where JPA APIs have been versioned using their 
marketing versions (2.0, 2.1, 2.2) this means that they don’t match a version 
range of "[1,2)”.

The general solution to this is to use the “osgi.contract” namespace - there is 
a contract name defined for JPA. This however needs API bundles that provide 
the contract...
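For illustration, the consumer side of that contract approach ends up looking 
roughly like this in the manifest; the contract name JavaJPA comes from the 
OSGi contract registry, and the version attribute depends on the API bundle 
that actually provides the contract:

Require-Capability: osgi.contract;
 filter:="(&(osgi.contract=JavaJPA)(version=2.1))"
Import-Package: javax.persistence

When building with bnd, the -contract instruction can generate this requirement 
and drop the version range from the matching imports, provided a bundle 
offering the osgi.contract=JavaJPA capability is on the build path.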

Please get in touch if you have more questions,

All the best,

Tim

> On 7 Apr 2020, at 05:57, Markus Rathgeb via osgi-dev  
> wrote:
> 
> I am using OSGi for a lot of projects on a pure Maven build with
> the help of all the bnd maven plugins.
> For easy testing I used bndrun files.
> 
> I don't want to create the set of bundles that are allowed to be used
> at runtime all the time, so I started to create pom files that
> contain all of them (one for bundles that are used at compile time
> and one for bundles that are potentially used at runtime).
> Using that approach my bnd application can depend on that runtime pom
> using scope "runtime" and consume the specified artifacts to
> resolve its requirements.
> 
> The runtime pom also contains the bundles that have been necessary for
> me to use transaction control, hibernate, ...
> If you want to have a look at:
> https://github.com/maggu2810/osgideps/blob/master/runtime/pom.xml
> 
> As you can see I used the following persistence API bundle:
> <dependency>
>   <groupId>org.apache.aries.jpa.javax.persistence</groupId>
>   <artifactId>javax.persistence_2.1</artifactId>
>   <version>2.7.0</version>
> </dependency>
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Intermittent failure to getService

2020-03-02 Thread Tim Ward via osgi-dev
Hi Alain,

Is it possible that someone has a reference to a BaValidationManagerExt service 
instance that they aren’t releasing after ungetting it (or that they’re holding 
onto after it has been unregistered)? It might be an SCR bug, but it’s more 
likely to be some code holding onto a component instance that it shouldn’t.

Best Regards,

Tim

> On 29 Feb 2020, at 13:29, Alain Picard via osgi-dev  
> wrote:
> 
> Hi
> 
> I am having a very intermittent issue with getService on a prototype 
> component. This is called hundreds of times and I put a breakpoint a few 
> weeks ago and have now gotten the error.
> 
> I have this class:
> @Component(scope=ServiceScope.PROTOTYPE,
> property= org.osgi.framework.Constants.SERVICE_RANKING + ":Integer=10"
> )
> public final class BaValidationManagerExt implements ValidationManagerExt {
> private final Logger log = LoggerFactory.getLogger(getClass());
> 
> @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
> private ComponentServiceObjects<Validator> validatorFactory;
> 
> @Activate
> private void activate() {
> log.trace("Activating {}/{}", getClass(), System.identityHashCode(this)); 
> //$NON-NLS-1$
> }
> 
> @Deactivate
> private void deactivate() {
> log.trace("Deactivating {}/{}", getClass(), System.identityHashCode(this)); 
> //$NON-NLS-1$
> }
> 
> @Override
> public Diagnostic getDiagnosticForEObject(EObject eObj) {
> log.trace("Getting diagnostic for {}", eObj); //$NON-NLS-1$
> Validator validator = validatorFactory.getService();
> 
> if (validator != null) {
> try {
> return validator.runValidation(false, Collections.singletonMap(eObj, new 
> HashSet<>()),
> new NullProgressMonitor()).getB();
> }
> finally {
> validatorFactory.ungetService(validator);
> }
> }
> else {
> log.error("Validator Service not found for {}", eObj, new Throwable()); 
> //$NON-NLS-1$
> return Diagnostic.CANCEL_INSTANCE;
> }
> }
> }
> 
> and the validator:
> @Component(
> scope = ServiceScope.PROTOTYPE,
> property= org.osgi.framework.Constants.SERVICE_RANKING + ":Integer=10"
> )
> public final class BaValidator implements Validator {
> private final Logger log = LoggerFactory.getLogger(getClass());
> 
> private Map> elementsToValidate;
> private Set validated = Sets.newHashSet();
> private boolean batch;
> 
> private EditingDomain domain;
> private AdapterFactory adapterFactory;
> 
> @Reference
> private volatile List validationProviders;  //NOSONAR as 
> per OSGi 112.3.9.1 
> 
> @Reference
> private ValidationUtils validationUtils;
> 
> @Activate
> private void activate() {
> log.trace("Activating {}/{}", getClass(), System.identityHashCode(this)); 
> //$NON-NLS-1$
> }
> 
> @Deactivate
> private void deactivate() {
> log.trace("Deactivating {}/{}", getClass(), System.identityHashCode(this)); 
> //$NON-NLS-1$
> }
> ...
> } 
> 
> The error is at the call to validatorFactory.getService(), which happens since 
> getService returns null.
> 
> As can be seen here, ValidatorFactory serviceObjects is null which seems to 
> be what makes it return null:
> ComponentServiceObjectsImpl [instances=[], serviceObjects=null, 
> deactivated=false, hashCode=301166435]
> 
> I am not seeing anything special in the logs (tracing is on). Just before, I see 
> a number of successful calls to the same code, with the last one being:
> 08:00:45.854 [Worker-1: Create Diagram] TRACE c.c.i.v.b.p.BaValidator - 
> Activating class 
> com.castortech.iris.validation.ba.provider.BaValidator/1297753057
> 08:00:45.857 [Worker-1: Create Diagram] TRACE c.c.i.v.b.p.BaValidator - 
> Notify 4 listeners with diagnostics ([Diagnostic OK 
> source=com.castortech.iris.ba.validation code=0  
> data=[RadialDiagramImpl{[cdoID->6558b1f9-dbcf-4e9d-b7b8-73779b5ada8f]
> 08:00:45.858 [Worker-1: Create Diagram] TRACE c.c.i.v.b.p.BaValidator - 
> Deactivating class 
> com.castortech.iris.validation.ba.provider.BaValidator/1297753057
> 
> 
> Has anyone seen this before, or can anyone provide some pointers to address 
> and/or debug this?
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] [FDC-External]: Re: DB2 Enroute OSGi Connectivity

2020-01-14 Thread Tim Ward via osgi-dev
Hi Kevin,

I’m afraid that I don’t know the answer to that, as I don’t know what license 
you’re using to consume the DB2 driver, and even if you told me I am not a 
lawyer. 

All the best,

Tim

> On 10 Jan 2020, at 14:55, Matthews, Kevin  wrote:
> 
> Thanks Tim. Sorry to respond so late... I will look into PAX-JDBC. Is 
> wrapping the DB2 Jar still supported?
>  
> From: osgi-dev-boun...@mail.osgi.org  On 
> Behalf Of Tim Ward via osgi-dev
> Sent: Monday, January 6, 2020 4:04 AM
> To: Matthews, Kevin ; OSGi Developer Mail List 
> 
> Subject: [FDC-External]: Re: [osgi-dev] DB2 Enroute OSGi Connectivity
>  
> Hi Kevin,
>  
> Unfortunately you are correct that DB2 doesn’t provide an OSGi-enabled 
> version of its driver, however it is possible to make it work (with a little 
> effort).
>  
> Implementing the JDBC Service specification yourself is pretty trivial (all 
> you need to do is to register a single service implementing up to four 
> methods), and would allow you to embed the DB2 driver in the bundle that you 
> write. 
>  
> There is also a JDBC service implementation at PAX-JDBC 
> <https://ops4j1.jira.com/wiki/spaces/PAXJDBC/pages/104103940/DB2+driver+adapter>,
> however this still requires you to wrap the DB2 Jar into an OSGi bundle.
>  
> I hope this helps,
>  
> Tim
> 
> 
> On 2 Jan 2020, at 13:37, Matthews, Kevin via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> wrote:
>  
> Hello, we are looking to modularize our legacy application using OSGi to 
> connect to a DB2 database. Has anyone successfully used OSGi enRoute to connect 
> to a DB2 database? I have used PostgreSQL and MySQL, which already come with 
> an OSGi bundle manifest, but it seems DB2 doesn’t have a bundle manifest.
>  
> Thanks,
>  
> Kevin Matthews
> Sr Application Analyst – Rapid Connect Development
> Global Business Solutions
> Office: 954-845-4222 | Mobile: 561-465-6694
> kevin.matth...@fiserv.com <mailto:kevin.matth...@fiserv.com>
>  
> 
>  
> Fiserv <https://www.fiserv.com/> | Join Our Team <https://www.careers.fiserv.com/> | 
> Twitter <https://twitter.com/fiserv/> | LinkedIn <https://www.linkedin.com/company/fiserv/> | 
> Facebook <https://www.facebook.com/Fiserv/>
> FORTUNE Magazine World's Most Admired Companies® 2014 | 2015 | 2016 | 2017 | 
> 2018 | 2019
> © 2019 Fiserv, Inc. or its affiliates. Fiserv is a registered trademark of 
> Fiserv, Inc. Privacy Policy <http://fiserv.com/about/privacypolicy.aspx>
>  
> 
>  
> 
>  
> The information in this message may be proprietary and/or confidential, and 
> protected from disclosure. If the reader of this message is not the intended 
> recipient, or an employee or agent responsible for delivering this message to 
> the intended recipient, you are hereby notified that any dissemination, 
> distribution or copying of this communication is strictly prohibited. If you 
> have received this communication in error, please notify Fiserv immediately 
> by replying to this message and deleting it from your computer.

Re: [osgi-dev] DB2 Enroute OSGi Connectivity

2020-01-06 Thread Tim Ward via osgi-dev
Hi Kevin,

Unfortunately you are correct that DB2 doesn’t provide an OSGi-enabled version 
of its driver, however it is possible to make it work (with a little effort).

Implementing the JDBC Service specification yourself is pretty trivial (all you 
need to do is to register a single service implementing up to four methods), 
and would allow you to embed the DB2 driver in the bundle that you write. 
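To illustrate the “single service with up to four methods”, a rough sketch of a 
DataSourceFactory that embeds the DB2 driver; the driver class name and the 
choice to only implement createDriver are assumptions for the example, not 
tested against the real DB2 jar:

import java.sql.Driver;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;

import javax.sql.ConnectionPoolDataSource;
import javax.sql.DataSource;
import javax.sql.XADataSource;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.jdbc.DataSourceFactory;

// Registers a DataSourceFactory for the DB2 JCC driver embedded in this bundle
@Component(property = DataSourceFactory.OSGI_JDBC_DRIVER_CLASS
        + "=com.ibm.db2.jcc.DB2Driver")
public class DB2DataSourceFactory implements DataSourceFactory {

    @Override
    public Driver createDriver(Properties props) throws SQLException {
        // the DB2 jar is embedded in this bundle, so the class is loadable here
        return new com.ibm.db2.jcc.DB2Driver();
    }

    @Override
    public DataSource createDataSource(Properties props) throws SQLException {
        // a fuller implementation would map the standard JDBC_* properties
        // onto the driver's DataSource implementation
        throw new SQLFeatureNotSupportedException();
    }

    @Override
    public ConnectionPoolDataSource createConnectionPoolDataSource(
            Properties props) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }

    @Override
    public XADataSource createXADataSource(Properties props) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }
}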

There is also a JDBC service implementation at PAX-JDBC, however this still 
requires you to wrap the DB2 Jar into an OSGi bundle.

I hope this helps,

Tim

> On 2 Jan 2020, at 13:37, Matthews, Kevin via osgi-dev 
>  wrote:
> 
> Hello, we are looking to modularize our legacy application using OSGi to 
> connect to a DB2 database. Has anyone successfully used OSGi enRoute to connect 
> to a DB2 database? I have used PostgreSQL and MySQL, which already come with 
> an OSGi bundle manifest, but it seems DB2 doesn’t have a bundle manifest.
>  
> Thanks,
>  
> Kevin Matthews
> Sr Application Analyst – Rapid Connect Development
> Global Business Solutions
> Office: 954-845-4222 | Mobile: 561-465-6694
> kevin.matth...@fiserv.com 
>  
> 
>  
> Fiserv | Join Our Team | Twitter | LinkedIn | Facebook
> FORTUNE Magazine World's Most Admired Companies® 2014 | 2015 | 2016 | 2017 | 
> 2018 | 2019
> © 2019 Fiserv, Inc. or its affiliates. Fiserv is a registered trademark of 
> Fiserv, Inc. Privacy Policy
>  
> 
>  
> 
>  
> The information in this message may be proprietary and/or confidential, and 
> protected from disclosure. If the reader of this message is not the intended 
> recipient, or an employee or agent responsible for delivering this message to 
> the intended recipient, you are hereby notified that any dissemination, 
> distribution or copying of this communication is strictly prohibited. If you 
> have received this communication in error, please notify Fiserv immediately 
> by replying to this message and deleting it from your computer.
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] @ConsumerType vs @ProviderType

2019-10-24 Thread Tim Ward via osgi-dev
Hi Scott,

As the author of the Transaction Control specification I can attempt to explain 
to you the reasoning for the example that is confusing you.

In the Transaction Control specification there are three primary sets of actors:

People implementing the core TransactionControl service (i.e. the transaction 
management piece)
People implementing Transactional Resources (e.g. JDBC wrappers, JPA wrappers, 
JMS wrappers, Transaction Scoped objects)
People using the Transaction Control Service to do transactional work

In many cases there will be overlaps in these groups, but broadly speaking the 
intent of the specification is that people providing transactional resources 
are different from people providing the core transaction service. It is also 
expected (hoped) that there will be many more people providing transactional 
resource providers than there are providing Transaction Control Services. 

For these reasons the generic ResourceProvider interface is annotated with 
@ConsumerType, specifically so users can provide their own custom resource 
providers without becoming providers of the Transaction Control specification.

Now, to answer your question - if you have a specialisation of a @ConsumerType 
interface (as JDBCConnectionProvider and JPAEntityManagerProvider both are) 
then what should the status of that interface be? The general wisdom is that 
Transaction Control Resource Provider specialisations are explicitly supposed 
to be used for reifying the type of the ResourceProvider. This doesn’t change 
the usage intent of the ResourceProvider, and therefore we don’t change the 
@ConsumerType.

On the other hand the various ResourceProvider Factory services *do* feature in 
the specification in the way that other “provider” services do - there are 
rules about what properties you need to support in the configuration maps etc. 
Also, unlike the ResourceProvider interfaces, it’s possible to implement the 
factory service without implementing other Transaction Control interfaces 
(implementing JDBCConnectionProvider requires implementing ResourceProvider). 
We very much want to avoid coupling which would mean that existing Resource 
Provider implementations would not work with a version 1.1 of the Transaction 
Control specification.

In summary - JDBCConnectionProvider is @ConsumerType because ResourceProvider 
is @ConsumerType. ResourceProvider is @ConsumerType because providing a 
transactional resource is not the same as providing a Transaction Control 
service. JDBCConnectionProviderFactory is @ProviderType because by implementing 
it you are a provider of the standardised configuration factory for 
Transactional JDBC resources.
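To illustrate the consumer side of that split, a user-defined resource provider 
can be written without providing any part of the core specification. A rough 
sketch follows; the AuditLog type is invented for the example, and the exact 
ResourceProvider method shape should be checked against the API you compile 
against:

import org.osgi.service.transaction.control.ResourceProvider;
import org.osgi.service.transaction.control.TransactionControl;
import org.osgi.service.transaction.control.TransactionException;

// A hypothetical transaction-scoped resource type (not part of any spec)
class AuditLog {
    AuditLog(TransactionControl txControl) { /* remember the scope owner */ }
    public void record(String entry) { /* buffer entries until commit */ }
}

// Because ResourceProvider is @ConsumerType, implementing it does not make
// this bundle a provider of the Transaction Control specification
public class AuditLogProvider implements ResourceProvider<AuditLog> {

    @Override
    public AuditLog getResource(TransactionControl txControl)
            throws TransactionException {
        // a real implementation would return a proxy that enlists with
        // txControl.getCurrentContext() whenever it is used inside a scope
        return new AuditLog(txControl);
    }
}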

I hope this makes sense.

All the best,

Tim

> On 24 Oct 2019, at 03:53, Leschke, Scott via osgi-dev 
>  wrote:
> 
> First I’d like to thank everybody for their thoughtful responses. I’m sorry 
> that this response was so long in coming. Unfortunately, the demands of my 
> day-to-day, which is unrelated to anything OSGi, has taken quite a bit of my 
> time lately.
>  
> I found all the responses very helpful but I must say that I found Peter’s 
> response below to be extremely helpful especially the last two paragraphs. I 
> do have a few questions about a specific example taken from the R7 OSGi 
> Compendium spec that I still find confusing though.
>  
> Package org.osgi.service.transaction.control.jdbc provides two interfaces, 
> JDBCConnectionProvider and JDBCConnectionProviderFactory.  The latter is 
> @ProviderType as I’d expect but the former is @ConsumerType.  As a consumer 
> of this API package, my expectation is that I won’t be implementing 
> JDBCConnectionProvider but the API provider, Apache Aries for example, will 
> be so that it should be @ProviderType instead of @ConsumerType since adding 
> methods to this interface won’t impact me as a consumer.
>  
> Am I making any sense? Is it possible for an API to expose an @ConsumerType 
> that isn’t implemented or extended by the consumer role? It would seem not. 
> If that’s the case, is the primary reason the default is @ConsumerType 
> (rather than @ProviderType) because it’s the more conservative choice, rather 
> than the more common one because in my experience, the vast majority of types 
> in an API are implemented by the provider rather than the consumer.
>  
> Thanks so much, I hope I’m not wasting anybody’s time here.
>  
> Scott
>  
> From: Peter Kriens  
> Sent: Thursday, October 17, 2019 12:17 PM
> To: Leschke, Scott ; OSGi Developer Mail List 
> 
> Subject: Re: [osgi-dev] @ConsumerType vs @ProviderType
>  
> It is surprisingly simple. :-)
>  
> Let's assume Oracle adds a new method to `java.nio.file.Path`. Would you 
> care? Unless you work for Azul, you likely couldn't give a rat's ass. Once you 
> use that new method you care, but before that moment it is irrelevant to you. 
> That makes you a _consumer_ of the `java.nio.file` package. Azul and Oracle 
> are, however, _providers_ of this package. 

Re: [osgi-dev] Configurator resources that depend on a ConfigurationPlugin

2019-10-08 Thread Tim Ward via osgi-dev
> My question is, how can I tell the Configurator bundle not to process 
> resources that contain placeholders until my ConfigurationPlugin is up? 

There are ways that you could attempt to do this, however they’re all inelegant 
and error prone. What would make more sense would be for the 
ConfigurationPlugin to detect the existing configurations which contain 
placeholders at startup and trigger an update for them. This will cause the 
configuration to be re-delivered, including any necessary configuration plugin 
execution.
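A rough sketch of that “detect and re-trigger” idea: on activation the plugin 
lists existing configurations and calls update() on those that still contain 
placeholders, so they are re-delivered through the plugin. The placeholder 
convention and component names are invented for the example:

import java.io.IOException;
import java.util.Dictionary;
import java.util.Enumeration;

import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.cm.ConfigurationPlugin;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(property = ConfigurationPlugin.CM_RANKING + ":Integer=100")
public class PlaceholderPlugin implements ConfigurationPlugin {

    @Reference
    ConfigurationAdmin cm;

    @Activate
    void activate() throws IOException, InvalidSyntaxException {
        Configuration[] configs = cm.listConfigurations(null);
        if (configs != null) {
            for (Configuration c : configs) {
                if (containsPlaceholder(c.getProperties())) {
                    // re-save the stored data untouched; this forces Config Admin
                    // to re-deliver the configuration, now via this plugin
                    c.update(c.getProperties());
                }
            }
        }
    }

    @Override
    public void modifyConfiguration(ServiceReference<?> reference,
            Dictionary<String, Object> properties) {
        // replace ${...} placeholders with values from the external store
    }

    private boolean containsPlaceholder(Dictionary<String, Object> props) {
        if (props == null) {
            return false;
        }
        for (Enumeration<String> keys = props.keys(); keys.hasMoreElements();) {
            Object value = props.get(keys.nextElement());
            if (value instanceof String && ((String) value).contains("${")) {
                return true;
            }
        }
        return false;
    }
}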

In general you are better off trying to make things ordering independent rather 
than to control the order that things happen in. The result is a much more 
flexible and stable system.

Best Regards,

Tim

> On 8 Oct 2019, at 12:54, BJ Hargrave via osgi-dev  
> wrote:
> 
> Configuration Plugins mutate configuration data each time it is delivered to 
> a configuration target. So the Configuration Plugin must be active before any 
> configuration targets which care about the mutated configuration data.
>  
> So this is orthogonal to Configurator which is about putting configuration 
> data in the CM configuration data store.
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
>  
>  
> - Original message -
> From: "Clément Delgrange via osgi-dev" 
> Sent by: osgi-dev-boun...@mail.osgi.org
> To: OSGi Developer Mail List 
> Cc:
> Subject: [EXTERNAL] [osgi-dev] Configurator resources that depend on a 
> ConfigurationPlugin
> Date: Tue, Oct 8, 2019 06:08
>  
> Hi all,
>  
> I have a question regarding the Configurator and the ConfigurationPlugin 
> spec. I would like to provision my application with configurations as I do 
> with my bundles; for this the Configurator seems perfect. But the values 
> inside my configurations could be different depending on the environment 
> (dev, beta, prod, ...) and my configurations may contain sensitive data that 
> I don't want in my Git repo. In this case I think I could provide a 
> ConfigurationPlugin which will replace placeholders with data coming from a 
> database.
>  
> My question is, how can I tell the Configurator bundle not to process 
> resources that contain placeholders until my ConfigurationPlugin is up?
>  
> Thanks,
>  
> Clément Delgrange.
>  
>  
>  
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>  
>  
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Deactivating a component manually

2019-09-23 Thread Tim Ward via osgi-dev
Hi Alain,

This sounds exactly like the problem that Open Security Controller had when 
using Vaadin. They wanted their various UI types to be DS components so that 
they could use various services to talk to the rest of the system. Obviously 
the UI types need to be prototype scope because each user session needs to have 
a different instance. The UI type instances then get “disposed” by the UI when 
the user navigates away or closes their browser. This leaves a lifecycle hole 
because the UI type instance then needs to be “released” from the OSGi Service 
Reference that created it.

The way that they solved the problem was as follows:

1. A singleton top-level DS component was created to be a factory for all user 
UI sessions. This forms the entry into the prototype lifecycle UI instances.
2. A “wrapper” was created to fit into the Vaadin UI. This allowed a DS 
component to wrap a ComponentServiceObjects into a UI factory type which could 
be passed to the UI framework. Each UI view can add lots of these.
3. When the wrapper creates an instance a “detach listener” is added which 
releases the service when the UI instance is disposed.
4. The UI framework disposes of instances, which releases them from the 
ComponentServiceObjects. If a high level component is disposed then all of the 
other instances it created are too.

The end result is that the prototype scope instances are automatically released 
when the UI disposes them. This will also run any necessary deactivation code 
in the DS component instance being released.
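A condensed sketch of that wrapper pattern, assuming a Vaadin 8 style API; 
MainUI stands in for a prototype-scope DS component extending com.vaadin.ui.UI:

import org.osgi.service.component.ComponentServiceObjects;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceScope;

@Component(service = MainUIFactory.class)
public class MainUIFactory {

    @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
    private ComponentServiceObjects<MainUI> uiFactory;

    public MainUI create() {
        MainUI ui = uiFactory.getService();
        // release the prototype instance when Vaadin disposes the UI;
        // this also runs the component's @Deactivate method
        ui.addDetachListener(event -> uiFactory.ungetService(ui));
        return ui;
    }
}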

Hopefully this makes sense to you, and might provide a route forward.

All the best,

Tim Ward

> On 22 Sep 2019, at 10:43, Alain Picard via osgi-dev  
> wrote:
> 
> No, we had this issue before we (just) upgraded to 2.1.14 and had a number of 
> workarounds in our code to cover that. 
> 
> Here really the issue is that the trigger to "destroy" comes from outside and 
> we need to inform SCR that the component is/should be deactivated.
> 
> Alain
> 
> 
> On Sat, Sep 21, 2019 at 5:22 PM Raymond Auge via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> I'm wondering if you might be suffering from this Apache Felix SCR bug: 
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/FELIX-5974 
> 
> 
> - Ray
> 
> On Sat, Sep 21, 2019, 11:53 Alain Picard via osgi-dev, 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Ray,
> 
> The service being "destroyed" is BaViewPointsViewModel and you can see that 
> ViewpointsViewModeTabViewModel has an instance of it. So I want to make sure 
> that it is correctly released so that SCR can then deactivate 
> ViewpointsViewModeTabViewModel and whatever else depended on 
> ViewpointsViewModeTabViewModel and on and on.
> 
> Alain
> 
> 
> On Sat, Sep 21, 2019 at 10:34 AM Raymond Auge via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Hey Alain,
> 
> Just trying to understand the use case better, so a couple questions:
> 
> Since your component is prototype scope, and if no one has any instances of 
> it why bother disabling it, isn't it effectively only a fairly inert service 
> reference at that point?
> Are you saying that when released as a prototype instance, it should never be 
> used again, ever?
> Perhaps the service you described above could be a factory for instances of 
> `ZkViewModel, BaItem, MasterDetailTopMenuListener` instead of being one 
> itself.
> 
> - Ray
> 
> On Sat, Sep 21, 2019 at 5:05 AM Alain Picard via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> I'm facing a case where the UI framework is sending a destroy request when a 
> page is destroyed and I want to use that to also deactivate the component, so 
> that its "host" can then automatically get deactivated and so on and so forth 
> as needed. 
> 
> As shown below I tried to use disableComponent. That results in some errors 
> as it runs under [SCR Component Actor] thread that is not session aware and 
> also looking at the stack trace it seems to be deactivating the full user 
> session, which is not what I'm expecting.
> 
> So am I deactivating correctly here, how can I make sure this runs in a 
> session aware thread (as I don't control here this separate thread 
> launch/run) and is there a utility to better understand service instance 
> dependencies that would allow tracking the impact of a deactivation.
> 
> 

Re: [osgi-dev] Enroute Tutorial - generating indexes

2019-08-22 Thread Tim Ward via osgi-dev
Hi Kevin,

Have you installed Bndtools? If not then it is highly recommended to make your 
development more productive. You can see how it is used in 
https://enroute.osgi.org/tutorial/020-tutorial_qs.html#resolving-the-application
 

 - you can also drag and drop bundles from the repository view into the run 
requirements.
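For reference, the only thing you normally maintain by hand is the top-level 
requirement in the bndrun; -runbundles is then calculated for you. A minimal 
sketch using the application identity from the error output quoted below:

# application.bndrun (sketch)
-runrequires: \
    osgi.identity;filter:='(osgi.identity=com.abc.service.app)'

# -runbundles is written out by the resolver: the Resolve button in Bndtools,
# or the bnd resolver Maven/Gradle plugins on the command line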

I hope this helps,

Best Regards,

Tim

> On 22 Aug 2019, at 17:20, Matthews, Kevin via osgi-dev 
>  wrote:
> 
> Hello,
> I am using eclipse and the osgi archetypes as defined in the osgi enroute tutorial. 
> When I run mvn package to generate my indexes and resolve requirement 
> capabilities, using both eclipse and windows cmd, is there an easier way to 
> add the required dependencies to the run bundles and run requirements? Or do 
> we have to manually look at all the dependencies and add them to my run bundles?
>  
>  
> [ERROR] Resolution failed. Capabilities satisfying the following requirements 
> could not be found:
> [<<INITIAL>>]
>   ⇒ osgi.identity: (osgi.identity=com.abc.service.app)
> [org.apache.aries.jpa.container version=2.7.0]
>   ⇒ osgi.service: (objectClass=javax.persistence.spi.PersistenceProvider)
> The following requirements are optional:
> [tx-control-provider-jdbc-xa version=1.0.0]
>   ⇒ osgi.service: (objectClass=org.osgi.service.jdbc.DataSourceFactory)
> [org.apache.aries.jax.rs.whiteboard version=1.0.1]
>   ⇒ osgi.extender: (osgi.extender=osgi.serviceloader.registrar)
> [com.abc.acm.cc.cm-service version=0.0.1.201908221510]
>   ⇒ osgi.service: (osgi.jaxrs.media.type=application/json)
> [org.apache.felix.scr version=2.1.10]
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.apache.felix.shell)(&(version>=1.0.0)(!(version>=1.1.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.apache.felix.service.command)(&(version>=1.0.0)(!(version>=2.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.osgi.service.metatype)(&(version>=1.2.0)(!(version>=2.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.osgi.service.cm)(&(version>=1.6.0)(!(version>=2.0.0
> [org.apache.geronimo.specs.geronimo-saaj_1.3_spec version=1.1.0]
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.apache.geronimo.osgi.registry.api))
> [ch.qos.logback.core version=1.2.3]
>   ⇒ osgi.wiring.package: (&(osgi.wiring.package=org.codehaus.janino))
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.commons.compiler))
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.fusesource.jansi)(&(version>=1.9.0)(!(version>=2.0.0
>   ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.mail.internet))
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=javax.servlet)(&(version>=3.1.0)(!(version>=4.0.0
>   ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.mail))
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=javax.servlet.http)(&(version>=3.1.0)(!(version>=4.0.0
> [tx-control-provider-jpa-xa version=1.0.0]
>   ⇒ osgi.service: (objectClass=org.osgi.service.jdbc.DataSourceFactory)
>   ⇒ osgi.service: 
> (objectClass=org.osgi.service.jpa.EntityManagerFactoryBuilder)
> [org.apache.felix.configadmin version=1.9.8]
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.osgi.service.coordinator)(&(version>=1.0.0)(!(version>=2.0.0
> [ch.qos.logback.classic version=1.2.3]
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.runtime.callsite)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.runtime.wrappers)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: (&(osgi.wiring.package=sun.reflect))
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=javax.servlet)(&(version>=3.1.0)(!(version>=4.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.reflection)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.runtime.typehandling)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.runtime)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=groovy.lang)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.control.customizers)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.control)(&(version>=2.4.0)(!(version>=3.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=javax.servlet.http)(&(version>=3.1.0)(!(version>=4.0.0
>   ⇒ osgi.wiring.package: 
> (&(osgi.wiring.package=org.codehaus.groovy.transform)(&(version>=2.4.0)(!(version>=3.0.0
> [org.apache.felix.http.jetty version=4.0.6]
>

Re: [osgi-dev] Migrating a service provider interface to OSGi that's similar to SPI

2019-08-22 Thread Tim Ward via osgi-dev
Hello,

When you say that each client needs a different service instance, would a 
bundle need more than one instance (i.e. would a ServiceFactory be sufficient 
or does it need to be a PrototypeServiceFactory)?

Also, where are the Strings in the String array coming from? If those are 
passed in by the client then you may need an intermediate “factory service” 
from which the bundles request instances of the provider.

Otherwise, it sounds as though this would be pretty simple to achieve. The 
ServiceFactory gives you access to the requesting bundle, which in turn gives 
you the class loader that you need.
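A rough sketch of that ServiceFactory approach: the requesting bundle's wiring 
supplies the class loader, and MyProvider stands in for the existing SPI 
implementation with its (String[], ClassLoader) constructor:

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;
import org.osgi.framework.wiring.BundleWiring;

public class ProviderActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        context.registerService(MyProvider.class, new ServiceFactory<MyProvider>() {

            @Override
            public MyProvider getService(Bundle bundle,
                    ServiceRegistration<MyProvider> registration) {
                // one instance per requesting bundle, constructed with that
                // bundle's class loader so it can resolve client classes
                ClassLoader clientLoader =
                        bundle.adapt(BundleWiring.class).getClassLoader();
                return new MyProvider(new String[] { "default" }, clientLoader);
            }

            @Override
            public void ungetService(Bundle bundle,
                    ServiceRegistration<MyProvider> registration,
                    MyProvider service) {
                // clean up any per-bundle state here
            }
        }, null);
    }

    @Override
    public void stop(BundleContext context) {
        // the registration is cleaned up automatically when this bundle stops
    }
}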

All the best,

Tim 

> On 22 Aug 2019, at 05:58, Peter Firmstone via osgi-dev 
>  wrote:
> 
> Hello,
> 
> I'm trying to migrate a custom provider interface to register OSGi services.
> 
> This is a pre-existing implementation, functionally similar to Java's SPI, 
> with one caveat: it doesn't use a zero-arg constructor.
> 
> The constructor has two arguments: an array of strings and a ClassLoader.
> 
> Otherwise for all intents and purposes, it's an SPI, implementing services 
> use a META-INF services file.
> 
> The interface for the service exists and has implementations.
> 
> The ClassLoader passed in is used to resolve classes from the service client.
> 
> It doesn't use the Java SPI mechanism, so we have access to the code that 
> creates the service.
> 
> Each client will require a different Service instance.
> 
> I was thinking something like a Service Factory might do the job, any 
> thoughts or advice?
> 
> Thanks in adv,
> 
> Peter.
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Using Gradle or Maven on a new OSGi project

2019-07-25 Thread Tim Ward via osgi-dev
Interestingly this is the opposite conclusion that most people come to. Until 
recently Bndtools did not support Maven at all and was 100% Gradle. There has 
been a lot of work to bring Maven support up to the same level as Gradle by the 
team, but I don’t think that many of us would say that Maven support was at 
parity yet, let alone better.

You absolutely do get live code deployment when using Bndtools + Gradle (Maven 
only recently got this feature and Gradle has had it for years). Live 
baselining in Eclipse is still only available with Gradle, as are the 
quick-fixes for lots of bnd-detected problems. 

You are correct that IntelliJ is more Maven-focussed, but that is because it 
doesn’t have additional plugins like Bndtools, so you’re just getting the 
support they have for Maven.

When it comes to Karaf, that isn’t really part of Bndtools. The Karaf project 
has always been heavily Maven-based, and so if you want to use their tools then 
Maven is probably the way to go.

All the best,

Tim

> On 25 Jul 2019, at 14:31, Stephen Schaub via osgi-dev 
>  wrote:
> 
> A brief follow-up to this thread, after another month into my project:
> 
> I have found that although Gradle will work fine as a build tool for OSGi, it 
> does seem that Maven is better supported for OSGi development in Eclipse. For 
> example, the Eclipse bndtools plugins support live code deployment if you're 
> using Maven, but not Gradle. I have also seen a post describing doing live 
> code deployment from IDEA that requires Maven. So, I conclude that Maven is 
> definitely preferred over Gradle when it comes to OSGi IDE tooling.
> 
> Also, although there is a Gradle plugin for generating kar archives for 
> Karaf, I have encountered issues using it with current versions of Gradle.
> 
> Finally, many OSGi examples I find online seem to be using Maven rather than 
> Gradle as the build tool.
> 
> These issues have not caused me to abandon Gradle, because I prefer it to 
> Maven, and I am grateful that the bnd project continues to have great support 
> for Gradle. However, overall, I am left with the impression that there is 
> better support for Maven than for Gradle in the broader OSGi ecosystem. 
> 
> Stephen
> 
> On Tue, Jun 25, 2019 at 10:11 AM Stephen Schaub  > wrote:
> Thanks to all for the helpful responses. I was concerned about using Gradle 
> as a build tool because so many OSGi resources I was finding seemed to be 
> using Maven, and the change of enRoute docs from Gradle to Maven seemed to 
> communicate a move away from Gradle as a "preferred" build tool. But given 
> that Maven still seems to be the dominant build tool in the Java world, I can 
> understand the rationale for transitioning enRoute from Gradle to Maven. 
> Also, I can understand that maintaining both Maven and Gradle versions of 
> enRoute would be a burden. 
> 
> Stephen
> 
> On Mon, Jun 24, 2019 at 4:28 PM Stephen Schaub  > wrote:
> I'm new to OSGi and am starting a project. I found the enRoute material and 
> noticed that the enRoute tutorials apparently at one time utilized Gradle as 
> the build tool, but are now using Maven. 
> 
> I'm more familiar with Gradle and have worked out how to use Gradle to do 
> what I need for the project, but I was wondering 1) why the switch from 
> Gradle to Maven for enRoute and 2) is Maven the preferred build tool for OSGi 
> going forward? Is there a reason I should consider switching to Maven?
> 
> I've poked through the mailing list archives trying to find answers to these 
> questions but can't seem to find a record of any discussions about this, so 
> am hoping someone can shed some light for me.
> 
> -- 
> Stephen Schaub
> 
> 
> 
> 
> -- 
> Stephen Schaub
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] OSGi book recommendations

2019-07-25 Thread Tim Ward via osgi-dev
As far as English Language books go, I’m not aware of anything that fits the 
bill. Enterprise OSGi in Action is probably the most “up to date” of the books, 
but it uses Blueprint. OSGi in Depth mostly focuses on the low-level APIs 
(which I would definitely not recommend using), OSGi in Action uses Declarative 
Services, but pre-dates the annotations. 

The Spring-centric books are probably best avoided at this point as Spring DM 
server hasn’t existed for some time.

If you find anything useful then do let me know.

Tim


> On 25 Jul 2019, at 14:38, Stephen Schaub via osgi-dev 
>  wrote:
> 
> I'm looking for a recent book on OSGi to recommend to new OSGi developers. 
> Something that takes a Declarative Services annotation approach from the 
> beginning, and uses current recommended tools and best practices.
> 
> Most of the books on the OSGi recommended books list seem to be several years 
> old:
> 
> https://www.osgi.org/developer/resources/books/ 
>  
> 
> I saw that Neil Bartlett was starting a new book titled Effective OSGi a few 
> years ago, but don't see that it's out yet.
> 
> Any recommendations?
> 
> Stephen
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-20 Thread Tim Ward via osgi-dev
Right, so let's look at these one by one:

>  "The importer's version range matches the exporter's version. See Semantic 
> Versioning 
> .”
>  The reference to semantic versioning makes the content of that section part 
> of the specification of the resolving process. But as you and others have 
> pointed out, build/packaging and resolving are "very separate steps". How to 
> apply exactly the information from section "Semantic Versioning" to the 
> resolving process (i.e. to matching the importer's and exporter's versions) 
> is left open to interpretation. Reading the specification, I was unsure if 
> the recommendation to ignore the micro part should be applied to the 
> resolving process. This caused a lot of confusion on my side. Unless 
> everybody agrees that reading the specification in this way is my fault only 
> and the intended meaning is obvious to everybody else -- even without a 
> decade of OSGi experience -- I think there is room for improvement.
> 
So I think your confusion here comes from the difference between “normative” 
statements, which lay out the strict rules that must be followed, and 
“recommendations” (usually following the word “should”). A normative definition 
of version range syntax and meaning is available in Version Ranges 
.
 This strictly defines what version ranges are, including what matches and what 
doesn’t. You will note that micro versions are *never* ignored. If your version 
range puts a micro version into its floor or ceiling then that defines the 
matching space.

The Semantic Versioning section starts with the following statement: 

"Though the OSGi frameworks do not enforce a specific encoding for a 
compatibility policy, it is strongly recommended to use the following 
semantics."

This is the first big hint to say “this is a non-normative part of the spec, 
it’s useful information but doesn’t restrict what you do”. The end of the 
section goes on to recommend:

"Both consumers and providers should use the version they are compiled against 
as their base version. It is recommended to ignore the micro part of the 
version because systems tend to become very rigid if they require the latest 
bug fix to be deployed all the time. For example, when compiled against version 
4.2.1.V201007221030, the base version should be 4.2.

A consumer of an API should therefore import a range that starts with the base 
version and ends with the next major change, for example:[4.2,5). A provider of 
an API should import a range that starts with the base version up to the next 
minor change, for example:[4.2,4.3)."

This is exactly the default that bnd provides when packaging your bundle. It tries 
to find the lowest version bundle to compile against, then sets the version 
range in your Import-Package statement appropriately. This default behaviour is 
what you need to override to get a minimum import version including a micro 
version.
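For illustration, overriding that default in bnd so the generated import keeps 
the micro version looks something like this; the package name is a placeholder 
for the packages exported by bundle A in the example discussed below:

# bnd.bnd (sketch) - pin the import floor to the bug-fixed micro version
Import-Package: \
    com.example.a;version="[2.0.3,3)",\
    *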

> Applying the information from the section "Semantic Versioning" to the 
> build/packaging process, I stumble about the recommendation "to ignore the 
> micro part of the version because systems tend to become very rigid if they 
> require the latest bug fix to be deployed all the time." Sorry for being 
> obstinate, but when you read this and have my use case in mind (trying to 
> make sure that the resolver rules out everything below a specific bug fix 
> version) this recommendation simply does not make sense. And there is no 
> mentioning that specifying a lower bound with a micro part greater zero might 
> make sense (as everybody seems to be able to agree upon -- unless we are 
> talking about the special case of a bundle that only contains interfaces and 
> DTOs). Specifying a lower bound with a micro part effectively adds to the 
> information about two bundle's (their packages' to be precise) API 
> compatibility a "hint" for the resolver to only consider a subset of (API 
> compatible) providing bundles. This limits the flexibility (the resolver is 
> no longer allowed to compute a result with the buggy version(s) of the 
> referenced packages) but this is a limitation I intend. Of course, the spec 
> only "recommends" ignoring the micro part. But if a specification provides a 
> recommendation, it should also mention the scope to which it applies. And it 
> doesn't apply to my use case, which I -- obviously -- wouldn't consider 
> "exotic".
> 

The key part of this is that "Of course, the spec only "recommends" ignoring 
the micro part”. You are free to ignore recommendations when they don’t fit 
what you need. The expectation that a micro version greater than zero is a 
valid thing to do is implicit because:
- The version range syntax explicitly includes it and describes how to do it
- The Semantic Versioning section recommends, rather than requires, setting 

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-18 Thread Tim Ward via osgi-dev
Michael, 

I promise that we are trying to help, but there’s clearly quite a bit of 
confusion here. I’ll try to take the questions individually, and we can attempt 
to separate your questions about build/packaging (i.e. creating your bundle) 
and resolving (i.e. creating your deployment). These are two very separate 
steps.


> There aren't any more details. I was seeing a resolver result that included 
> bundle A-2.0.1 (which exports packages with version 2.0.1) although there was 
> another bundle in the same result set that had Import-Package statements with 
> versions [2.0.3,3) for the packages provided by bundle A.
> 
Ok, did the resolve result also include A-2.0.3, or perhaps some other bundle 
providing the packages from A in the range [2.0.3,3)? 

If not then this is a serious bug because the resolver should never return an 
“invalid” result (i.e. one which can’t resolve when installed in an OSGi 
framework). The only way that B will resolve is if someone provides the package 
A in the range [2.0.3,3).

If the resolve result *did* also include A-2.0.3 then the question is why did 
it *also* include A-2.0.1 - there could be a lot of reasons for this, some of 
which mean that it *must* be deployed (something else with a tight micro range 
on its import). Failing that we do our best to avoid duplication (of symbolic 
name) in the results, but the resolution process is complex and it can be hard 
to eliminate valid (i.e. it would deploy and resolve) but “unexpected” (it 
doesn’t contain what you think it should, or contains more than you think it 
should) results. In these cases it’s usually best to make sure that stuff you 
don’t want to deploy is removed from your repository, which makes it impossible 
for the resolver to pick them.

> I wondered whether this was to be expected or a bug.
> 

That really depends on the specifics of what you saw. If deploying the resolved 
list of bundles into a framework fails to resolve then it is definitely a bug. 
Most (but not all) of the other options boil down to errors in usage, the fact 
that the application really does need the extra bundle (despite what you may 
believe), or more rarely (because this is tested and used quite a lot) bugs in 
the resolver.

The only way we can be sure which of these scenarios is happening is to see the 
repository and the bndrun. This is why we keep asking if you can show them to 
us so we can help to work out whether what you’re seeing is “correct”, “correct 
but suboptimal”, or just plain “wrong”.

> If I know some bundle (one of its exported packages to be precise) to be 
> buggy up to (and including) version x.y.z then why should I not explicitly 
> reference x.y.z+1 (i.e. the first fixed version) as a requirement for the 
> Import-Package?
> 
You absolutely can have a dependency on a micro version, but this isn’t a 
resolver issue, it’s a build/packaging issue. That’s the point at which you 
define the version range for your import package. If you know that you need a 
specific bug fix package then by all means put that as your base version. The 
recommendation to avoid micro versions in the import range still applies though 
as tight restrictions limit deployment flexibility. You obviously understand 
this on some level because you haven’t been trying to limit the upper version 
to A-2.0.4, but instead to A-3.
> Why doesn't it make sense to deploy the latest bug fix all the time? Why 
> should I deploy an older version that is known to show bugs?
> 

As Peter said back down the mail chain, it makes sense to *compile* against the 
lowest version that will work and *deploy* the highest version that will work. 
This ensures that your bundles will work in the maximum number of deployment 
scenarios, while still getting the bug fixes.

I haven’t seen anyone say that you shouldn’t be deploying the latest bug fix, 
and the bnd resolver prefers higher versions over lower versions where it has a 
choice. 

I hope this helps,

Tim


> On 18 Jun 2019, at 16:17, Michael Lipp via osgi-dev  
> wrote:
> 
> 
>>  
>> Can we please step back to the beginning and describe, in much more detail, 
>> the issue you are seeing? I never understood the actual issue you seemed to 
>> be having.
> There aren't any more details. I was seeing a resolver result that included 
> bundle A-2.0.1 (which exports packages with version 2.0.1) although there was 
> another bundle in the same result set that had Import-Package statements with 
> versions [2.0.3,3) for the packages provided by bundle A.
> 
> I wondered whether this was to be expected or a bug. The resolver process 
> description in the spec says "The importer's version range matches the 
> exporter's version. See Semantic Versioning."
> [https://osgi.org/specification/osgi.core/7.0.0/framework.module.html#framework.module.resolvingprocess]

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-18 Thread Tim Ward via osgi-dev
>> Considering this, lowering a lower bound of an Import-Package statement when 
>> resolving should be acknowledged as a bug. 
>> 

Bnd does not alter the version used when resolving, as this would give 
“incorrect” answers that don’t resolve. The resolver must use the exact 
metadata from the bundle manifest, as that it what the framework uses at 
runtime. 

What Peter said is that bnd ignores the micro version at *build* time when 
*generating* your manifest. This is why one of the first things that I asked 
was “are you sure that your package import is for the micro version you want”. 
It is possible to make bnd apply micro versions in its generated imports, but 
it isn’t the default. My guess is that your bundle doesn’t really require what 
you think it requires.

If your bundle is built with a range that doesn’t include micro version numbers 
in its imports then there is no restriction for the resolver to use in 
deciding. At that point the bnd resolver heuristics will kick in to *try* to 
give you the highest version that satisfies your dependencies, but it may not 
be possible in all cases (this goes back to my well-curated repository 
argument). If you are able to share the workspace, or even just the repository 
index + your initial requirements, then we could probably attempt to tell you why 
it’s picking what it’s picking. It’s non-trivial to reverse engineer though.

All the best,

Tim

> On 18 Jun 2019, at 09:54, Michael Lipp via osgi-dev  
> wrote:
> 
> 
>>> Considering this, lowering a lower bound of an Import-Package statement 
>>> when resolving should be acknowledged as a bug. 
>>> 
>> I beg to differ ...
>> 
>> As said, you can set the consumer/provider policy to your desired strategy.
>> 
> So having default settings in the tool that cause a behavior that does not 
> comply with the specification should not be considered a bug?
> 
>  - Michael
> 
> 
> 
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>>> On 18 Jun 2019, at 10:33, Michael Lipp mailto:m...@mnl.de>> 
>>> wrote:
>>> 
>>> 
 
 I expect there are two things at play. First, OSGi specifies things as you 
 indicate. An import of [1.2.3.qualifier,2) must not select anything lower 
 than 1.2.3.qualifier. Second, bnd does have heuristics that do drop the 
 qualifier and micro part in calculating the import ranges from the exports 
 on the class path.
>>> Thanks for the clarification, I think this explains things.
>>> 
 [...]
 
 Conclusion, the spec is perfect but the implementations apply heuristics 
 and may have bugs.
>>> The specification says (or defines, if you like): "micro - A change that 
>>> does not affect the API, for example, a typo in a comment or a bug fix in 
>>> an implementation." It explicitly invites the developer to indicate a bug 
>>> fix by incrementing the micro part. There's no hint or requirement that he 
>>> should increment the minor part to reflect a bug fix. I do not find your 
>>> statement "The definition of the micro version is that it should not make a 
>>> difference in runtime" to be supported by the spec or the Semantic 
>>> Versioning Whitepaper. Actually, this interpretation would restrict the 
>>> usage of the micro part to documentation changes because every bug fix 
>>> changes the runtime behavior. This is, after all, what it is intended to do.
>>> 
>>> Considering this, lowering a lower bound of an Import-Package statement 
>>> when resolving should be acknowledged as a bug. 
>>> 
>>>  - Michael
>>> 
>>> 
>>> 
 
 Kind regards,
 
Peter Kriens
 
> On 17 Jun 2019, at 12:14, Michael Lipp via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> 
> Hi,
> 
> I have in my repository a bundle A-2.0.1 that exports packages with
> version 2.0.1 and a bundle A-2.0.3 that exports these packages with
> version 2.0.3. Version A-2.0.3 fixes a bug.
> 
> I have a bundle B that imports the packages from A with import
> statements "... version=[2.0.3,3)" because the bug fix is crucial for
> the proper working of B.
> 
> Clicking on "Resolve" in bndtools, I get a resolution with bundle
> A-2.0.1. I understand that this complies with the specification ("It is
> recommended to ignore the micro part of the version because systems tend
> to become very rigid if they require the latest bug fix to be deployed
> all the time.").
> 
> What I don't understand is the rationale. I don't see any drawbacks in
> deploying the latest bug fix. Of course, there's always the risk of
> introducing a new bug with a new version, even if it is supposed to only
> fix a bug in the previous version. But if you're afraid of this, you may
> also not allow imports with version ranges such as "[1.0,2)" (for
> consumers).
> 
> In my case, I now have to distribute bundle B with a release note to
> configure the resolution in such a way that only A 2.0.3 and up is used.

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-17 Thread Tim Ward via osgi-dev
Hi

> I may have expressed myself badly. Nothing goes wrong, everything works
> according to the specification, which states that the micro part of the
> version should be ignored when resolving bundles.

….

> If OSGI didn't specify that the micro part should be dropped, then
> everything would be fine. Resolution would only be possible if at least
> version 2.0.3 of "A" was available. So, why do we have this
> "unreasonable" specification?

So while the semantic versioning recommendations do say that “the micro version 
is meaningless” when it comes to backward compatibility there isn’t anywhere 
that says a micro version should be ignored when resolving. In fact the OSGi 
framework *must not* ignore the micro version. If you say that you need 
"[1.2.3,1.2.4)” then you will get 1.2.3.

This is why I’m saying that your “unexpected” resolution is an indication that 
something is wrong, either in your inputs, or (less likely) in bnd somewhere.

Tim


> On 17 Jun 2019, at 23:14, Michael Lipp via osgi-dev  
> wrote:
> 
> Hi Tim,
> 
> I may have expressed myself badly. Nothing goes wrong, everything works
> according to the specification, which states that the micro part of the
> version should be ignored when resolving bundles.
> 
> My question is whether anybody can explain why the behavior was
> specified in such an "unreasonable" way. I tried to explain why I
> consider the behavior "unreasonable": if I author and deliver a bundle
> ("B" in my Mail) and I know that it only works with a specific
> "bug-fixed" version of another bundle ("A-2.0.3") then there is no way
> to enforce that this version is used by my customer.
> 
> "Import-Package" with "version=[2.0.3,3)" does not help, because the
> micro part is specified to be ignored. So effectively, requiring
> "version=[2.0.3,3)" is just like requiring "version=[2.0,3)" which makes
> the buggy versions "A-2.0.0" to "A-2.0.2" candidates for the resolution.
> 
> Of course, I can state in the release notes of bundle "B" that a user of
> the bundle must take care to not have a version below 2.0.3 in his
> repository. But honestly, if the user happens to already have e.g.
> version A-2.0.1 in his repository due to a requirement that existed
> before adding my bundle to his application, and he adds my bundle and
> everything seems to work fine, how probable is it that he reads the
> release notes when --maybe days later-- something goes wrong because of
> the bug in A-2.0.1?
> 
> If OSGI didn't specify that the micro part should be dropped, then
> everything would be fine. Resolution would only be possible if at least
> version 2.0.3 of "A" was available. So, why do we have this
> "unreasonable" specification?
> 
>  - Michael
> 
> 
> On 17.06.19 at 15:17, Tim Ward wrote:
>> Hi Michael,
>> 
>> I’m afraid that there’s quite a lot of missing information before I could 
>> come to a conclusion about what’s going on. What are your input 
>> requirements? Have you checked that B actually has the version range that 
>> you think it does? Are there two versions of A being deployed? If it’s 
>> possible to share the workspace then we might be able to bottom out what’s 
>> happening.
>> 
>> Also, if 2.0.1 of A is known to be broken then why do you have it in the 
>> repository that you are resolving against? The best defence against “bad” 
>> resolutions is to have a well curated repository. As with many things 
>> garbage in == garbage out.
>> 
>> All the best,
>> 
>> Tim
>> 
>>> On 17 Jun 2019, at 11:14, Michael Lipp via osgi-dev 
>>>  wrote:
>>> 
>>> Hi,
>>> 
>>> I have in my repository a bundle A-2.0.1 that exports packages with
>>> version 2.0.1 and a bundle A-2.0.3 that exports these packages with
>>> version 2.0.3. Version A-2.0.3 fixes a bug.
>>> 
>>> I have a bundle B that imports the packages from A with import
>>> statements "... version=[2.0.3,3)" because the bug fix is crucial for
>>> the proper working of B.
>>> 
>>> Clicking on "Resolve" in bndtools, I get a resolution with bundle
>>> A-2.0.1. I understand that this complies with the specification ("It is
>>> recommended to ignore the micro part of the version because systems tend
>>> to become very rigid if they require the latest bug fix to be deployed
>>> all the time.").
>>> 
>>> What I don't understand is the rationale. I don't see any drawbacks in
>>> deploying the latest bug fix. Of course, there's always the risk of
>>> introducing a new bug with a new version, even if it is supposed to only
>>> fix a bug in the previous version. But if you're afraid of this, you may
>>> also not allow imports with version ranges such as "[1.0,2)" (for
>>> consumers).
>>> 
>>> In my case, I now have to distribute bundle B with a release note to
>>> configure the resolution in such a way that only A 2.0.3 and up is used.
>>> Something that you would expect to happen automatically looking at the
>>> import statement. And if I want to make sure that the release note is
>>> not overlooked, the only way seems to be to 

Re: [osgi-dev] Micro version ignored when resolving, rationale?

2019-06-17 Thread Tim Ward via osgi-dev
Hi Michael,

I’m afraid that there’s quite a lot of missing information before I could come 
to a conclusion about what’s going on. What are your input requirements? Have 
you checked that B actually has the version range that you think it does? Are 
there two versions of A being deployed? If it’s possible to share the workspace 
then we might be able to bottom out what’s happening.

Also, if 2.0.1 of A is known to be broken then why do you have it in the 
repository that you are resolving against? The best defence against “bad” 
resolutions is to have a well curated repository. As with many things garbage 
in == garbage out.

All the best,

Tim

> On 17 Jun 2019, at 11:14, Michael Lipp via osgi-dev  
> wrote:
> 
> Hi,
> 
> I have in my repository a bundle A-2.0.1 that exports packages with
> version 2.0.1 and a bundle A-2.0.3 that exports these packages with
> version 2.0.3. Version A-2.0.3 fixes a bug.
> 
> I have a bundle B that imports the packages from A with import
> statements "... version=[2.0.3,3)" because the bug fix is crucial for
> the proper working of B.
> 
> Clicking on "Resolve" in bndtools, I get a resolution with bundle
> A-2.0.1. I understand that this complies with the specification ("It is
> recommended to ignore the micro part of the version because systems tend
> to become very rigid if they require the latest bug fix to be deployed
> all the time.").
> 
> What I don't understand is the rationale. I don't see any drawbacks in
> deploying the latest bug fix. Of course, there's always the risk of
> introducing a new bug with a new version, even if it is supposed to only
> fix a bug in the previous version. But if you're afraid of this, you may
> also not allow imports with version ranges such as "[1.0,2)" (for
> consumers).
> 
> In my case, I now have to distribute bundle B with a release note to
> configure the resolution in such a way that only A 2.0.3 and up is used.
> Something that you would expect to happen automatically looking at the
> import statement. And if I want to make sure that the release note is
> not overlooked, the only way seems to be to check the version of "A" at
> run-time in the activation of "B". This is downright ugly.
> 
>  - Michael
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Multiple bind attempts to 8080

2019-03-25 Thread Tim Ward via osgi-dev
There are definitely two different versions of Jetty trying to start up in your 
runtime:

This one:

> 03/23/2019 09:41:58.645|INFO |main|Started 
> ServerConnector@77a074b4{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
> 
> [INFO] Started Jetty 9.4.9.v20180320 at port(s) HTTP:8080 on context path / 
> [minThreads=8,maxThreads=200,acceptors=1,selectors=6]
> 

And this one:
> [DEBUG] Adding bundle org.apache.felix.http.jetty:4.0.6 (49) : starting
> 
> 03/23/2019 09:41:59.029|INFO |main|jetty-9.4.14.v20181114; built: 
> 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 
> 1.8.0_192-b12
> 




> On 23 Mar 2019, at 14:34, Raymond Auge via osgi-dev  
> wrote:
> 
> 
> In the second case, could the runbundles contain duplicate felix jetty 
> bundles (like of different versions)?
> 
> - Ray
> 
> On Sat, Mar 23, 2019 at 9:58 AM jhrayburn--- via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> I have multiple sets of bundles for different capabilities.
> 
>  
> 
> AuthApp
> 
> AuthProvider
> 
> AuthProviderApi
> 
> AuthPersistence
> 
> AuthPersistenceApi
> 
> AuthRestService
> 
>  
> 
> ResApp
> 
> RestProvider
> 
> RestProviderApi
> 
> RestPersistence
> 
> RestPersistenceApi
> 
> ResRestService
> 
>  
> 
> The Auth bundles are independent of the Res bundles. When I Run the AuthApp,
> 
>  
> 
> -runfw: org.apache.felix.framework;version='[6.0.0,6.0.0]'
> 
> -runee: JavaSE-1.8
> 
> -runprovidedcapabilities: ${native_capability}
> 
>  
> 
> -resolve.effective: active
> 
>  
> 
> -runvm: -ea, -Xms10m, -Dlogback.configurationFile=resources/logback.xml
> 
>  
> 
> -runproperties: org.osgi.service.http.port=8080
> 
>  
> 
> -runrequires: \
> 
>bnd.identity;id='AuthApp'
> 
> ...
> 
>  
> 
> The runs without any issue and connection can be made to the rest endpoints
> 
>  
> 
> When I run the ResApp, with the following source
> 
>  
> 
> -runfw: org.apache.felix.framework;version='[6.0.0,6.0.0]'
> 
> -runee: JavaSE-1.8
> 
> -runprovidedcapabilities: ${native_capability}
> 
>  
> 
> -resolve.effective: active
> 
>  
> 
> -runvm: -ea, -Xms10m, -Dlogback.configurationFile=resources/logback.xml
> 
>  
> 
> -runproperties: org.osgi.service.http.port=8080
> 
>  
> 
> -runrequires: \
> 
>bnd.identity;id='ResApp',\
> 
>bnd.identity;id='AuthApp'
> 
> ...
> 
>  
> 
> The OSGi is bound to 0.0.0.0:8080  and listening. In 
> addition I am seeing the following lines and exception.
> 
>  
> 
> 03/23/2019 09:41:58.645|INFO |main|Started 
> ServerConnector@77a074b4{HTTP/1.1,[http/1.1]}{0.0.0.0:8080 
> }
> 
> [INFO] Started Jetty 9.4.9.v20180320 at port(s) HTTP:8080 on context path / 
> [minThreads=8,maxThreads=200,acceptors=1,selectors=6]
> 
> 03/23/2019 09:41:58.799|INFO |main|created whiteboard from configuration: 
> {service.pid=org.apache.aries.jax.rs.whiteboard.default}
> 
> Mar 23, 2019 9:41:58 AM org.apache.cxf.endpoint.ServerImpl initDestination
> 
> INFO: Setting the server's publish address to be /
> 
> [DEBUG] [ServiceReference 34 from bundle 35 : 
> org.apache.aries.jax.rs.whiteboard:1.0.1 
> ref=[org.osgi.service.http.context.ServletContextHelper] 
> properties={objectClass=[org.osgi.service.http.context.ServletContextHelper], 
> original.service.bundleid=35, original.service.id 
> =33, osgi.http.whiteboard.context.name 
> =default, 
> osgi.http.whiteboard.context.path=, 
> osgi.http.whiteboard.target=(osgi.http.endpoint=*), 
> osgi.jaxrs.application.base=/, osgi.jaxrs.name 
> =.default, 
> osgi.jaxrs.whiteboard.target=(service.pid=org.apache.aries.jax.rs.whiteboard.default),
>  service.bundleid=35, service.id =34, 
> service.pid=org.apache.aries.jax.rs.whiteboard.default, 
> service.ranking=-2147483648, service.scope=singleton}] Ignoring invalid 
> ServletContextHelper service
> 
> [DEBUG] Adding bundle org.apache.felix.http.jetty:4.0.0 (48) : active
> 
> [DEBUG] Adding bundle org.apache.felix.http.jetty:4.0.6 (49) : starting
> 
> 03/23/2019 09:41:59.029|INFO |main|jetty-9.4.14.v20181114; built: 
> 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 
> 1.8.0_192-b12
> 
> 03/23/2019 09:41:59.031|INFO |main|DefaultSessionIdManager workerName=node0
> 
> 03/23/2019 09:41:59.032|INFO |main|No SessionScavenger set, using defaults
> 
> 03/23/2019 09:41:59.032|INFO |main|node0 Scavenging every 66ms
> 
> 03/23/2019 09:41:59.036|INFO |main|Started 
> o.e.j.s.ServletContextHandler@2a39aa2b{/,null,AVAILABLE}
> 
> 03/23/2019 09:41:59.036|INFO |main|Started @2828ms
> 
> 03/23/2019 09:41:59.036|INFO |main|node0 Scavenging every 66ms
> 
> [ERROR] Failed to start Connector: 
> ServerConnector@3b7eac14{HTTP/1.1,[http/1.1]}{0.0.0.0:8080 
> }
> 
> java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8080 
> 
>at 
> 

Re: [osgi-dev] Removing queued events in Push Steams

2019-03-16 Thread Tim Ward via osgi-dev
Hi - I’m afraid that I’m out on vacation this week, so I can’t put quite as 
much into my reply as I normally would. 

It sounds like you have a PushStream pipeline with multiple buffered stages. 
This isn’t a problem, and can be a good thing, but as you’re noticing it is 
leading to the following behaviour:

• The event gets published to the PushStream
• The event is queued into the initial buffer
• The event is dequeued from the buffer by a worker thread which starts pushing 
the event through the pipeline
• The event hits the window stage, which puts the event into the “window” and 
returns zero back pressure. 
• The worker thread returns and is immediately ready to process more events 
from the buffer

At some point later the window ends

• The PushStream worker thread is given the job of pushing the window through 
the rest of the pipeline.

Effectively the reason that your Queue Policy isn’t seeing any events is that 
there is no queue - the stages up to the window are able to easily keep pace 
with the event arrival rate, probably because the pipeline is short and they 
have nothing to do. 

In this situation you therefore need to look elsewhere for your control because 
it’s not a queueing problem. 

Probably the correct outcome would be to have a stream set up like so:

———

Stage 1: “split” the PushStream into two, one for high priority events (we’ll 
call this stream A), one for other events (we’ll call this Stream B). 

Stage 2 (A): wrap the event in a singleton list so that it’s type compatible 
with merging later

Stage 2 (B): window the events as you were doing before

Stage 3: Merge streams A and B back into each other. This will give you a 
stream of lists of events where high priority events bypass the window

Stage 4+: add a filter and/or mapping stage to remove low priority events that 
have been invalidated by the high priority event. 

———

I say that this will “probably” work for you because there may be other things 
that you want to do in the high priority event case. 
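
A rough sketch of that pipeline (DiagramEvent, isHighPriority and 
removeInvalidated are illustrative names, not from any API; psp and source are 
assumed to be your existing PushStreamProvider and event source):

PushStream<DiagramEvent>[] split = psp.createStream(source)
        .split(DiagramEvent::isHighPriority,   // Stream A: high priority
               e -> !e.isHighPriority());      // Stream B: everything else

// Stage 2 (A): wrap each high priority event in a singleton list
PushStream<List<DiagramEvent>> highPriority =
        split[0].map(Collections::singletonList);

// Stage 2 (B): window the low priority events as before
PushStream<List<DiagramEvent>> lowPriority =
        split[1].window(Duration.ofSeconds(5), ArrayList::new);

// Stage 3 + 4: merge the streams, then drop low priority events that a high
// priority event has invalidated (removeInvalidated is an illustrative helper)
highPriority.merge(lowPriority)
        .map(DiagramUpdater::removeInvalidated)
        .forEach(DiagramUpdater::process);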

Certainly worth a try :)

Tim

Sent from my iPhone

> On 15 Mar 2019, at 19:20, Willy Montes  
> wrote:
> 
> Hello,
> 
> We tried using the QueuePolicy to accomplish our priority/merge management 
> but it didn't work.
> 
> Our main goal is to process the Diagram update events in background to avoid 
> slowing down other processes. We set up a first pushStream where we publish 
> the low priority events and we handle them using the PushStream#window 
> consumer to process them in bulk.
> 
> Every time a high priority Diagram Event comes, we want to remove all low 
> priority events for the same Diagram from the first stream, merge them, and 
> process them right away. We wanted to do that with a second pushStream 
> dedicated for handling high priority events.
> 
> Why 2 pushStreams, because each one is configured with a different executor, 
> particularly the one handling low priority events will use daemons, low 
> priority threads, while the other won't.
> 
> The QueuePolicy didn't work, because every time the QueuePolicy#doOffer 
> method is called, the queue passed as parameter was empty. Using custom 
> queues, we were able to see that right after an event was queued, it was 
> polled and put in some other buffer. In that way, when the next event 
> arrived, the queue was again empty. This won't allow any logic in the policy 
> to modify the order/priority of the events.
> 
> We tried using custom PriorityQueues as buffers for the pushStream, but the 
> same issue happens.
>  
> Having additional parallel structures referencing the objects published in 
> the streams defeats the purpose for us.
> 
> Not sure if there is any caveat around the PushStream#window consumer. We are 
> still wondering if we are on the right track by using streams for this. 
> 
> Anyone has any insight here?
> 
> Thanks,
> 
> Willy Montes.
> 
>> On Wed, Feb 27, 2019 at 9:55 AM Alain Picard  wrote:
>> Willy,
>> 
>> Re-asked and got some answers this morning. IMHO the Pushstream QueuePolicy 
>> looks the most promising. It seems to fit right in. You would then need only 
>> 1 push stream and not 2 and just control the queuing "offer" 
>> 
>> Alain
>> 
>> pic...@castortech.com
>> www.castortech.com
>> 
>> Forwarded Conversation
>> Subject: Removing queued events in Push Steams
>> 
>> 
>> From: Alain Picard 
>> Date: Tue, Feb 5, 2019 at 1:28 PM
>> To: OSGi Developer Mail List 
>> 
>> 
>> Hi,
>> 
>> We have cases where we need to process events with different priorities, and 
>> such priority can change after the initial event having been queued, but not 
>> yet processed.
>> 
>> For example, when there is an event that some content has changed, we 
>> subscribe to this event and based on some conditions this might trigger the 
>> need to update some diagrams in our case. This is considered a "background 
>> priority" event, since we simply want to get it updated when we have some 
>> cycles so as not being stuck 

Re: [osgi-dev] objectClass=org.osgi.service.component.ComponentFactory missing

2019-03-08 Thread Tim Ward via osgi-dev
This looks like a missing feature in bnd. When bnd spots that you have created 
a factory component in your bundle then it should mark the service as 
registered.

As a workaround you should be able to add a Capability annotation to the 
component in question marking that a ComponentFactory service will be created 
for it.
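
Something like the following should work as the interim workaround (a sketch 
only — the factory id is illustrative, the class name is taken from your error 
message, and the capability uses the standard osgi.service namespace):

import org.osgi.annotation.bundle.Capability;
import org.osgi.service.component.annotations.Component;

// Advertise that SCR will register a ComponentFactory service for this
// factory component, so the resolver can satisfy the osgi.service requirement
@Capability(
        namespace = "osgi.service",
        attribute = "objectClass:List<String>=org.osgi.service.component.ComponentFactory",
        effective = "active")
@Component(factory = "qMainSite")
public class qMainSiteImpl {
    // ...
}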

Best Regards,

Tim

> On 8 Mar 2019, at 09:20, Paul F Fraser via osgi-dev  
> wrote:
> 
> Hi,
> 
> The following resolve fail is occurring in my enRoute application bndrun, but 
> I think this service should be provided by Felix SCR (2.1.14)
> 
> Resolution failed. Capabilities satisfying the following requirements could 
> not be found:
> [<<INITIAL>>]
>   ⇒ osgi.identity: (osgi.identity=net.qnenet.site.qMainSiteImpl)
>   ⇒ [net.qnenet.site.qMainSiteImpl version=0.0.1.201903080428]
>   ⇒ osgi.service: 
> (objectClass=org.osgi.service.component.ComponentFactory)
> 
> Should I be looking somewhere else?
> 
> Paul Fraser
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: [osgi-dev] Checking index project APP for rebuild

2019-03-06 Thread Tim Ward via osgi-dev


> On 6 Mar 2019, at 10:56, Paul F Fraser  wrote:
> 
> Hi Tim,
> 
> On 6/03/2019 8:41 pm, Tim Ward wrote:
>> Hi Paul,
>> 
>> Firstly - this sounds like something for the Bndtools list rather than the 
>> OSGi developer list, in that you're asking a question about the Bndtools M2E 
>> integration, rather than a question about OSGi enRoute.
> Not sure I understand this point, I am working with all enRoute archetypes in 
> what I presume is a pure enRoute approach.

The enRoute archetypes are designed to be IDE agnostic, and so can be used in 
lots of different ways. If you were seeing the same thing happening across 
Eclipse, IntelliJ, and the command line then this definitely indicates an issue 
somewhere in the templates or project structure. In your email, however, you 
give your environment as "Eclipse 2018-12, bndtools 4.3.0.DEV” and your stated 
problem is "it is spending a long time in "Checking index project APP for 
rebuild””. These are clear indications that you are using Bndtools to develop 
your application (which is fine) and that therefore it is possible that issues 
you encounter may not be because of enRoute, but because of Eclipse or Bndtools.

In this case "Checking index project APP for rebuild” is an action being taken 
by Bndtools as part of the Eclipse Incremental build (that string occurs 
nowhere in OSGi enRoute). There is nothing that OSGi enRoute can do to stop 
this from happening, so it can only be fixed on the Bndtools side. It is also 
the case that other projects not using enRoute (but still using Maven) could 
trigger the same issue.

>> That said, “a long time” isn’t hugely clear - are we talking seconds or 
>> minutes? Seconds is probably normal, minutes probably isn’t. If it is 
>> minutes then is this every time, or is it only when you force updates? If 
>> you’re forcing a dependency update this will make Maven look for your 
>> dependencies in remote repos again, which will definitely add time.
> 
> Just running a "Maven Update Projects" on all modules, the progress messages 
> took over 1Hr to finish.

This is definitely too long unless you’re on some sort of dial-up access. The 
Bndtools team is the right place to ask for help.

> 
> Running Maven Install on all modules gives a 100% BUILD SUCCESS but I'm 
> getting a "Failed to execute goal on project " and it mentions in the message 
> that it could not resolve some of the artifacts.
> 
> But even before I had this resolve message it was still taking minutes.
> 
> In a previous discussion on this list you stated that you have never 
> experienced this type of problem, so all I can do is recheck all of my maven 
> work and try to isolate the problem.
> 
> Paul
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Checking index project APP for rebuild

2019-03-06 Thread Tim Ward via osgi-dev
Hi Paul,

Firstly - this sounds like something for the Bndtools list rather than the OSGi 
developer list, in that you're asking a question about the Bndtools M2E 
integration, rather than a question about OSGi enRoute.

That said, “a long time” isn’t hugely clear - are we talking seconds or 
minutes? Seconds is probably normal, minutes probably isn’t. If it is minutes 
then is this every time, or is it only when you force updates? If you’re 
forcing a dependency update this will make Maven look for your dependencies in 
remote repos again, which will definitely add time.

Another thing to be aware of is that a lot of build tasks in Eclipse don’t add 
a message, so if the successor/children to "Checking index project APP for 
rebuild” don’t add a message it may look like the check is taking a long time, 
even though what it’s actually doing is something else.

If you’re after more detailed analysis, or actually looking for a fix then I 
can recommend raising this with the Bndtools list.

Tim

> On 6 Mar 2019, at 05:11, Paul F Fraser via osgi-dev  
> wrote:
> 
> With about 30 modules in an enRoute project it is spending a long time in 
> "Checking index project APP for rebuild"
> 
> APP is the name of the project application.
> 
> Is this normal or have I missed a setting somewhere?
> 
> Eclipse 2018-12, bndtools 4.3.0.DEV
> 
> Paul Fraser
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Removing queued events in Push Steams

2019-02-27 Thread Tim Ward via osgi-dev
Another option would be for you to take control of the queuing using a 
QueuePolicy. That would enable you to insert work at the head of the buffer (or 
at least higher up) if it was more important. You could also remove some of the 
entries if they are invalidated by the higher priority insert.
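
As a rough sketch (DiagramEvent, isHighPriority and sameDiagram are 
illustrative names, not from the spec), a policy along these lines would drop 
superseded low priority events as the high priority one is queued:

QueuePolicy<DiagramEvent, BlockingQueue<PushEvent<? extends DiagramEvent>>> policy =
        (queue, event) -> {
            if (!event.isTerminal() && event.getData().isHighPriority()) {
                // remove queued low priority events that this event supersedes
                queue.removeIf(queued -> !queued.isTerminal()
                        && event.getData().sameDiagram(queued.getData()));
            }
            if (!queue.offer(event)) {
                throw new IllegalStateException("Buffer full");
            }
        };

PushStream<DiagramEvent> stream = psp.buildStream(source)
        .withBuffer(new LinkedBlockingQueue<>(64))
        .withQueuePolicy(policy)
        .build();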

I await the results of your research with interest :)

Best Regards,

Tim

> On 27 Feb 2019, at 12:45, Peter Kriens via osgi-dev  
> wrote:
> 
> I probably would use a (static?) priority set with a weak reference to the 
> event object. (Or some key that uniquely identifies that object). The 
> processor can then consult this set to see if the event has higher priority. 
> A weak reference is needed to make sure that no events remain in this 
> priority set without locking.  
> 
> PK
> 
>> On 27 Feb 2019, at 12:49, Alain Picard via osgi-dev > > wrote:
>> 
>> Anyone has any insight here?
>> Alain
>> 
>> On Tue, Feb 5, 2019 at 1:28 PM Alain Picard > > wrote:
>> Hi,
>> 
>> We have cases where we need to process events with different priorities, and 
>> such priority can change after the initial event having been queued, but not 
>> yet processed.
>> 
>> For example, when there is an event that some content has changed, we 
>> subscribe to this event and based on some conditions this might trigger the 
>> need to update some diagrams in our case. This is considered a "background 
>> priority" event, since we simply want to get it updated when we have some 
>> cycles so as not being stuck doing it whenever someone requests such diagram 
>> to view/edit it.
>> 
>> We also have events when someone for example requests to open such a 
>> diagram, where we need to determine if it is up to date, and if it needs to 
>> be updated, to get this pushed and processed as quickly as possible, as the 
>> user is waiting.
>> 
>> So far we have setup 2 different push streams to support this. 
>> 
>> The issue here is that when this high-priority event comes in, we need 
>> to make sure that we can cancel any similar queued events from the low 
>> priority stream, and possibly let it proceed if it is already being 
>> processed.
>> 
>> What is the best approach here ? Are we on the right track to start with?
>> 
>> Thanks
>> Alain
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] SCR API

2019-02-22 Thread Tim Ward via osgi-dev
Hi Thomas,

I’m not sure that there is a naming issue here, but possibly a different 
misunderstanding.

> From my understanding there are two kinds of "services”:

This is not really accurate. There is only one kind of service in OSGi, and 
it’s an object which has been registered with the service registry. It is 
always registered by a call to context.registerService(...). It doesn’t matter 
whether this action is taken directly by your bundle, or on your bundle’s 
behalf by an extender bundle. The fact that all services are the same is 
important because this is how bundles interoperate. Your bundles can use any 
mechanisms that they like internally and they can still interact with other 
bundles that may be using the same, or different, internal details.
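
For example, a plain programmatic registration looks like this (MyService and 
MyServiceImpl are illustrative names, and context is the bundle's 
BundleContext):

Dictionary<String, Object> props = new Hashtable<>();
props.put("my.property", "example");

// The registered object is an ordinary POJO; the registry neither knows nor
// cares whether DS, another extender, or your own activator registered it
ServiceRegistration<MyService> registration =
        context.registerService(MyService.class, new MyServiceImpl(), props);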

> 2) "Fancy services" aka DS managed by a SCR. I specify those declaratively 
> via annotations, they have a lifecycle and can have references to other 
> services/components via annotations like @Activate/@Deactivate (Lifecycle) 
> and @Reference. Those I will call components for the rest of this mail.


There are a large number of different component models around in Java. These 
provide lifecycle management and injection for your objects. In the case of 
OSGi aware component models they also provide support for registering the 
object instance as a service (using context.registerService()).

For Declarative Services the programming and configuration model is declarative 
- you describe how you want your component’s lifecycle to look using XML (or 
annotations to generate the XML). This XML is packaged into your bundle and 
used by a Service Component Runtime (an implementation of the DS specification) 
at runtime to find and manage your component.

> Components on the other side have references and lifecycle methods, but in 
> order to instantiate them programmatically I have to force a developer to 
> annotate the class with @Component(scope=ServiceScope.PROTOTYPE) and then use 
> ServiceObjects#getService() to instantiate/register it.
> 
> This procedure can be error-prone, e.g., when I assume that scope is always 
> PROTOTYPE but the developer forgot to set it to this value. This problem came 
> up during my discussion with Vaadin for a Flow-OSGi integration.

This is really a decision that needs to be made by the component developer. It 
doesn’t always make sense for a component to be PROTOTYPE scoped, and they need 
to be aware that there may be multiple instances created

> In this context it would be great if there were a possibility to 
> programmatically create components (not services) where I can tell SCR what 
> fields/methods have to be treated as @Reference or lifecycle methods and let 
> SCR do the heavy lifting.

This is not what SCR does. SCR is a declarative component model, not a 
programmatic one. There is no “DS builder API” for creating components. If you 
want a builder API for creating a component then you need to use a component 
framework that works in this way. As Ray pointed out in a previous mail chain 
there is the Apache Aries Component DSL. You could also use Apache Felix 
Dependency Manager.


> Would such an API make sense? Or would it even be possible?

> I think in general this would be very useful in order to create OSGi 
> integrations for third-party libs that need to interact with DS in OSGi. 

This isn’t really a question of third party libraries interacting with DS, it’s 
a request for a radically different component model with some DS-like 
capabilities. There are already framework implementations in the world that 
provide what you’re looking for, just not using DS.

Best Regards,

Tim


> On 21 Feb 2019, at 18:07, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi BJ,
> 
> sorry for being imprecise. I sometimes get confused regarding the proper 
> naming in OSGi. I will try to clear things up by defining what my 
> understanding is about services:
> 
> From my understanding there are two kinds of "services":
> 1) "Old school Services": I usually register those programmatically via 
> context.registerService(...). Those services are just POJOs with a little bit 
> of metadata, i.e., properties, and a well-defined interface. Those I will call 
> services for the rest of this mail.
> 2) "Fancy services" aka DS managed by a SCR. I specify those declaratively 
> via annotations, they have a lifecycle and can have references to other 
> services/components via annotations like @Activate/@Deactivate (Lifecycle) 
> and @Reference. Those I will call components for the rest of this mail.
> 
> The difference between both (as far as I understand it) is that services can 
> be instantiated and registered programmatically, but are not managed by SCR 
> and therefore have no references and lifecycle methods. 
> 
> Components on the other side have references and lifecycle methods, but in 
> order to instantiate them programmatically I have to force a developer to 
> annotate the class with 

Re: [osgi-dev] ComponentServiceObjects vs ServiceObjects

2019-02-19 Thread Tim Ward via osgi-dev
> As I get it ComponentServiceObjects are just for use within Components and 
> are obtained via @Reference?

Yes - you should *always* use ComponentServiceObjects if you want to get 
on-demand instances of a prototype scope service inside your DS component.

> But why are not ServiceObjects used for this?

ServiceObjects is a low-level API. You are responsible for ensuring that 
*every* call to get is balanced by a call to unget. This gives your component a 
big tidy-up job in its deactivate method (and other places as necessary). If 
you use ComponentServiceObjects then DS is able to track all of the instances 
that you’ve created and make sure that they all get released automatically if 
your component is deactivated or the reference is rebound (for example with a 
greedy dynamic mandatory reference).

ComponentServiceObjects is therefore much safer to use than ServiceObjects as 
you don’t need to be worried about the mess that could be ongoing when your 
component stops. Note that if your component can create a potentially unlimited 
number of service instances from the ComponentServiceObjects you must still 
make sure to release instances that you create. If you fail to do this you will 
run out of memory.
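
A sketch of the pattern (Worker is an illustrative prototype scoped service):

@Component
public class WorkerClient {

    @Reference
    private ComponentServiceObjects<Worker> workers;

    public void doWork() {
        Worker worker = workers.getService();   // new instance per call
        try {
            worker.run();
        } finally {
            workers.ungetService(worker);       // release what you created
        }
    }
}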

I hope this helps.

Tim

> On 19 Feb 2019, at 13:53, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> can someone explain to me the difference between ComponentServiceObjects and 
> ServiceObjects and when to use which?
> 
> As I get it ComponentServiceObjects are just for use within Components and 
> are obtained via @Reference? But why are not ServiceObjects used for this?
> 
> Kind regards,
> Thomas
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Clarification on reference behavior regarding field injection

2019-02-18 Thread Tim Ward via osgi-dev
Hi, 

The build works fine for me in GitPod. You probably have corrupt things in your 
local repo, either delete them or force an update of snapshots (-U on the 
command line). Note that you also get a nice error message for the case that 
you claim didn’t work:

[ERROR] Failed to execute goal 
biz.aQute.bnd:bnd-maven-plugin:4.2.0-SNAPSHOT:bnd-process (default) on project 
impl: In component io.jatoms.osgi.refs.ComponentImpl, field tests policy is 
'dynamic' and field is not volatile. -> [Help 1]

Best Regards,

Tim

> On 18 Feb 2019, at 13:45, Thomas Driessen  
> wrote:
> 
> Hi Tim,
> 
> I added this plugin repository and changed the bnd.version property to 
> 4.2.0-SNAPSHOT, but still get errors during the build with error message "No 
> plugin found for prefix 'bnd-indexer' ..." 
> Maven complains about several metadata being invalid, e.g.:
> 
> The metadata 
> /home/gitpod/.m2/repository/biz/aQute/bnd/bnd-maven-plugin/4.2.0-SNAPSHOT/maven-metadata-Bnd
>  Snapshots.xml is invalid
> The metadata 
> /home/gitpod/.m2/repository/biz/aQute/bnd/bnd-maven-plugin/4.2.0-SNAPSHOT/maven-metadata-Bnd
>  Snapshots.xml is invalid
> The POM for biz.aQute.bnd:bnd-maven-plugin:jar:4.2.0-20190215.224729-106 is 
> invalid
> ...
> 
> You can have a look at the workspace and try it for yourself if you want: 
> https://gitpod.io#snapshot/9f4e97ad-0dd3-4360-8702-de0fedf9e51f 
>  
> 
> Just type "resolve app" in the command line.
> 
> Kind regards,
> Thomas
> 
> 
> 
> -- Original Message --
> From: "Tim Ward" <tim.w...@paremus.com>
> To: "Thomas Driessen"
> Cc: "Neil Bartlett" <njbartl...@gmail.com>; 
> "OSGi Developer Mail List" <osgi-dev@mail.osgi.org>
> Sent: 18.02.2019 10:53:23
> Subject: Re: [osgi-dev] Clarification on reference behavior regarding field 
> injection
> 
>>> When I change the version number of bnd to 4.2.0 in the standard enRoute 
>>> setup, then some of the bnd-plugins can not be found by maven.
>> 
>> That’s because bnd 4.2.0 isn’t released yet - you need to add a plugin 
>> repository containing 4.2.0-SNAPSHOT (which is also the version you should 
>> be using). If you look at the readme for the bnd repo on GitHub you can see 
>> the repo url is https://bndtools.jfrog.io/bndtools/libs-snapshot/ 
>>  
>> 
>> Tim
>> 
>>> On 18 Feb 2019, at 10:51, Thomas Driessen >> > wrote:
>>> 
>>> Hi Neil,
>>> 
>>> thanks for this small history lesson on OSGi. You live, you learn. The why 
>>> is in most cases so much more interesting than the what  :)
>>> 
>>> I will open an issue on Github then for bnd. 
>>> 
>>> Kind regards,
>>> Thomas
>>> 
>>> -- Original Message --
>>> From: "Neil Bartlett" <njbartl...@gmail.com>
>>> To: "Thomas Driessen"; "OSGi Developer Mail List" 
>>> <osgi-dev@mail.osgi.org>
>>> Sent: 18.02.2019 10:44:14
>>> Subject: Re: [osgi-dev] Clarification on reference behavior regarding field 
>>> injection
>>> injection
>>> 
 
 
 On Mon, Feb 18, 2019 at 9:39 AM Thomas Driessen via osgi-dev 
 mailto:osgi-dev@mail.osgi.org>> wrote:
 Hi Tim,
 
 as always thanks for your super detailed answer! 
 
 Just two more questions on this topic:
 
 1) Out of curiosity: do you know why the decision was made to make the 
 default for @Reference List static, reluctant? For 0/1..1 this 
 makes sense to me, but for me an expected default behavior for 0/1..n 
 references would have been dynamic, greedy, so that I end up with some 
 services isntead of probably none.
 
 The default is reluctant because "greedy" did not exist until DS 1.2. If 
 the default had been made greedy at that time, components coded before DS 
 1.2 would have seen a substantial change in behaviour that could be 
 considered non-backwards-compatible.
 
 Static has *always* been the default over dynamic, because dynamic forces 
 the developer to understand thread safety and to code the component much 
 more cautiously. Static is simple and safe.
  
 
 2) The observed Exception for optional/dynamic/reluctant: Is it intended? 
 I tried to switch to bnd 4.2.0 to see if this exxception occurs there too, 
 but was unable to do so. When I change the version number of bnd to 4.2.0 
 in the standard enRoute setup, then some of the bnd-plugins can not be 
 found by maven.
 
 Probably not expected, this smells like a bug.
  
 
 Kind regards,
 Thomas
 
 -- Originalnachricht --
 Von: "Tim Ward" mailto:tim.w...@paremus.com>>
 An: "Thomas Driessen" >>> >; "OSGi Developer Mail List" 
 mailto:osgi-dev@mail.osgi.org>>
 Cc: "Raymond 

Re: [osgi-dev] Clarification on reference behavior regarding field injection

2019-02-18 Thread Tim Ward via osgi-dev
> When I change the version number of bnd to 4.2.0 in the standard enRoute 
> setup, then some of the bnd-plugins can not be found by maven.

That’s because bnd 4.2.0 isn’t released yet - you need to add a plugin 
repository containing 4.2.0-SNAPSHOT (which is also the version you should be 
using). If you look at the readme for the bnd repo on GitHub you can see the 
repo url is https://bndtools.jfrog.io/bndtools/libs-snapshot/ 
 

Tim

> On 18 Feb 2019, at 10:51, Thomas Driessen  
> wrote:
> 
> Hi Neil,
> 
> thanks for this small history lesson on OSGi. You live, you learn. The why is 
> in most cases so much more interesting than the what  :)
> 
> I will open an issue on Github then for bnd. 
> 
> Kind regards,
> Thomas
> 
> -- Original Message --
> From: "Neil Bartlett" <njbartl...@gmail.com>
> To: "Thomas Driessen"; "OSGi Developer Mail List" 
> <osgi-dev@mail.osgi.org>
> Cc: "Tim Ward" <tim.w...@paremus.com>
> Sent: 18.02.2019 10:44:14
> Subject: Re: [osgi-dev] Clarification on reference behavior regarding field 
> injection
> 
>> 
>> 
>> On Mon, Feb 18, 2019 at 9:39 AM Thomas Driessen via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> Hi Tim,
>> 
>> as always thanks for your super detailed answer! 
>> 
>> Just two more questions on this topic:
>> 
>> 1) Out of curiosity: do you know why the decision was made to make the 
>> default for @Reference List<ITest> static, reluctant? For 0/1..1 this makes 
>> sense to me, but for me an expected default behavior for 0/1..n references 
>> would have been dynamic, greedy, so that I end up with some services instead 
>> of probably none.
>> 
>> The default is reluctant because "greedy" did not exist until DS 1.2. If the 
>> default had been made greedy at that time, components coded before DS 1.2 
>> would have seen a substantial change in behaviour that could be considered 
>> non-backwards-compatible.
>> 
>> Static has *always* been the default over dynamic, because dynamic forces 
>> the developer to understand thread safety and to code the component much 
>> more cautiously. Static is simple and safe.
>>  
>> 
>> 2) The observed Exception for optional/dynamic/reluctant: Is it intended? I 
>> tried to switch to bnd 4.2.0 to see if this exception occurs there too, but 
>> was unable to do so. When I change the version number of bnd to 4.2.0 in the 
>> standard enRoute setup, then some of the bnd-plugins can not be found by 
>> maven.
>> 
>> Probably not expected, this smells like a bug.
>>  
>> 
>> Kind regards,
>> Thomas
>> 
>> -- Original Message --
>> From: "Tim Ward" <tim.w...@paremus.com>
>> To: "Thomas Driessen"; "OSGi Developer Mail List" 
>> <osgi-dev@mail.osgi.org>
>> Cc: "Raymond Auge"
>> Sent: 18.02.2019 10:15:54
>> Subject: Re: [osgi-dev] Clarification on reference behavior regarding field 
>> injection
>> 
>>> Hi Thomas,
>>> 
>>> Just to clarify, the behaviour that you see with static reluctant services 
>>> will always look “odd” for cardinalities other than mandatory, and what 
>>> you’ve recorded is 100% correct behaviour.
>>> 
>>> Static references force the component to be re-created if the value of the 
>>> reference changes
>>> Reluctant references avoid rebinding the service unless it is required
>>> 
>>> Therefore:
>>> 
>>> An optional reference bound to nothing will never bind to anything again 
>>> (unless the component is re-created for another reason) because having zero 
>>> references is valid and you are reluctant to re-create the component 
>>> instance
>>> An optional reference bound to a service will not change until that service 
>>> is unregistered (ignoring all new services), at which point it will either:
>>> Pick up the highest ranked of any matching registered services
>>> Bind to nothing if no matching services are available
>>> A multiple reference bound to nothing will behave exactly like an optional 
>>> component
>>> A multiple reference bound to one or more services will not change until 
>>> any of the bound services are unregistered (ignoring all new services), at 
>>> which point it will either:
>>> Pick up all the available registered services
>>> Bind to nothing if no matching services are available
>>> An “at least one" reference bound to one or more services will not change 
>>> until any of the bound services are unregistered (ignoring all new 
>>> services), at which point it will either:
>>> Pick up all the available registered services
>>> Make the component unsatisfied
>>> 
>>> 
>>> The end result of this is that references that can accept zero will tend to 
>>> zero over time, and then tend to stay with zero bound references. At least 
>>> one references will tend to a small number of “stable” services with new 
>>> services ignored.
>>> 
>>> In general references of these 

Re: [osgi-dev] Clarification on reference behavior regarding field injection

2019-02-18 Thread Tim Ward via osgi-dev
Hi Thomas,

Just to clarify, the behaviour that you see with static reluctant services will 
always look “odd” for cardinalities other than mandatory, and what you’ve 
recorded is 100% correct behaviour.

• Static references force the component to be re-created if the value of the 
  reference changes
• Reluctant references avoid rebinding the service unless it is required

Therefore:

• An optional reference bound to nothing will never bind to anything again 
  (unless the component is re-created for another reason) because having zero 
  references is valid and you are reluctant to re-create the component instance
• An optional reference bound to a service will not change until that service 
  is unregistered (ignoring all new services), at which point it will either:
    • Pick up the highest ranked of any matching registered services
    • Bind to nothing if no matching services are available
• A multiple reference bound to nothing will behave exactly like an optional 
  reference
• A multiple reference bound to one or more services will not change until any 
  of the bound services are unregistered (ignoring all new services), at which 
  point it will either:
    • Pick up all the available registered services
    • Bind to nothing if no matching services are available
• An "at least one" reference bound to one or more services will not change 
  until any of the bound services are unregistered (ignoring all new services), 
  at which point it will either:
    • Pick up all the available registered services
    • Make the component unsatisfied


The end result of this is that references that can accept zero will tend to 
zero over time, and then tend to stay with zero bound references. At least one 
references will tend to a small number of “stable” services with new services 
ignored.

In general references of these types should be dynamic or greedy (or both).
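
For example, using the types from your table (and noting that a dynamic field 
reference must be declared volatile):

@Reference(
        cardinality = ReferenceCardinality.MULTIPLE,
        policy = ReferencePolicy.DYNAMIC,
        policyOption = ReferencePolicyOption.GREEDY)
private volatile List<ITest> myTests; // DS replaces the list as services come and go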

Best Regards,

Tim

> On 18 Feb 2019, at 09:38, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Oh Sorry :/
> 
> Those combinations with cardinalities optional/mandatory have been assigned 
> to ITest, those with multiple/at_least_one have been assigned to List<ITest>.
> 
> I didn't think it makes sense to assign them vice versa, e.g., ITest with 
> multiple or List<ITest> with mandatory? Or am I wrong? If so, in what case 
> would you use such a combination?
> 
> Kind regards,
> Thomas
> 
> -- Original Message --
> From: "Raymond Auge"
> To: "Thomas Driessen"; "OSGi Developer Mail List" 
> <osgi-dev@mail.osgi.org>
> Sent: 16.02.2019 17:42:19
> Subject: Re: [osgi-dev] Clarification on reference behavior regarding field 
> injection
> 
>> Your chart doesn't appear to list _which_ field (Reference) was associated 
>> with any given line (collection vs. scalar). It makes a difference.
>> 
>> - Ray
>> 
>> On Sat, Feb 16, 2019 at 9:15 AM Thomas Driessen via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> Hi,
>> 
>> I'm trying to get an overview over the effects of different combinations of 
>> cardinality, policy and policyOption within a @Reference annotation for a 
>> field.
>> 
>> My Example looks like this:
>> 
>> @Component 
>> public class MyComp {
>>   @Reference(...)
>>   ITest myTest;
>> 
>>   @Reference(...)
>>   List<ITest> myTests;
>> }
>> 
>> and the observed behavior for this setup with different combinations of the 
>> above named properties is:
>> 
>> 
>> I'm especially interested in the yellow marked cases: Is this an intended 
>> behavior?
>> 
>> Kind regards,
>> Thomas
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
>> 
>> -- 
>> Raymond Augé  (@rotty3000)
>> Senior Software Architect Liferay, Inc.  (@Liferay)
>> Board Member & EEG Co-Chair, OSGi Alliance  (@OSGiAlliance)
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Aliasing OSGi DS annotations

2019-02-14 Thread Tim Ward via osgi-dev
Hi,

There isn’t any sort of Aliasing for DS annotations.

Best Regards,

Tim

> On 14 Feb 2019, at 09:10, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> is there a mechanism in OSGi that allows to alias specific (DS) annotations? 
> (Seems to be possible in Spring according to Vaadin) 
> 
> Currently Vaadin tries to implement an integration for OSGi and there is the 
> case where Users will need to annotate the class with OSGi's @Component 
> annotation that declares the service to be Vaadin's Component class. So one 
> of them would have to be written with the fqcn.
> 
> It's just an inconvenience, but if there's a way to work around that this 
> would be great :)
> 
> Kind regards,
> Thomas
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Tim Ward via osgi-dev
So having dug further, the following is sufficient:

Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
        .withExecutor(Executors.newFixedThreadPool(2))
        .build()
        .asyncMap(2, 0, new StreamDebouncer(
                new PromiseFactory(PromiseFactory.inlineExecutor()), 
                10, Duration.ofSeconds(1)))
        .filter(c -> !c.isEmpty())
        .collect(Collectors.toList());

Note the “withExecutor” call. The default executor has only one thread (because 
the default parallelism is 1), this is used to send events down the pipe. In 
this case the close event is sent and reaches the asyncMap operation. The close 
event then waits for the promises from the ongoing asyncMaps to complete.

In a separate thread the final debounce window is closed because it times out. 
This triggers the gathering of the batch of data and tries to send it on to the 
rest of the stream using the sending thread which is blocked trying to send the 
close event! This deadlocks because the data cannot be sent until the sender 
thread is available, and the sender thread is waiting for the data event to 
have been sent before continuing.

The simple fix is to provide a second thread in the executor. This does not 
result in events arriving out of order because the parallelism of the stream is 
still one. This deadlock is also only triggered by terminal events, which are 
the only kind that can block a sender thread, so no data is harmed. Also, any 
number of sender threads greater than one resolves the problem regardless of 
the applied parallelism.

I hope this helps,

Tim

> On 6 Feb 2019, at 14:57, Tim Ward  wrote:
> 
> Hi Alain,
> 
> Having applied some step by step debugging the issue I can see is actually a 
> deadlock in the async worker pool - we call asyncMap with a parallelism of 2, 
> but we use the default executor from the push stream, which has only one 
> thread.
> 
> If the timing is wrong then we end up with the final debounce window timing 
> out and trying to complete while the one pushing thread is blocking waiting 
> for the timeout to be sent.
> 
> I’ll dig a little further, but the probable answer is just that one of the 
> thread pools needs an extra thread (or possibly that the inline executor 
> needs to be changed).
> 
> Best Regards,
> 
> Tim
> 
>> On 6 Feb 2019, at 14:36, Alain Picard > > wrote:
>> 
>> Jurgen,
>> 
>> Thanks for the clarifications. I also did about the same change to match our 
>> standard stream implementation, and didn't get the issue, but I was too much 
>> in the dark to make any meaningful conclusions.
>> 
>> I was looking to modify the test to verify various scenarios. So I guess 
>> I'll make those changes and proceed from there.
>> 
>> Thanks
>> Alain
>> 
>> 
>> On Wed, Feb 6, 2019 at 9:21 AM Jürgen Albert > > wrote:
>> Creating the PushStream with an exclusive Executor fixes the Problem.
>> 
>> Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
>>         .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
>>         .withExecutor(Executors.newCachedThreadPool()).build()
>>         .asyncMap(5, 0, new StreamDebouncer(
>>                 new PromiseFactory(PromiseFactory.inlineExecutor()), 
>>                 10, Duration.ofSeconds(1)))
>>         .filter(c -> !c.isEmpty())
>>         .collect(Collectors.toList());
>> 
>> On 06/02/2019 at 15:14, Jürgen Albert wrote:
>>> Hi Alain,
>>> 
>>> the issue has a couple of reasons:
>>> 
>>> The Pushstream and eventsource have by default 2 queues with a size of 32 
>>> each. The default pushback policy is linear. This means, that when 10 
>>> events are in the buffer a pushback of 10 ms will be given to the 
>>> eventsource. This means, that the eventsource will wait this time, before 
>>> it sends the next event downstream. This default behaviour can cause long 
>>> processing times, especially when a lot of events are written in a for 
>>> loop. This means that the queues fill up very quick even if the actual 
>>> processing time of the code of yours is close to 0. Use e.g. the 
>>> ON_FULL_FIXED policy to get rid of this problem.
>>> 
>>> As far as I understand the bouncer, it waits a second, before it returns a 
>>> list, if no new events are coming. Thus a sleep time of less then a second 
>>> or even 1,5 (together with the pushback thing I described before) will keep 
>>> the stream busy longer then your sleep time for a batch. Thus all the 
>>> batches return when hitting the max size, except the last one. This waits 
>>> and for some threading reasons, the last deferred is blocked from 
>>> resolving, which in turn blocks eventsource close. If you add a small wait 
>>> before the close is called everything is fine. 

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Tim Ward via osgi-dev
Hi Alain,

Having applied some step by step debugging the issue I can see is actually a 
deadlock in the async worker pool - we call asyncMap with a parallelism of 2, 
but we use the default executor from the push stream, which has only one thread.

If the timing is wrong then we end up with the final debounce window timing out 
and trying to complete while the one pushing thread is blocking waiting for the 
timeout to be sent.

I’ll dig a little further, but the probable answer is just that one of the 
thread pools needs an extra thread (or possibly that the inline executor needs 
to be changed).

Best Regards,

Tim

> On 6 Feb 2019, at 14:36, Alain Picard  wrote:
> 
> Jurgen,
> 
> Thanks for the clarifications. I also did about the same change to match our 
> standard stream implementation, and didn't get the issue, but I was too much 
> in the dark to make any meaningful conclusions.
> 
> I was looking to modify the test to verify various scenarios. So I guess I'll 
> make those changes and proceed from there.
> 
> Thanks
> Alain
> 
> 
> On Wed, Feb 6, 2019 at 9:21 AM Jürgen Albert  > wrote:
> Creating the PushStream with an exclusive Executor fixes the Problem.
> 
> Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
>         .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
>         .withExecutor(Executors.newCachedThreadPool()).build()
>         .asyncMap(5, 0, new StreamDebouncer(
>                 new PromiseFactory(PromiseFactory.inlineExecutor()), 
>                 10, Duration.ofSeconds(1)))
>         .filter(c -> !c.isEmpty())
>         .collect(Collectors.toList());
> 
> On 06/02/2019 at 15:14, Jürgen Albert wrote:
>> Hi Alain,
>> 
>> the issue has a couple of reasons:
>> 
>> The Pushstream and eventsource have by default 2 queues with a size of 32 
>> each. The default pushback policy is linear. This means, that when 10 events 
>> are in the buffer a pushback of 10 ms will be given to the eventsource. This 
>> means, that the eventsource will wait this time, before it sends the next 
>> event downstream. This default behaviour can cause long processing times, 
>> especially when a lot of events are written in a for loop. This means that 
>> the queues fill up very quick even if the actual processing time of the code 
>> of yours is close to 0. Use e.g. the ON_FULL_FIXED policy to get rid of 
>> this problem.
>> 
>> As far as I understand the debouncer, it waits a second before it returns a 
>> list, if no new events are coming. Thus a sleep time of less than a second 
>> or even 1.5 seconds (together with the pushback behaviour I described before) will keep 
>> the stream busy longer than your sleep time for a batch. Thus all the 
>> batches return when hitting the max size, except the last one. This one waits, 
>> and for some threading reasons the last deferred is blocked from resolving, 
>> which in turn blocks the event source close. If you add a small wait before the 
>> close is called everything is fine. 
>> 
>> The blocking issue is interesting nonetheless, but my experience is that 
>> these kinds of tests are often harsher than reality.
>> 
>> Regards, 
>> 
>> Jürgen.
>> 
>> Am 05/02/2019 um 23:58 schrieb Alain Picard:
>>> Tim,
>>> 
>>> Finally got around to this debouncer, and I tested changing the sleep 
>>> time. When I set it to something like 800 to 1500, it never completes after showing 
>>> "Closing the Generator". At 500, I get a "Queue full" error, which I can understand. 
>>> So why the hang?
>>> 
>>> Alain
>>> 
>>> 
>>> 
>>> On Mon, Jan 7, 2019 at 8:11 AM Tim Ward >> > wrote:
>>> This use case is effectively a “debouncing” behaviour, which is possible to 
>>> implement with a little thought.
>>> 
>>> There are a couple of ways to attempt it. This one uses the asyncMap 
>>> operation to asynchronously gather the events until it either times out the 
>>> promise or it hits the maximum stream size. Note that you have to filter 
>>> out the “empty” lists that are used to resolve the promises which are being 
>>> aggregated into the window. The result of this is a window which starts on 
>>> the first event arrival and then buffers the events for a while. The next 
>>> window isn’t started until the next event
>>> 
>>> 
>>> Best Regards,
>>> 
>>> Tim
>>> 
>>> 
>>> @Test
>>> public void testWindow2() throws InvocationTargetException, 
>>> InterruptedException {
>>> 
>>> PushStreamProvider psp = new PushStreamProvider();
>>> 
>>> SimplePushEventSource<Integer> sps = 
>>> psp.createSimpleEventSource(Integer.class);
>>> 
>>> Promise<List<Collection<Integer>>> result = 
>>> psp.createStream(sps)
>>> .asyncMap(2, 0, new StreamDebouncer<Integer>(
>>> new 
>>> PromiseFactory(PromiseFactory.inlineExecutor()), 
>>> 10, Duration.ofSeconds(1)))
>>> .filter(c -> 

Re: [osgi-dev] Remote service (thread) context properties?

2019-02-06 Thread Tim Ward via osgi-dev
Hi Bernd,

What you’re asking for isn’t a required part of the RSA standard, which means 
that providers don’t have to offer it. There is, however, room for it to exist 
within the standard.

OSGi Remote Services (and Remote Service Admin) define the concept of “intents” 
which are additional features or qualities of service that a distribution 
provider can offer. An example of an existing intent is the “osgi.async” 
intent. This intent is used to indicate that the distribution provider can 
handle asynchronous return types such as OSGi Promises and Java futures. OSGi 
services that need to be remoted can then require that the distribution 
provider offer the intent by advertising the “service.exported.intents” 
property in addition to the service.exported.interfaces property.

What you’re asking for is therefore an intent which provides security context 
along with the call. I’m not aware of any distribution provider that does this, 
but it would be possible for them to add it. If they did they should add an 
advertised intent indicating the support, and your service should then require 
the intent.
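
For illustration only, a service requiring such an intent might be declared
along these lines (untested sketch; the "security.context" intent name and the
service interface below are invented here and would have to match whatever a
distribution provider actually advertises):

package com.example.remote;

import org.osgi.service.component.annotations.Component;

// Sketch: SecuredOrderService is a placeholder for your own remote service API.
// The "security.context" intent name is hypothetical.
@Component(property = {
        "service.exported.interfaces=*",
        "service.exported.intents=osgi.async",
        "service.exported.intents=security.context"
})
public class SecuredOrderServiceImpl implements SecuredOrderService {
    // remote methods, ideally with asynchronous return types such as Promise
}

The distribution provider would then refuse to export the service until it can
honour both intents, which is exactly the guarantee you want for the security
context.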

I’m not aware of any implementations that support security context flow in this 
way currently.

Best Regards,

Tim

> On 6 Feb 2019, at 13:13, Bernd Eckenfels via osgi-dev 
>  wrote:
> 
> I guess most is thread local, it would be good if extraction/marshaling and 
> transport and demarshalling/setting on both ends could be enhanced with 
> interceptors.
> 
> But maybe a provide specific interface is enough? Did you do it for Aeris RSA 
> Fastbin?
> 
> Gruss
> Bernd
> 
> Gruss
> Bernd
> --
> http://bernd.eckenfels.net
>  
> Von: Christian Schneider 
> Gesendet: Mittwoch, Februar 6, 2019 2:07 PM
> An: Bernd Eckenfels; OSGi Developer Mail List
> Betreff: Re: [osgi-dev] Remote service (thread) context properties?
>  
> JAAS is already standardised. So if the provider (like CXF SOAP or JAX-RS) 
> establishes a JAAS context on your thread then you can access it. I can 
> provide an example if you want.
> I think for open tracing there is also an API that can be used. 
> 
> I am not sure about the others like peer-address, audit, tenant and request 
> ids. 
> Do you have an idea how it can / should work in practice?
> 
> Christian
> 
> Am Mi., 6. Feb. 2019 um 03:08 Uhr schrieb Bernd Eckenfels via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>>:
> When I use a Remote Service for distributed OSGi application I would like my 
> provider to be able to implicitly pass some thread context like tracing IDs 
> and also a user authorization token.
> 
> The OSGi compendium talks about implementation specific security based on 
> codesigning, but not on thread identity (JAAS Context). Was there any plan to 
> add something, like an interceptor mechanism?
> 
> Some of it could be implementation specific, but some form of portable 
> endpoint binding access would be nice, like peer-address, jaas-context, 
> opentracing-id, maybe audit, tenant and request-ids?
> 
> I can enrich my services with a Map for most of it, however 
> then there is no reliable way for the provider to add/ensure some of its 
> protocol header properties and it hides the business interface under removing 
> parameters.
> 
> Gruss
> Bernd
> --
> http://bernd.eckenfels.net 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
> 
> -- 
> -- 
> Christian Schneider
> http://www.liquid-reality.de 
> 
> Computer Scientist
> http://www.adobe.com 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] enRoute Project archetype not working correctly?

2019-02-05 Thread Tim Ward via osgi-dev
Hi Thomas,

What was the command that you used to generate the folders? The Maven Archetype 
Plugin is useful, but the default version used by Maven is pretty old. The OSGi 
enRoute recommendation is that you use version 3.0.1 of the archetype plugin, 
as described in 
https://enroute.osgi.org/tutorial/020-tutorial_qs.html#project-setup 


It’s possible that if you used an old version of the archetype plugin that it 
failed to correctly rename the folder based on the template variable value.
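
For reference, the enRoute quick start generates the project with an invocation
along these lines (please double-check the plugin and archetype versions against
the current tutorial, as they may have moved on):

mvn org.apache.maven.plugins:maven-archetype-plugin:3.0.1:generate \
    -DarchetypeGroupId=org.osgi.enroute.archetype \
    -DarchetypeArtifactId=project \
    -DarchetypeVersion=7.0.0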

Best Regards,

Tim

> On 4 Feb 2019, at 12:13, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> I just used the enRoute project archetype and this one does generate two 
> subfolders named __app-artifactId__ and __impl-artifactId__.
> When I now import this project from git in another Eclipse Instance, Eclipse 
> complains about missing child modules example.app/example.impl  which are the 
> artifactIds I used for app and impl during project creation.
> 
> Shouldn't the folders be named according to the artifactIds instead of 
> __app-artifactId__ and __impl-artifactId__ ?
> 
> Kind regards,
> Thomas
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Move from bnd workspace to maven (enroute) workspace

2019-01-31 Thread Tim Ward via osgi-dev
Hi Thomas,

The simple answer to your question is yes, however the more involved answer is 
that you probably shouldn’t. If you want to read up on ways to handle Maven 
dependency management then I can suggest looking at:

https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
 


This will talk you through how dependencies can be inherited from the parent 
directly (not usually a good idea), how versions of common dependencies can be 
managed centrally in a parent (usually a good idea), and how to construct a 
Bill Of Materials (BOM) which you can use as an easy way to grab a bunch of 
dependencies in one go (much like OSGi enRoute does with its indexes).

As for running directly from the command line. There isn’t an enRoute or bnd 
plugin for that, the smarts are all in Bndtools I’m afraid.

Best Regards,

Tim

> On 31 Jan 2019, at 16:09, Thomas Driessen  
> wrote:
> 
> Hi Tim,
> 
> just to clarify (I'm not really used to maven yet):
> 
> If I want to define a dependency that is used by multiple sub modules, then I 
> MAY put this dependency in the root/parent pom. I also COULD put this 
> dependency in each of the sub module's poms which would have the same effect 
> as the aforementioned approach. I don't need to define the dependecnies in 
> both places. 
> 
> Is this correct?
> 
> 
> Regarding the running and reloading of applications in bndtools: I don't use 
> Eclipse, therefore I asked if there are maven commands that mimc bndtools' 
> behavior ;)
> 
> 
> Kind regards,
> Thomas
> 
> -- Originalnachricht --
> Von: "Tim Ward" mailto:tim.w...@paremus.com>>
> An: "Thomas Driessen"  >; "OSGi Developer Mail List" 
> mailto:osgi-dev@mail.osgi.org>>
> Gesendet: 31.01.2019 16:48:54
> Betreff: Re: [osgi-dev] Move from bnd workspace to maven (enroute) workspace
> 
>> Hi
>> 
>>> On 31 Jan 2019, at 15:22, Thomas Driessen via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> Hi,
>>> 
>>> I'm currently trying to get used to the new enroute maven workspace layout 
>>> and now have some questions :)
>>> 
>>> 1)
>>> In a bnd workspace I had the central.xml file where I put all the 
>>> dependencies I wanted in my local maven bnd worspace repository. Where do I 
>>> put those dependencies now in the maven project workspace? In the 
>>> dependencies section of the root pom or rather in the dependencies section 
>>> of a specific module pom?
>> 
>> In this case you treat your dependencies just like you would in Maven. If 
>> the dependency is used across many modules then you might add it to the 
>> dependencyManagement section of the parent pom (to manage the version in a 
>> single place), but you will always reference a dependency in the module 
>> using it. There is nothing special about this (it really is just vanilla 
>> Maven).
>> 
>>> 
>>> 2)
>>> In a bnd workspace I added the buildtime dependencies of a bundle to its 
>>> bnd file. What's the best practice now in a maven workspace? Do I add those 
>>> build time dependencies in the module pom?
>> 
>> Again, this is a normal Maven build that follows the same rules as all the 
>> Maven examples you can find on the internet. Your module’s compile time and 
>> runtime dependencies should be included in its pom, with the appropriate 
>> scope.
>> 
>>> 
>>> 3)
>>> In Eclipse with bndtools installed and when using a bnd workspace layout I 
>>> am able to press the debug button of a bndrun file and everything is 
>>> perfectly integrated in the IDE. Additionally, when I change code of 
>>> bundles that are currently running in an osgi framework, then those are 
>>> rebuilt and redeployed on the fly.
>> 
>> If you do the same thing in your enRoute workspace you’ll get the same 
>> behaviour.
>> 
>>> 
>>> Is there a way to reproduce a similar behavior only with maven commands and 
>>> a remote debugger?
>> 
>> You can start your application with remote debug enabled (just using the 
>> normal JVM debug arguments as you describe below) but I would recommend that 
>> you just do the same launching that you’ve been doing from a bnd workspace.
>> 
>>> 
>>> Right now I'm following the enroute tutorial and every time I changed 
>>> something in the code I type the following commands:
>>> 1 mvn -pl app -am bnd-indexer:index bnd-indexer:index@test-index 
>>> bnd-resolver:resolve package
>>> 2 java -jar -Xdebug 
>>> -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y 
>>> .\app\target\app.jar
>>> 3 Then I start my remote debugger to attach to the jvm 
>>> 
>>> Are there other maven commands that would me allow to skip step 2 and 3? 
>>> Something like mvn jetty:run for web apps?
>> 
>> There isn’t a Maven command for it, but if you look at the Eclipse version 
>> of the Running the Application 
>> 
>>  section in 

Re: [osgi-dev] Move from bnd workspace to maven (enroute) workspace

2019-01-31 Thread Tim Ward via osgi-dev
Hi

> On 31 Jan 2019, at 15:22, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> I'm currently trying to get used to the new enroute maven workspace layout 
> and now have some questions :)
> 
> 1)
> In a bnd workspace I had the central.xml file where I put all the 
> dependencies I wanted in my local maven bnd worspace repository. Where do I 
> put those dependencies now in the maven project workspace? In the 
> dependencies section of the root pom or rather in the dependencies section of 
> a specific module pom?

In this case you treat your dependencies just like you would in Maven. If the 
dependency is used across many modules then you might add it to the 
dependencyManagement section of the parent pom (to manage the version in a 
single place), but you will always reference a dependency in the module using 
it. There is nothing special about this (it really is just vanilla Maven).

> 
> 2)
> In a bnd workspace I added the buildtime dependencies of a bundle to its bnd 
> file. What's the best practice now in a maven workspace? Do I add those build 
> time dependencies in the module pom?

Again, this is a normal Maven build that follows the same rules as all the 
Maven examples you can find on the internet. Your module’s compile time and 
runtime dependencies should be included in its pom, with the appropriate scope.

> 
> 3)
> In Eclipse with bndtools installed and when using a bnd workspace layout I am 
> able to press the debug button of a bndrun file and everything is perfectly 
> integrated in the IDE. Additionally, when I change code of bundles that are 
> currently running in an osgi framework, then those are rebuilt and redeployed 
> on the fly.

If you do the same thing in your enRoute workspace you’ll get the same 
behaviour.

> 
> Is there a way to reproduce a similar behavior only with maven commands and a 
> remote debugger?

You can start your application with remote debug enabled (just using the normal 
JVM debug arguments as you describe below) but I would recommend that you just 
do the same launching that you’ve been doing from a bnd workspace.

> 
> Right now I'm following the enroute tutorial and every time I changed 
> something in the code I type the following commands:
> 1 mvn -pl app -am bnd-indexer:index bnd-indexer:index@test-index 
> bnd-resolver:resolve package
> 2 java -jar -Xdebug 
> -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y 
> .\app\target\app.jar
> 3 Then I start my remote debugger to attach to the jvm 
> 
> Are there other maven commands that would me allow to skip step 2 and 3? 
> Something like mvn jetty:run for web apps?

There isn’t a Maven command for it, but if you look at the Eclipse version of 
the Running the Application 

 section in the enRoute tutorials you can see how to run inside the IDE.

Best Regards,

Tim


> 
> Kind regards,
> Thomas
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] ContainerRequestFilter/ContainerResponseFilter life cycle

2019-01-30 Thread Tim Ward via osgi-dev


> On 30 Jan 2019, at 16:52, Nhut Thai Le  wrote:
> 
> Tim, 
> 
> Thank you for quickly identify my problems :D, please see my comments inline
> 
> On Wed, Jan 30, 2019 at 10:13 AM Tim Ward  > wrote:
> Hi,
> 
> Firstly:
> 
> Why does DiagramRestApp do this?
> 
>>   @Override
>>   public Set getSingletons() {
>>   return Collections.singleton(this);
>>   }
> 
> That is just all kinds of wrong!
> i'm using aries-jaxrs-whiteboard for our REST api and i was was looking at 
> the test to see how to build the Application class, i think this is what i 
> was following: 
> https://github.com/apache/aries-jax-rs-whiteboard/blob/master/jax-rs.itests/src/main/java/test/types/TestApplication.java
>  
> .
>  I got some errors at that time without the getSingletons() so i left it 
> there. Now i just removed it and it seem fine.

In this case the TestApplication class is a resource class (it declares 
resource methods). It’s still horrible to do this, but it’s being done to 
reduce the number of types in the test.

> 
> Secondly, you’re using String properties everywhere rather than the 
> annotations. This is asking for trouble with typos and inconsistencies.
> You are right, i should change to annotations
> 
> Next:
> 
> The following:
> 
>> @Component(
>>  service = CommonDiagramRESTService.class,
>>  property = { 
>>  "osgi.jaxrs.extension=true", 
>>  "osgi.jaxrs.name 
>> =CommonDiagramRESTService", 
>>  "osgi.jaxrs.application.select=(osgi.jaxrs.name 
>> =DiagramRestApp)" 
>>  }
>> )
>> @Path("/common")
>> public final class CommonDiagramRESTService {..}
> 
> 
> and
> 
>> @Component(
>>  service = GridDiagramRESTService.class,
>>  property = { 
>>   "osgi.jaxrs.extension=true" , 
>>   "osgi.jaxrs.name 
>> =GridDiagramRESTService", 
>>   
>> "osgi.jaxrs.application.select=(osgi.jaxrs.name 
>> =DiagramRestApp)"   
>>  }
>> )
>> @Path("/grid")
>> public final class GridDiagramRESTService {...}
> 
> 
> These types declare that they are extensions - why? They don’t advertise any 
> extension interfaces. They should advertise themselves as resources (which is 
> what they are). I would not expect these to be picked up properly by the 
> whiteboard as they are not extensions and they don’t advertise being 
> resources.
> Sorry, this is my typo, I'm using const so i replaced the const with their 
> values here so u can see how those classes are related, somehow i replaced 
> the property osgi.jaxrs.resource=true by osgi.jaxrs.extension=true when 
> copying here
> 
> 
>> What I observe is that the filter got activated 25 times, although all the 
>> resource classes and the filter bind to the same jaxrs application. Is this 
>> normal?
> 
> Depending on the underlying implementation of the whiteboard and the 
> registration order of the services this may or may not happen. Some JAX-RS 
> implementations are not dynamic, in this case the whole application must be 
> bounced when a change occurs. This will result in the currently registered 
> services being discarded and re-created.
> 
> Best Regards,
> 
> Tim
> 
> 
>> On 30 Jan 2019, at 15:01, Nhut Thai Le > > wrote:
>> 
>> Tim,
>> 
>> I have one jaxrs application:
>> @Component(
>>  service = Application.class,
>>  property= {
>>  "osgi.jaxrs.name=DiagramRestApp",
>>  "osgi.jaxrs.application.base=/diagram/services/diagrams/rest"
>>  }
>> )
>> public final class DiagramRestApp extends Application {
>>   @Override
>>   public Set<Object> getSingletons() {
>>   return Collections.singleton(this);
>>   }
>> }
>> but multiple resource classes, about 25 of them, here is one
>> @Component(
>>  service = CommonDiagramRESTService.class,
>>  property = { 
>>  "osgi.jaxrs.extension=true", 
>>  "osgi.jaxrs.name 
>> =CommonDiagramRESTService", 
>>  "osgi.jaxrs.application.select=(osgi.jaxrs.name 
>> =DiagramRestApp)" 
>>  }
>> )
>> @Path("/common")
>> public final class CommonDiagramRESTService {..}
>> 
>> and another one:
>> @Component(
>>  service = GridDiagramRESTService.class,
>>  property = { 
>>   "osgi.jaxrs.extension=true" , 
>>   "osgi.jaxrs.name 
>> =GridDiagramRESTService", 
>>   
>> 

Re: [osgi-dev] Links to R7 online spec

2019-01-30 Thread Tim Ward via osgi-dev
When I type "osgi specifications” into Google the first links I get are:

To the main specifications page you’re already at (this requires you to follow 
the download link for the version you want)
To the HTML version of the R7 core specification
To the HTML version of the R7 compendium specification
To the wikipedia page listing OSGi specification implementations

After that the signal to noise ratio drops pretty rapidly, but that top 4 feels 
like a pretty good hit rate to me.

If I type:

"OSGi  specifications” into Google then the top link is the HTML 
version of that specification.

Again, this feels like a pretty good hit rate to me, but if you have more ideas 
of things that could be done then please let us know. The intent is that these 
documents are easy to find!

Best Regards,

Tim

> On 30 Jan 2019, at 10:36, Paul F Fraser via osgi-dev  
> wrote:
> 
> Hi David,
> 
> That is fine if you know that.
> But searching with Google it is almost impossible (or I find it almost 
> impossible) to find the online version or even know it exists unless you 
> stumble upon a link that takes you into the online version.
> 
> Paul
> 
> On 30/01/2019 9:00 pm, David Bosschaert wrote:
>> Hi Paul,
>> 
>> If I follow the Download links from there I end up at the actual page that 
>> contains the specs: https://www.osgi.org/release-7-1/ 
>> 
>> That page contains both the PDFs as well as the online ones...
>> 
>> Best regards,
>> 
>> David 
>> 
>> On Wed, 30 Jan 2019 at 00:00, Paul F Fraser via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> Hi,
>> 
>> Who or how do I tell someone that the page 
>> https://www.osgi.org/developer/specifications/ 
>>  needs 
>> links to the online spec.
>> 
>> The bugs and issues area  seems difficult to use without further study.
>> 
>> A low friction way to report these, relatively minor, but annoying aspects 
>> of the search for OSGi 
>> knowledge would be useful.
>> 
>> I suppose someone will suggest that I fix it, but this misses the point.
>> 
>> Paul Fraser
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Prototype scope components and Fluent builder

2019-01-30 Thread Tim Ward via osgi-dev
I’m not quite sure what this question means, but the most obvious 
interpretation is “how should I deal with registering a service that is a 
fluent builder?”. 

The JAX-RS Whiteboard does this with the ClientBuilder service. 
And the way in which it works is highly related to the rules of the builder. In 
the case of the ClientBuilder the builder returns "this", not a copy, therefore it 
is not possible for multiple components to share a single builder service. 
Therefore the builder must be registered and used as a prototype scope service.

If (and only if) your builder creates copies of itself each time a method on it 
is called then it is safe to use it as a singleton or bundle scoped service. 
This situation is comparatively rare.
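
As a sketch of the prototype scope case (untested; the component and target URL
below are invented for illustration), a consumer of the JAX-RS ClientBuilder
service would typically look like this:

package com.example.rest.client;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceScope;

@Component(service = GatewayClient.class)
public class GatewayClient {

    // PROTOTYPE_REQUIRED ensures this component gets its own ClientBuilder
    // instance rather than sharing one mutable builder with other components.
    @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
    private ClientBuilder clientBuilder;

    public String fetchStatus() {
        Client client = clientBuilder.build();
        try {
            return client.target("http://localhost:8080/status") // placeholder URL
                    .request()
                    .get(String.class);
        } finally {
            client.close();
        }
    }
}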

I hope this helps.

Tim

> On 29 Jan 2019, at 16:39, Alain Picard via osgi-dev  
> wrote:
> 
> I am curious if there is a prescribed or suggested approach to using 
> prototype scope components in conjunction with fluent builders?
> 
> Cheers,
> Alain
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Loading xxxApp contents has encountererd NPE

2019-01-30 Thread Tim Ward via osgi-dev
Hi Paul,

It’s not normal to see NPEs during your build, but I also don’t see them when 
using OSGi enRoute. Assuming that you can identify the Maven plugin causing the 
problem it might be worth talking to the maintainers of the plugin. At the very 
least it’s hard to diagnose without more details, such as the stack trace and 
the log entries leading up to it.

Best Regards,

Tim

> On 30 Jan 2019, at 06:04, Paul F Fraser via osgi-dev  
> wrote:
> 
> Hi,
> 
> Now heavily into new enRoute, I am getting quite regularly (multiple) NPEs 
> during build. Does not seem to affect runtime.
> 
> Is this a normal situation or should I look for a problem in my code? Any 
> clues?
> 
> Paul Fraser
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] ContainerRequestFilter/ContainerResponseFilter life cycle

2019-01-29 Thread Tim Ward via osgi-dev
Hi,

As described in the JAX-RS Whiteboard spec 
 
JAX-RS extension instances are required to be singletons (I’m talking about the 
objects, not the services) by the JAX-RS specification itself. Therefore within 
an application you will only ever see one instance of a whiteboard extension 
service.

The reason that you should make extension services prototype is therefore not 
related to per-request handling, but instead because of what happens when the 
same extension service is applied to *multiple applications*. In this scenario 
you will have multiple applications with different configuration and different 
context objects. If your extension service is injected with these objects by 
the JAX-RS runtime (for example being injected with the Application using 
@Context) then which application is that for? In the general case you have no 
idea whether the context object you have been injected with relates to the 
current request or not.

If your extension service is singleton or bundle scoped then the JAX-RS 
whiteboard can only get one instance. It therefore has to use this same 
instance for all of the whiteboard applications and you run into the potential 
“multiple injections” trap. This is obviously fine if you don’t have any 
injection sites, or if all your injection sites are method parameters, but it 
is a risky thing to do as someone may add an injection site later without 
realising that they’ve broken things. This will probably also make it through 
testing as you’ll typically only have one application at a time when testing!

If your extension service is prototype scope then the JAX-RS Whiteboard is able 
to get a different instance to use in each whiteboard application. At this 
point you no longer need to worry about multiple injections because the 
injections happen on different instances.
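
As a rough sketch (untested, with invented names), a prototype scoped extension
registered with the whiteboard might look like this:

package com.example.jaxrs;

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.Context;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ServiceScope;
import org.osgi.service.jaxrs.whiteboard.propertytypes.JaxrsApplicationSelect;
import org.osgi.service.jaxrs.whiteboard.propertytypes.JaxrsExtension;

@Component(scope = ServiceScope.PROTOTYPE)
@JaxrsExtension
@JaxrsApplicationSelect("(osgi.jaxrs.name=*)")
public class LoggingRequestFilter implements ContainerRequestFilter {

    // Safe even when targeting multiple applications, because the whiteboard
    // creates a separate instance of this prototype scoped filter per application.
    @Context
    Application application;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        System.out.println("Request " + requestContext.getUriInfo().getRequestUri()
                + " handled by " + application.getClass().getSimpleName());
    }
}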

I hope this answers your question, and helps to further explain why prototype 
scope is a good thing for filters!

Best Regards,

Tim

> On 29 Jan 2019, at 15:18, Raymond Auge via osgi-dev  
> wrote:
> 
> I'm going to assume you are talking about:
> 
> HttpService[1] or Http Whiteboard[2] w.r.t. the reference to Servlet
> AND
> JAX-RS Whiteboard[3] w.r.t. the reference to ContainerRequestFilter
> 
> These 2(3) features are separate concerns and the ContainerRequestFilter of 
> the JAX-RS whiteboard spec doesn't apply to the Servlets of the Http 
> Whiteboard. You probably just want a regular old servlet Filter[4]
> 
> Now it's possible that you are talking about some other runtime that packs 
> all these things together. If so, you probably want to ask the implementors 
> about this.
> 
> Hope that helps clear things up,
> - Ray
> 
> [1] https://osgi.org/specification/osgi.cmpn/7.0.0/service.http.html 
> 
> [2] 
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html 
> 
> [3] https://osgi.org/specification/osgi.cmpn/7.0.0/service.jaxrs.html 
> 
> [4] 
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html#d0e121055
>  
> 
> 
> On Tue, Jan 29, 2019 at 9:59 AM Nhut Thai Le via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Hello,
> 
> I have a component implementing ContainerRequestFilter to intercept REST 
> calls and another component implements servlet.Filter. Both have PROTOTYPE 
> scope, my understanding is that these filter are instantiated and activated 
> for each web request but yesterday when i put some breakpoints in the 
> @activate method, i did not see them get called when a web request arrives. 
> Did I miss something? If they are not init/activate per request why are they 
> recomeded to be prototype?
> 
> Thai Le
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
> 
> -- 
> Raymond Augé  (@rotty3000)
> Senior Software Architect Liferay, Inc.  (@Liferay)
> Board Member & EEG Co-Chair, OSGi Alliance  (@OSGiAlliance)
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Scanning classes at runtime

2019-01-23 Thread Tim Ward via osgi-dev
Depending on how much control you have over the bundles providing the types you 
could also look at providing your own extender capabiltiy/requirement model 
which would further restrict your search space, and potentially allow further 
optimisations. You should take a look at the OSGi CDI specification from the 
enterprise R7 release (currently in draft) which does something very much like 
this.

Tim

> On 23 Jan 2019, at 16:48, Tim Ward  wrote:
> 
> Hi,
> 
> You can optimise this pretty heavily by using the bundle wirings to look for 
> bundles which are wired to the same API packages as you (the one(s) that 
> contain the relevant interface/supertype/annotation). This way you can avoid 
> scanning bundles which can’t possibly contain relevant types.
> 
> Best Regards,
> 
> Tim
> 
>> On 23 Jan 2019, at 16:10, Thomas Driessen via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> 
>> Hi,
>> 
>> for a project of mine I need to mimic the behavior of a 
>> ServletContainerInitializer [1], but with the dynamism of OSGi in mind.
>> 
>> Therefore, I need to be able to find all classes that implement/extend a 
>> specific class or are annotated with a specific annotation at runtime.
>> 
>> Is there a better way to do so that I'm not aware of than to scan each class 
>> of every bundle? Are there maybe framework hooks that would help me 
>> accomplish this?
>> 
>> I'm thankful for every hint you can provide.
>> 
>> Kind regards,
>> Thomas
>> 
>> [1] 
>> https://docs.oracle.com/javaee/6/api/javax/servlet/ServletContainerInitializer.html
>>  
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Scanning classes at runtime

2019-01-23 Thread Tim Ward via osgi-dev
Hi,

You can optimise this pretty heavily by using the bundle wirings to look for 
bundles which are wired to the same API packages as you (the one(s) that 
contain the relevant interface/supertype/annotation). This way you can avoid 
scanning bundles which can’t possibly contain relevant types.
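
A rough sketch of that idea (untested; it assumes the annotation or supertype
you care about lives in a package exported by "apiBundle"):

import java.util.Collection;
import java.util.List;

import org.osgi.framework.Bundle;
import org.osgi.framework.wiring.BundleRevision;
import org.osgi.framework.wiring.BundleWire;
import org.osgi.framework.wiring.BundleWiring;

public class WiringScanner {

    public void scan(Bundle apiBundle) {
        BundleWiring apiWiring = apiBundle.adapt(BundleWiring.class);
        // Only bundles wired to one of our exported packages can see the API types.
        // A real implementation would also check each wire's capability attributes
        // to make sure it is the package containing the annotation/supertype.
        List<BundleWire> wires = apiWiring.getProvidedWires(BundleRevision.PACKAGE_NAMESPACE);
        for (BundleWire wire : wires) {
            BundleWiring consumer = wire.getRequirerWiring();
            // Enumerate class resources without loading classes from bundles
            // that cannot possibly contain relevant types.
            Collection<String> names = consumer.listResources("/", "*.class",
                    BundleWiring.LISTRESOURCES_LOCAL | BundleWiring.LISTRESOURCES_RECURSE);
            for (String resource : names) {
                // load or bytecode-scan the candidate class here, e.g. via
                // consumer.getBundle().loadClass(...)
            }
        }
    }
}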

Best Regards,

Tim

> On 23 Jan 2019, at 16:10, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> for a project of mine I need to mimic the behavior of a 
> ServletContainerInitializer [1], but with the dynamism of OSGi in mind.
> 
> Therefore, I need to be able to find all classes that implement/extend a 
> specific class or are annotated with a specific annotation at runtime.
> 
> Is there a better way to do so that I'm not aware of than to scan each class 
> of every bundle? Are there maybe framework hooks that would help me 
> accomplish this?
> 
> I'm thankful for every hint you can provide.
> 
> Kind regards,
> Thomas
> 
> [1] 
> https://docs.oracle.com/javaee/6/api/javax/servlet/ServletContainerInitializer.html
>  
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] OSGi Http Client Missing Requirement

2019-01-21 Thread Tim Ward via osgi-dev
Hi Paul,

This is a different problem, in that where the previous requirement for the 
“osgi.service” namespace was an “active” time requirement (and therefore not 
enforced at runtime), this is a resolve time requirement that *will* prevent 
your bundles from resolving/starting in the framework. It is also the case that 
osgi.wiring.package namespace corresponds to the Import-Package part of your 
bundle manifest - your bundle uses types from the org.apache.http package and 
so you need to deploy a bundle which contains and exports that package. This is 
therefore not so much a small packaging bug, but either a significant 
dependency graph issue (the right bundle doesn’t end up in your index when it 
should) or a significant design issue (a core part of the API isn’t exported) 
in the HttpClient project.

As a separate question - is there a reason that you are trying to use the 
Apache HttpClient? OSGi enRoute makes use of the JAX-RS Whiteboard which 
includes a simple way to access the JAX-RS specification client implementation. 
If it’s possible for you to use the existing JAX-RS Client instead then you 
could avoid adding the duplicate function.

Best Regards,

Tim

> On 21 Jan 2019, at 05:56, Paul F Fraser via osgi-dev  
> wrote:
> 
> Using the Apache httpclient-osgi bundle 
> https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient-osgi/4.5.6
>  in a (new) enRoute project I get this error-
> 
> [ERROR] Failed to execute goal 
> biz.aQute.bnd:bnd-export-maven-plugin:4.1.0:export (default) on project 
> qQNESiteAPP:
> Unable to resolve <>: missing requirement 
> osgi.identity;filter:='(osgi.identity=net.qnenet.qnesite.qHttpClientImpl)'
> [caused by: Unable to resolve net.qnenet.qnesite.qHttpClientImpl 
> version=0.0.1.201901210415:
> missing requirement 
> osgi.wiring.package;filter:='(&(osgi.wiring.package=org.apache.http))'] -> 
> [Help 1]
> 
> Is this a similar problem to that discussed where Tim Ward stated -
> 
> "Now in fact the Felix Http Jetty implementation used in OSGi enRoute *does* 
> provide this service, however it is missing the metadata from the manifest 
> saying that it provides this service. This is a packaging bug in the Felix 
> Jetty bundle, and is why the resolve fails."
> 
> Checking the exports from the bundle, "org.apache.http" appears many times 
> but I cannot see a distinct export for that package although I could have 
> missed it.
> 
> Is there a problem with the apache httpclient for OSGi purposes?
> 
> Paul Fraser
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Vaadin flow works in bnd workspace, fails in enRoute project

2019-01-16 Thread Tim Ward via osgi-dev
Hi Paul,

The error you’re seeing is a resolution failure because the 
"com.vaadin.flow.osgi version=1.2.3” bundle has a requirement in the 
“osgi.service” namespace for the service with interface 
“org.osgi.service.http.HttpService”. This basically says that the Vaadin flow 
bundle is trying to use the HttpService in some way, and therefore the exported 
application needs to include a bundle which provides the HttpService.

Now in fact the Felix Http Jetty implementation used in OSGi enRoute *does* 
provide this service, however it is missing the metadata from the manifest 
saying that it provides this service. This is a packaging bug in the Felix 
Jetty bundle, and is why the resolve fails.

There is a further question, which is why on earth the Vaadin Flow bundle is 
using the HttpService? It would be better and easier for them to use the Http 
Whiteboard than to use the old HttpService to provide content…
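
For comparison, serving content via the Http Whiteboard is as simple as the
following sketch (names invented here):

package com.example.web;

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.http.whiteboard.propertytypes.HttpWhiteboardServletPattern;

// Registering a Servlet service with a pattern property is all that is needed;
// the whiteboard implementation takes care of the rest.
@Component(service = Servlet.class)
@HttpWhiteboardServletPattern("/hello/*")
public class HelloServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from the Http Whiteboard");
    }
}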

Best Regards,

Tim

> On 16 Jan 2019, at 04:45, Paul F Fraser via osgi-dev  
> wrote:
> 
> Hi,
> 
> Following the process below I have sucessfully managed to run a vaadin flow 
> bundle in OSGi.
> 
> Create vaadin flow bundle in enroute maven project. 
> https://github.com/QNENet/enRouteQNEFlow-0.0.1 
> 
> Use flow bundle in bnd workspace and export as jar, which of course contains 
> all necessary dependencies.
> java -jar flow.jar  works at localhost:8080 
> https://s3-ap-southeast-2.amazonaws.com/qnenet/vaadinFlow/flow.jar 
> 
> create application from enRoute archetype
> Add all dependencies as used in exported bnd workspace flow.jar
> enRoute app fails.
> [ERROR] Failed to execute goal 
> biz.aQute.bnd:bnd-export-maven-plugin:4.1.0:export (default) on project 
> flow-app: Unable to resolve <>: missing requirement 
> osgi.identity;filter:='(osgi.identity=com.vaadin.flow.osgi) 
> ' [caused by: Unable to 
> resolve com.vaadin.flow.osgi version=1.2.3: missing requirement 
> osgi.service;filter:='(objectClass=org.osgi.service.http.HttpService)';effective:='active
>  
> ']
>  -> [Help 1]
> 
> Any assistance to solve this last (?) hurdle to having an OSGi work 
> environment for vaadin flow development would be most appreciated.
> 
> Thanks
> 
> Paul Fraser
> 
> 
> 
> 
> 
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Push Streams Event grouping and batching

2019-01-07 Thread Tim Ward via osgi-dev
I wrote the whole of this myself this morning - I tend to do examples as unit 
tests because they’re easy for people to try out and run. 

I’m not aware of a substantial PushStream test bucket other than the OSGi 
Compliance Test Suite, which is available to members. 

If you want to start an open source PushStream implementation project then I’m 
sure it could live alongside the Promises implementation in Apache Aries!

Tim. 

Sent from my iPhone

> On 7 Jan 2019, at 14:01, Alain Picard  wrote:
> 
> Tim,
> 
> Thanks, will review an apply.
> 
> BTW, this seems to come from some of the tests, and I've been looking where 
> tests are located, as this is often very revealing in how various aspects 
> actually work and I have not been able to find them. Are they on Github 
> somewhere?
> 
> Alain
> 
> 
>> On Mon, Jan 7, 2019 at 8:11 AM Tim Ward  wrote:
>> This use case is effectively a “debouncing” behaviour, which is possible to 
>> implement with a little thought.
>> 
>> There are a couple of ways to attempt it. This one uses the asyncMap 
>> operation to asynchronously gather the events until it either times out the 
>> promise or it hits the maximum stream size. Note that you have to filter out 
>> the “empty” lists that are used to resolve the promises which are being 
>> aggregated into the window. The result of this is a window which starts on 
>> the first event arrival and then buffers the events for a while. The next 
>> window isn’t started until the next event
>> 
>> 
>> Best Regards,
>> 
>> Tim
>> 
>> 
>>  @Test
>>  public void testWindow2() throws InvocationTargetException, 
>> InterruptedException {
>> 
>>  PushStreamProvider psp = new PushStreamProvider();
>> 
>>  SimplePushEventSource<Integer> sps = 
>> psp.createSimpleEventSource(Integer.class);
>> 
>>  Promise<List<Collection<Integer>>> result = 
>> psp.createStream(sps)
>>  .asyncMap(2, 0, new StreamDebouncer<Integer>(
>>  new 
>> PromiseFactory(PromiseFactory.inlineExecutor()), 
>>  10, Duration.ofSeconds(1)))
>>  .filter(c -> !c.isEmpty())
>>  .collect(Collectors.toList());
>> 
>>  new Thread(() -> {
>> 
>>  for (int i = 0; i < 200;) {
>> 
>>  for (int j = 0; j < 23; j++) {
>>  sps.publish(i++);
>>  }
>> 
>>  try {
>>  System.out.println("Burst finished, now 
>> at " + i);
>>  Thread.sleep(2000);
>>  } catch (InterruptedException e) {
>>  sps.error(e);
>>  break;
>>  }
>>  }
>> 
>>  System.out.println("Closing generator");
>>  sps.close();
>> 
>>  }).start();
>> 
>>  System.out.println(result.getValue().toString());
>> 
>>  }
>>  
>>  public static class StreamDebouncer<T> implements Function<T, Promise<? extends Collection<T>>> {
>> 
>>  private final PromiseFactory promiseFactory;
>>  private final int maxSize;
>>  private final Duration maxTime;
>>  
>>  private final Object lock = new Object();
>>  
>>  private List<T> currentWindow;
>>  private Deferred<Collection<T>> currentDeferred;
>> 
>>  public StreamDebouncer(PromiseFactory promiseFactory, int 
>> maxSize, Duration maxTime) {
>>  this.promiseFactory = promiseFactory;
>>  this.maxSize = maxSize;
>>  this.maxTime = maxTime;
>>  }
>> 
>>  @Override
>>  public Promise<Collection<T>> apply(T t) throws Exception {
>>  
>>  Deferred<Collection<T>> deferred = null;
>>  Collection<T> list = null;
>>  boolean hitMaxSize = false;
>>  synchronized (lock) {
>>  if(currentWindow == null) {
>>  currentWindow = new 
>> ArrayList<>(maxSize);
>>  currentDeferred = 
>> promiseFactory.deferred();
>>  deferred = currentDeferred;
>>  list = currentWindow;
>>  }
>>  currentWindow.add(t);
>>  if(currentWindow.size() == maxSize) {
>>  hitMaxSize = true;
>>  deferred = currentDeferred;
>>  currentDeferred = null;
>>  list = currentWindow;
>>

Re: [osgi-dev] Push Streams Event grouping and batching

2019-01-07 Thread Tim Ward via osgi-dev
This use case is effectively a “debouncing” behaviour, which is possible to 
implement with a little thought.

There are a couple of ways to attempt it. This one uses the asyncMap operation 
to asynchronously gather the events until it either times out the promise or it 
hits the maximum stream size. Note that you have to filter out the “empty” 
lists that are used to resolve the promises which are being aggregated into the 
window. The result of this is a window which starts on the first event arrival 
and then buffers the events for a while. The next window isn’t started until 
the next event


Best Regards,

Tim


@Test
public void testWindow2() throws InvocationTargetException, 
InterruptedException {

PushStreamProvider psp = new PushStreamProvider();

SimplePushEventSource<Integer> sps = 
psp.createSimpleEventSource(Integer.class);

Promise<List<Collection<Integer>>> result = 
psp.createStream(sps)
.asyncMap(2, 0, new StreamDebouncer<Integer>(
new 
PromiseFactory(PromiseFactory.inlineExecutor()), 
10, Duration.ofSeconds(1)))
.filter(c -> !c.isEmpty())
.collect(Collectors.toList());

new Thread(() -> {

for (int i = 0; i < 200;) {

for (int j = 0; j < 23; j++) {
sps.publish(i++);
}

try {
System.out.println("Burst finished, now 
at " + i);
Thread.sleep(2000);
} catch (InterruptedException e) {
sps.error(e);
break;
}
}

System.out.println("Closing generator");
sps.close();

}).start();

System.out.println(result.getValue().toString());

}

public static class StreamDebouncer<T> implements Function<T, Promise<? extends Collection<T>>> {

private final PromiseFactory promiseFactory;
private final int maxSize;
private final Duration maxTime;

private final Object lock = new Object();

private List<T> currentWindow;
private Deferred<Collection<T>> currentDeferred;

public StreamDebouncer(PromiseFactory promiseFactory, int 
maxSize, Duration maxTime) {
this.promiseFactory = promiseFactory;
this.maxSize = maxSize;
this.maxTime = maxTime;
}

@Override
public Promise<Collection<T>> apply(T t) throws Exception {

Deferred<Collection<T>> deferred = null;
Collection<T> list = null;
boolean hitMaxSize = false;
synchronized (lock) {
if(currentWindow == null) {
currentWindow = new 
ArrayList<>(maxSize);
currentDeferred = 
promiseFactory.deferred();
deferred = currentDeferred;
list = currentWindow;
}
currentWindow.add(t);
if(currentWindow.size() == maxSize) {
hitMaxSize = true;
deferred = currentDeferred;
currentDeferred = null;
list = currentWindow;
currentWindow = null;
}
}

if(deferred != null) {
if(hitMaxSize) {
// We must resolve this way round to 
avoid racing
// the timeout and ending up with empty 
lists in
// all the promises

deferred.resolve(Collections.emptyList());
return promiseFactory.resolved(list);
} else {
final Collection<T> finalList = list;
return deferred.getPromise()

.timeout(maxTime.toMillis())
.recover(x -> {
   

Re: [osgi-dev] PushStream Question

2018-12-10 Thread Tim Ward via osgi-dev
Hi Clément,

You should raise a bug about the JavaDoc/implementation inconsistency in the 
OSGi bugzilla. The StackOverflow question is now answered, hopefully to your 
satisfaction.

Best Regards,

Tim

> On 9 Dec 2018, at 21:35, Clément Delgrange via osgi-dev 
>  wrote:
> 
> Hi all,
> 
> I have two questions related to the OSGi PushStream implementation. The first 
> one is on Stackoverflow.com 
> ; the 
> second is the publish method of the SimplePushEventSource says that it throws 
> a IllegalStateException if the source is closed:
> 
> /**
> * Asynchronously publish an event to this stream and all connected
> * {@link PushEventConsumer} instances. When this method returns there is no
> * guarantee that all consumers have been notified. Events published by a
> * single thread will maintain their relative ordering, however they may be
> * interleaved with events from other threads.
> *
> * @param t
> * @throws IllegalStateException if the source is closed
> */
>  void publish(T t);
> 
> But in the implementation it only returns:
> @Override
> public void publish(T t) {
> enqueueEvent(PushEvent.data(t));
> }
> 
private void enqueueEvent(PushEvent<T> event) {
> synchronized (lock) {
> if (closed || connected.isEmpty()) {
> return;
> }
> }
> 
> try {
> queuePolicy.doOffer(queue, event);
> boolean start;
> synchronized (lock) {
> start = !waitForFinishes && semaphore.tryAcquire();
>}
>if (start) {
>   startWorker();
>}
>} catch (Exception e) {
>close(PushEvent.error(e));
>throw new IllegalStateException("The queue policy threw an exception", 
> e);
>   }
> }
> 
> When the exception is thrown? I have tested with the following code:
> 
> source.close();
> source.publish( 1 );
> 
> and effectively it only returns. 
> 
> Thanks
> --
> Clément Delgrange  >
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] OSGi connecting to SqlServer with Apache Aries

2018-12-09 Thread Tim Ward via osgi-dev
It sounds like you have done the right things, but my guess is that you don’t 
have a JDBC service implementation for SQLServer. H2 and Postgres have done the 
work to implement the OSGi standard (it’s pretty small), but quite a few other 
providers haven’t. 

You could either roll your own adapter in 100 lines of code, or grab one from 
Open Source, for example 
https://github.com/ops4j/org.ops4j.pax.jdbc/blob/master/pax-jdbc-mssql/pom.xml

This will register the DataSourceFactory service needed by Aries, and 
everything should work from there.
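
For reference, a hand-rolled adapter is little more than the following untested
sketch (it assumes the Microsoft driver classes named below are on the class
path and uses Declarative Services):

package com.example.jdbc.mssql;

import java.sql.Driver;
import java.sql.SQLException;
import java.util.Properties;

import javax.sql.ConnectionPoolDataSource;
import javax.sql.DataSource;
import javax.sql.XADataSource;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.jdbc.DataSourceFactory;

import com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource;
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;
import com.microsoft.sqlserver.jdbc.SQLServerDriver;
import com.microsoft.sqlserver.jdbc.SQLServerXADataSource;

@Component(property = {
        DataSourceFactory.OSGI_JDBC_DRIVER_CLASS + "=com.microsoft.sqlserver.jdbc.SQLServerDriver",
        DataSourceFactory.OSGI_JDBC_DRIVER_NAME + "=Microsoft JDBC Driver for SQL Server"
})
public class SQLServerDataSourceFactory implements DataSourceFactory {

    @Override
    public DataSource createDataSource(Properties props) throws SQLException {
        SQLServerDataSource ds = new SQLServerDataSource();
        applyCommonProps(ds, props);
        return ds;
    }

    @Override
    public ConnectionPoolDataSource createConnectionPoolDataSource(Properties props) throws SQLException {
        SQLServerConnectionPoolDataSource ds = new SQLServerConnectionPoolDataSource();
        applyCommonProps(ds, props);
        return ds;
    }

    @Override
    public XADataSource createXADataSource(Properties props) throws SQLException {
        SQLServerXADataSource ds = new SQLServerXADataSource();
        applyCommonProps(ds, props);
        return ds;
    }

    @Override
    public Driver createDriver(Properties props) throws SQLException {
        return new SQLServerDriver();
    }

    // Only the most common properties are mapped here.
    private void applyCommonProps(SQLServerDataSource ds, Properties props) {
        if (props == null) {
            return;
        }
        if (props.getProperty(JDBC_URL) != null) {
            ds.setURL(props.getProperty(JDBC_URL));
        }
        if (props.getProperty(JDBC_USER) != null) {
            ds.setUser(props.getProperty(JDBC_USER));
        }
        if (props.getProperty(JDBC_PASSWORD) != null) {
            ds.setPassword(props.getProperty(JDBC_PASSWORD));
        }
    }
}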

Best Regards,

Tim

Sent from my iPhone

> On 8 Dec 2018, at 18:05, Jim Rayburn via osgi-dev  
> wrote:
> 
> My environment is Eclipse and BND using Bndtools. I have an application, 
> providerapi, provider, persistenceapi and persistenceprovider bundles. I have 
> a configuration.json file in the application bundles 
> resource/OSGI-INF/configurator/ folder. I configured it to connect to a 
> postgres database. I provided eclipselink parameters in the persistence.xml 
> file to drop and create the database schema and tables. It all works using 
> the postgres database.
>  
> When I configure it to connect to a MS Sql Server (2012). I am using the 
> com.microsoft.sqlserver:mssql-jdbc:jar:7.1.3.jre8-preview bundle.
>  
> …
> "osgi.jdbc.driver.class": 
> "com.microsoft.sqlserver.jdbc.SqlServerDriver",
> "url": "jdbc:sqlserver://127.0.0.1:1433/db",
> …
>  
> I verified that I am using a sys_admin user so permissions should not be 
> causing an issue.
>  
> For postgres I see it using the zaxxer bundle (HikariPool) but I don’t get 
> the same output or even errors (that differ from accessing postgres) when 
> trying to connect to the SqlServer.
>  
> Thank you for any help you may be able to provide.
>  
> Jim
>  
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] enRoute JPA example questions

2018-12-07 Thread Tim Ward via osgi-dev
Hi Clément,

> The de-coupling is not the fact that they are DTOs? If the API had defined 
> Interfaces or Classes the de-coupling would be the same.
In this case the decoupling is from the data model. If the DTO types were JPA 
entities (i.e. they had JPA annotations on them) then everyone would be coupled 
to the JPA data model, and in turn, the database. You are correct that the 
types don’t need to be DTOs for this to be the case, however DTOs are a good 
choice because they force you to treat the exchanged objects as pure data, 
rather than an implementation specific behaviour or mapping. Perhaps a slightly 
better wording would be "Because of the de-coupling provided by the DTO 
package, all we need do is 
re-implement dao-impl…”

> I understand the usage of DTOs for the REST service (as data are 
> leaving/coming) but not for the *DAO services API. The actual data leaving 
> the system are the *Entity classes in the implementation bundle (so the 
> transferable objects are converted into other transferable objects!).


As I mention above, it would be very bad for the DAO to use entity types in its 
API as this would force the implementation decision. The implementing bundle 
should be free to use JDBC, JPA, NoSQL or whatever it wants to store and 
retrieve values. The moment that you start talking about Entity Classes you’re 
already most of the way to an ORM implementation, which is one heck of an 
implementation internals leak.

> Also the fact that the DTOs are exported and used in the service contract in 
> the API bundle, the REST-API (and so the clients) is coupled to the internal 
> representation of the Java application. 
The REST API is actually not coupled to the DTO representation - the REST 
service implementation could do whatever transformation it chose internally. 
For simplicity we have chosen to keep the REST responses close to the DTOs, but 
we could have quite easily changed all of the field names in the JSON. Again, 
the OSGi implementations communicate using an API, but they can convert between 
internal and external representations as needed. In this case the external 
“REST” representation doesn’t require a transformation, but it could easily be 
transformed to if required.

> I thought the DTOs was more data-api specific to a provider bundle, such as : 
> some-framework-rest-provider (with private DTOs) --adapt--> Java application 
> (with domain interface/class) --adapt--> jpa-dao-provider (with private DTOs) 
> .

It’s quite the opposite - the DTOs are the public inter-bundle communication 
data. They are used this way in a lot of OSGi specifications.

> If in contrary the purpose of the DTOs is to have one consistent data 
> representation which evolves as soon as it is required by one of the system 
> (rest, java application, database-schema), how to deal with framework 
> specific annotations?

Objects annotated with framework specific annotations belong on the *inside* of 
the bundle using the framework - the OSGi enRoute micro service example is a 
great place to look. The JPA entities are internal to the JPA implementation of 
the Data Access Service, as it is the only user bundle that knows that JPA is 
being used.
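
As a sketch (the type names here are illustrative, not taken from the enRoute
example), the entity stays inside dao-impl and is mapped to the public DTO at
the service boundary:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// PersonDTO is assumed to be a simple public-field DTO exported by the API
// bundle; this entity class is never exported.
@Entity
public class PersonEntity {

    @Id
    @GeneratedValue
    private long id;
    private String firstName;
    private String lastName;

    // The mapping happens inside dao-impl, so no JPA annotations or classes
    // ever leak out of the bundle.
    public PersonDTO toDTO() {
        PersonDTO dto = new PersonDTO();
        dto.id = id;
        dto.firstName = firstName;
        dto.lastName = lastName;
        return dto;
    }
}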

Best Regards,

Tim

> On 6 Dec 2018, at 19:30, Clément Delgrange via osgi-dev 
>  wrote:
> 
> Hi,
> 
> I have some questions regarding the OSGi enRoute microservice example and the 
> usage of DTOs. (enroute-tutorial-jpa 
> , 
> enroute-dto ).
> 
> I think I understand the purpose of DTOs (easy to write, easy to process, 
> useful when data leave the system) but their usage in the JPA example is not 
> clear:
> 
> Because of the de-coupling provided by the DTOs, all we need do is 
> re-implement dao-impl...
> 
> The de-coupling is not the fact that they are DTOs? If the API had defined 
> Interfaces or Classes the de-coupling would be the same.
> I understand the usage of DTOs for the REST service (as data are 
> leaving/coming) but not for the *DAO services API. The actual data leaving 
> the system are the *Entity classes in the implementation bundle (so the 
> transferable objects are converted into other transferable objects!).
> Also the fact that the DTOs are exported and used in the service contract in 
> the API bundle, the REST-API (and so the clients) is coupled to the internal 
> representation of the Java application. 
> I thought the DTOs was more data-api specific to a provider bundle, such as : 
> some-framework-rest-provider (with private DTOs) --adapt--> Java application 
> (with domain interface/class) --adapt--> jpa-dao-provider (with private DTOs) 
> .
> 
> If in contrary the purpose of the DTOs is to have one consistent data 
> representation which evolves as soon as it is required by one of the system 
> (rest, java 

Re: [osgi-dev] Problem starting Aries JAX-RS Whiteboard

2018-12-03 Thread Tim Ward via osgi-dev
Hi Alain,

Sorry this took you so long to work out - Sunday isn’t the best time to get a 
response I’m afraid! 


In summary - this should have “just worked” for you. I’m sorry that it didn’t, 
but unfortunately in this case the web container implementation that you are 
using has some packaging bugs that make it break in certain scenarios.

I would suggest raising bugs against PAX-Web to see if you can get them to fix 
these issues.


Explanation follows:

> By debugging the Aries Whiteboard activator, it is looking for a service 
> matching; 
> "(&(objectClass=org.osgi.service.http.runtime.HttpServiceRuntime)(osgi.http.endpoint=*))”

Part of the JAX-RS Whiteboard specification requires the JaxrsServiceRuntime 
service to advertise its root URI as a service property so that other services 
can work out where the resources will be hosted. The Aries JAX-RS Whiteboard 
implementation does not provide its own web container and instead makes use of 
the Http Whiteboard. This means that it has to query the HttpServiceRuntime 
service to work out what the base URI is. Note that this is an implementation 
decision - Aries JAX-RS Whiteboard could have provided its own embedded web 
container, but the consensus was that it should focus on JAX-RS and allow 
people to combine it into their existing web applications.

> This was colliding between org.ops4j.pax.web.pax-web-api (which I see as 
> being used by the working copy) and in my case "org.eclipse.osgi.services" 
> and "org.osgi.service.http.whiteboard 1.1" as well. I got rid of 
> "org.eclipse.osgi.services"and instead locally use the org.osgi.* bundles and 
> for the http.whiteboard, I lowered my import package to 1.0 to match paxweb 
> export package version and the issue is gone. I honestly think that 
> "org.eclipse.osgi.services" is evil.

Much like the osgi.cmpn bundle, bundles which aggregate a bunch of otherwise 
unrelated APIs into an uber bundle are a bad thing. This is why the osgi.cmpn 
bundle has been made unresolvable (i.e. not possible to deploy) and the 
org.eclipse.osgi.services bundle should do the same.

That being said - the main issue isn’t related to the org.eclipse.osgi.services 
bundle, but actually as a result of bad metadata in the pax web Http Whiteboard 
implementation. The Aries JAX-RS Whiteboard contains the following requirement:

Require-Capability: 
osgi.implementation;filter:="(&(osgi.implementation=osgi.http)(version>=1)(!(version>=2)))"

This requirement is there to ensure two things:

1. To make sure that an Http Whiteboard implementation is resolved and deployed
2. To make sure that the OSGi framework wires up the package space in the correct 
way

Item 2 is the problem that you are seeing, and it’s because the PAX-Web 
implementation is failing to properly provide the implementation capability. 
This is how it is supposed to be provided 
(https://osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html#d0e121954
 
)

osgi.implementation;osgi.implementation="osgi.http";version:Version="1.0";uses:="javax.servlet,javax.servlet.http,org.osgi.service.http.context,org.osgi.service.http.whiteboard"

The bundle org.ops4j.pax.web.pax-web-runtime does do exactly this, but the 
person who did it failed to understand what a uses constraint actually meant. 
Specifically it’s an instruction to the resolver to say that “A bundle wired to 
this capability must use the same package instances as the provider of the 
capability for these packages”. The problem is that the 
org.ops4j.pax.web.pax-web-runtime doesn’t use these packages! A uses constraint 
is only valuable if the bundle either imports or exports the package that it 
refers to, otherwise the resolver is free to make any choice it likes when 
resolving your bundles. This is what you were seeing happen, and is why the 
JAX-RS Whiteboard was wired to an incompatible version of the Http Whiteboard 
packages.

To fix this PAX Web needs to be updated to put the http whiteboard capability 
on the correct bundle (in this case 
org.ops4j.pax.web.pax-web-extender-whiteboard) which is the bundle that appears 
to actually implement the Http Whiteboard, and is wired to all the packages. I 
would also note that Pax Web puts the osgi.service capability for the 
HttpServiceRuntime service on the org.ops4j.pax.web.pax-web-runtime bundle, 
even though this bundle is not the one that provides the service! This could 
lead to yet more nonsense when provisioning.

Best Regards,

Tim
  
> On 2 Dec 2018, at 16:21, Alain Picard via osgi-dev  
> wrote:
> 
> Ok,
> 
> After another 10 hours of frustration, I finally solved the issue. By 
> debugging the Aries Whiteboard activator, it is looking for a service 
> matching; 
> "(&(objectClass=org.osgi.service.http.runtime.HttpServiceRuntime)(osgi.http.endpoint=*))".
>  Happens that I am having resolver chain issues with this specifically (and 
> my colleague is not, 

Re: [osgi-dev] Correct way to handle error/recovery in PushStream

2018-11-29 Thread Tim Ward via osgi-dev
That looks reasonable to me. You could also do it using an error handler on the 
promise returned by forEach(), but what you’re doing is also fine.

Depending on your threading model you may lose some events. Specifically, if 
the SimplePushEventSource that you are using has a buffer (which it does unless 
you’ve worked to avoid it) with a parallelism greater than 1 (the default is 
one), then some events may be delivered into the failed push stream by other 
threads (if they are available) and discarded. If there is only one worker 
thread, or a parallelism of one, then this won’t happen, as your “onError” 
connects a new listener *before* the old one returns its back pressure to the 
SimplePushEventSource.

Note that this whole reasoning goes out of the window if you add another buffer 
between the SimplePushEventSource and your error handler.
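
For reference, a sketch of the alternative mentioned above - handling the failure on the promise returned by forEach() rather than with onError() - reusing the names from the quoted code below:

private void handleEvents() {
    psp.createStream(spes) //NOSONAR as we don't close it
       .filter(isOfInterest())
       .forEach(entry -> entry.listener.notify(entry.notification))
       .onFailure(e -> {
           log.error(Messages.CNCI_0, e);
           handleEvents(); // start a new stream, as in the onError variant
       });
}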

I hope this helps,

Tim

> On 28 Nov 2018, at 23:32, Alain Picard via osgi-dev  
> wrote:
> 
> We ended up with an exception thrown in the forEach of our stream, which is a 
> stream to manage notifications and that should be always on. Nothing got 
> reported, but the stream stopped working. Finally testing isConnected 
> reported false and then found the source of the exception.
> 
> Now digging, we found out that there are onError and onClose, but couldn't 
> find any example. First time tried to insert after the createStream but 
> debugging found that this seemed to be tied to the filter intermediate 
> stream. Now moving the onClose, is working but very unsure if the pattern is 
> correct.
> 
> private void handleEvents() {
>psp.createStream(spes) //NOSONAR as we don't close it
>   .filter(isOfInterest())
>   .onError(e -> {
>   log.error(Messages.CNCI_0, e);
>   handleEvents(); // start a new stream
>   })
>   .forEach(entry -> entry.listener.notify(entry.notification)
>);
>  }
> 
> Thanks
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] No service events for an embedded OSGi Framework?

2018-11-28 Thread Tim Ward via osgi-dev
My guess is that you are trying to start a framework from inside a framework, 
and then the code outside is using a different view of the ComponentFactory 
interface from the code inside. Normally you would need to use a system 
packages extra property when launching the inner framework (to share the 
package from outside to inside), or use reflection to access the service 
instance. 
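
A minimal sketch of the first option, sharing the ComponentFactory API package from the outer framework into the embedded one (the package version is illustrative):

Map<String, String> config = new HashMap<>();
config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA,
        "org.osgi.service.component;version=\"1.4.0\"");

FrameworkFactory factory =
        ServiceLoader.load(FrameworkFactory.class).iterator().next();
Framework inner = factory.newFramework(config);
inner.init();
inner.start();
// bundles installed in "inner" now resolve the package from the outer framework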

Best Regards,

Tim

Sent from my iPhone

> On 28 Nov 2018, at 13:11, Thomas Driessen via osgi-dev 
>  wrote:
> 
> Hi,
> 
> I'm trying to setup an OSGi framework instance (Felix) then to install two 
> bundles and finally getting a reference to a ComponentFactory inside the 
> framework via a ServiceListener. No Exceptions are thrown but my 
> ServiceListener ist not getting any events. 
> 
> You can have a look at the code here:
> https://github.com/Sandared/jmh-benchmarks/blob/master/benchmarks/src/main/java/io/jatoms/jmh/OSGiBenchmark2.java
> 
> Or If you want to execute it and play  with it you can run it here:
> https://gitpod.io/#https://github.com/Sandared/jmh-benchmarks/blob/master/benchmarks/src/main/java/io/jatoms/jmh/OSGiBenchmark2.java
> 
> just type the following commands into the terminal:
> cd benchmarks
> mvn clean install
> java -jar target/benchmarks.jar
> 
> I also tried to acquire the service directly by starting the bundles, then 
> wait a second and then calling 
> bundleContext.getServiceReference(ComponentFactory.class). But when I tried 
> to turn this reference into a service via
> 
> ComponentFactory factory = bundleContext.getService(ref);
> 
> I get a ClassCastException stating that Felix SCR's 
> ComponentFactoryImplementation cannot be cast to ComponentFactory... which 
> doesn't make much sense to me, as the ComponentFactoryImplementation 
> implements ComponentFactory.
> 
> Am I doing something fundamentally wrong?
> 
> Kind regards,
> Thomas
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Disposing component instances

2018-11-25 Thread Tim Ward via osgi-dev
>>>>> Just a small note, I should have stated that my worry is about the unget 
>>>>> timing. I obviously have a reference to the object and this won't 
>>>>> disappear by itself, but if that service has other dynamic references 
>>>>> that go away and I keep using the service, I might be in trouble. But I 
>>>>> guess the template that I used already had a bit that issue with the 
>>>>> supplier (which we seldom use).
>>>>> 
>>>>> Alain
>>>>> 
>>>>>> On Thu, Aug 23, 2018 at 7:43 AM Alain Picard  
>>>>>> wrote:
>>>>>> Tim,
>>>>>> 
>>>>>> Based on your referenced javadoc, some more googling, I used and adapted 
>>>>>> from our own current tracker and supplier to create some Prototype 
>>>>>> versions. Tests are showing correct results, but this is not directly 
>>>>>> using the PrototypeServiceFactory, so I would appreciate a very quick 
>>>>>> confirmation that I'm not missing anything.
>>>>>> 
>>>>>> Thanks
>>>>>> 
>>>>>> Alain
>>>>>> 
>>>>>> 
>>>>>>> On Wed, Aug 22, 2018 at 11:54 AM Alain Picard  
>>>>>>> wrote:
>>>>>>> Thanks! I actually saw that being called by ComponentServiceObjects 
>>>>>>> while perusing the code.
>>>>>>> 
>>>>>>> Alain
>>>>>>> 
>>>>>>> 
>>>>>>>> On Wed, Aug 22, 2018 at 11:52 AM Tim Ward  wrote:
>>>>>>>> Registering a prototype service is almost as easy as registering a 
>>>>>>>> singleton service. Instead of registering a single object you register 
>>>>>>>> an instance of PrototypeServiceFactory. This will get called by the 
>>>>>>>> framework to get and release instances as needed.
>>>>>>>> 
>>>>>>>> Tim
>>>>>>>> 
>>>>>>>>> On 22 Aug 2018, at 16:49, Alain Picard  wrote:
>>>>>>>>> 
>>>>>>>>> Tim,
>>>>>>>>> 
>>>>>>>>> This helps quite a bit and clarifies a few points for me. As someone 
>>>>>>>>> who is migrating from a pre-DS environment and dealing with lots of 
>>>>>>>>> legacy, how can prototype scoped services be used outside of DS? That 
>>>>>>>>> would be fantastic. Right now we have a good solution to use 
>>>>>>>>> singleton services outside of DS but not for "factory" type services.
>>>>>>>>> 
>>>>>>>>> Thanks
>>>>>>>>> Alain
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> On Wed, Aug 22, 2018 at 11:27 AM Tim Ward  
>>>>>>>>>> wrote:
>>>>>>>>>> Hi Alain,
>>>>>>>>>> 
>>>>>>>>>> A "Prototype scoped" service is one where the client(s) can request 
>>>>>>>>>> an arbitrary number of instances of the “same” service, whereas a 
>>>>>>>>>> ComponentFactory is a mechanism for the clients to request an 
>>>>>>>>>> arbitrary number of differently configured component instances.
>>>>>>>>>> 
>>>>>>>>>> From the perspective of the component the key difference is that all 
>>>>>>>>>> of the instances of a prototype scoped component have the same 
>>>>>>>>>> component properties, and the instances created by the factory 
>>>>>>>>>> component have the combination of these component properties *plus* 
>>>>>>>>>> the properties passed to the factory.
>>>>>>>>>> 
>>>>>>>>>> In some senses prototype scoped services are better because they:
>>>>>>>>>> 
>>>>>>>>>> Don’t require the service implementation to use DS (they may wish to 
>>>>>>>>>> use something else)
>>>>>>>>>> Will have satisfied references and configurations (component 
>>>>>>>>>> 

Re: [osgi-dev] Disposing component instances

2018-11-25 Thread Tim Ward via osgi-dev
>>> On 22 Aug 2018, at 16:49, Alain Picard <mailto:pic...@castortech.com> wrote:
>>> 
>>> Tim,
>>> 
>>> This helps quite a bit and clarifies a few points for me. As someone who is 
>>> migrating from a pre-DS environment and dealing with lots of legacy, how 
>>> can prototype scoped services be used outside of DS? That would be 
>>> fantastic. Right now we have a good solution to use singleton services 
>>> outside of DS but not for "factory" type services.
>>> 
>>> Thanks
>>> Alain
>>> 
>>> 
>>> On Wed, Aug 22, 2018 at 11:27 AM Tim Ward >> <mailto:tim.w...@paremus.com>> wrote:
>>> Hi Alain,
>>> 
>>> A "Prototype scoped" service is one where the client(s) can request an 
>>> arbitrary number of instances of the “same” service, whereas a 
>>> ComponentFactory is a mechanism for the clients to request an arbitrary 
>>> number of differently configured component instances.
>>> 
>>> From the perspective of the component the key difference is that all of the 
>>> instances of a prototype scoped component have the same component 
>>> properties, and the instances created by the factory component have the 
>>> combination of these component properties *plus* the properties passed to 
>>> the factory.
>>> 
>>> In some senses prototype scoped services are better because they:
>>> 
>>> Don’t require the service implementation to use DS (they may wish to use 
>>> something else)
>>> Will have satisfied references and configurations (component factories can 
>>> be given configuration which invalidates the registration resulting in an 
>>> error)
>>> 
>>> The main reason that you would use a Component Factory rather than a 
>>> prototype scoped service is if you genuinely want to have different 
>>> specialised configurations for each instance, and it doesn’t make sense to 
>>> use a managed service factory (i.e. the customised instances are only 
>>> interesting to one client, or must not be shared for some reason).
>>> 
>>> If your instances are identically configured (or can be, with an init 
>>> later) then a ComponentServiceObjects getService() call should be all you 
>>> need each time you need a new instance, followed by a call to 
>>> ungetService() later when you’re done with it.
>>> 
>>> Tim
>>> 
>>>> On 22 Aug 2018, at 12:06, Alain Picard >>> <mailto:pic...@castortech.com>> wrote:
>>>> 
>>>> On the 2nd part of the question regarding 
>>>> ComponentFactory/ComponentInstance vs Prototype/ComponentServiceObjects. I 
>>>> get the feeling that CSO should be favored, but I saw an old post from 
>>>> Scott Lewis about configuration and that is a bit close to some of my use 
>>>> cases.
>>>> 
>>>> I have cases where I have a Factory component that delivers instances and 
>>>> calls an init method to configure the component, or might sometimes return 
>>>> an existing matching one that is already cached (like per data connection 
>>>> instances). With ComponentFactory I can create a new instance, call init 
>>>> on the new instance and return the ComponentInstance. The caller can then 
>>>> call getInstance and call dispose when done. I struggle to find a 
>>>> correct/easy way to do this with CSO. Am I using the best approach or not?
>>>> 
>>>> Thanks
>>>> Alain
>>>> 
>>>> 
>>>> On Wed, Aug 22, 2018 at 3:46 AM Tim Ward via osgi-dev 
>>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>>> 
>>>> 
>>>>> On 21 Aug 2018, at 20:53, Paul F Fraser via osgi-dev 
>>>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>>>> 
>>>>> On 22/08/2018 5:40 AM, Paul F Fraser via osgi-dev wrote:
>>>>>> On 21/08/2018 10:00 PM, Tim Ward via osgi-dev wrote:
>>>>>>> Have you looked at what the OSC project does? It uses Vaadin, and uses 
>>>>>>> the ViewProvider interface to provide view instances. These 
>>>>>>> automatically have a detach listener added on creation so that they get 
>>>>>>> correctly disposed when their parent container is closed.
>>>>>>> 
>>>>>>> See 
>>>>>>> https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440

Re: [osgi-dev] Circular references with Factory component

2018-11-23 Thread Tim Ward via osgi-dev
The critical part of the spec is available at 
https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-factorycomponent
 

 - the key words are 

"SCR must register a Component Factory service on behalf of the component as 
soon as the component factory is satisfied." 

The Component Factory can’t be satisfied until its reference Y is satisfied, 
which in turn can’t be registered until its mandatory dependency X is 
satisfied. As we can see X won’t be registered until the ComponentFactory 
reference is satisfied. This prevents the whole thing from starting.

You need to break the cycle for this to work. Either you need to change the 
composition of your services, or you need to use Dynamic/Optional for one of 
the references.
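
A minimal sketch of the second option - making Y’s reference to X optional and dynamic, so that Y can be satisfied (and therefore the Component Factory registered) before an X is available:

@Component
public class YImpl implements Y {

    // optional + dynamic breaks the cycle; the field may be null until X appears
    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
               policy = ReferencePolicy.DYNAMIC)
    private volatile X x;
}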

I hope this helps,

Tim



> On 23 Nov 2018, at 07:52, Alain Picard via osgi-dev  
> wrote:
> 
> Been running into an issue with circular references dealing with 
> ComponentFactory and I'm a bit confused.
> 
> I have:
> @Component
> class A implements X {
> @Reference(target = CoreDeleteEObjects.CONFIG_TARGET)
> private ComponentFactory coreDeleteFactory;
> 
> }
> 
> and the factory component matching my target has a reference to Y which 
> itself has a reference to X. Granted this fails with "standard" references. 
> But here I was under the impression that the referenced service was the 
> ComponentFactory and that the service would get resolved at startup. Testing 
> indicates that I'm wrong.
> 
> What part of the puzzle am I missing? 
> 
> Thanks
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] osgi and timer task

2018-11-19 Thread Tim Ward via osgi-dev
If you are asking “I observe this behaviour, is it correct?” then this sounds 
like a well written bundle with good behaviour. Threads started by a bundle 
when it starts should be stopped when it stops.  Similarly sockets opened, or 
other resources claimed, should be closed/released. 

If you are asking “Will OSGi automatically ensure that this behaviour occurs?” 
then no. As always you the programmer are responsible for cleaning up after 
yourself.
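
A minimal sketch of what such a well behaved bundle might look like with DS (a plain BundleActivator cancelling the timer in stop() works just as well):

@Component
public class TimerOwner {

    private Timer timer;

    @Activate
    void activate() {
        timer = new Timer("my-bundle-timer");
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                // periodic work
            }
        }, 0, 1000);
    }

    @Deactivate
    void deactivate() {
        // the bundle is responsible for stopping the thread it started
        timer.cancel();
    }
}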

Best Regards,

Tim

Sent from my iPhone

> On 19 Nov 2018, at 17:05, Ali Parmaksız via osgi-dev  
> wrote:
> 
> Hi all,
> 
> I have an osgi bundle. And this bundle starts a timer. If i stop the bundle, 
> timer stops too?
> 
> -- 
> Ali Parmaksız
> 
> TEL:5052555693
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Push Stream reset/flush howto

2018-11-15 Thread Tim Ward via osgi-dev
> So are you proposing something like process an event and decide that you will 
> have to re-query, fails the promise with a “CacheInvalidatedException” and 
> then in the recovery function perform the re-query and then just keep 
> processing events on the stream?

Technically it would be a new PushStream operating on the results of the 
re-query, but yes. The result would then be folded into the Promise chained 
from the recover() call.

Just to be extra brain-bending, the recovery function may also apply itself to 
the Promise it returns, that way you’ll continuously retry whenever the 
“CacheInvalidatedException" occurs.

For example:

public <T> Promise<T> recoveryFunction(Promise<?> p) {
    Throwable t;
    try {
        t = p.getFailure();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null; // do not recover
    }

    if (t instanceof CacheInvalidatedException) {
        Promise<T> newValue = reQuery();
        // This will repeatedly retry every time you get a Cache invalidation
        return newValue.recoverWith(this::recoveryFunction);
    }
    return null; // returning null keeps the original failure
}


Tim

> On 15 Nov 2018, at 12:43, Alain Picard  wrote:
> 
> On Thu, Nov 15, 2018 at 7:25 AM Tim Ward  > wrote:
>>  My expectation is that after the "flush error event", I can again accept 
>> new published event and process those until I get another case where the 
>> cached information is invalidated (i.e. the notification event changes the 
>> result set and there is no way to simply update the cache and we have to 
>> re-run the query).
> 
> So you can’t do this with the same PushStream instance. This is because the 
> Pushstream instance is terminated after a terminal event (which an error 
> event is). This does, however, give you the opportunity to re-run the query.
> Got that, and why I was confused. 
> 
> One way to achieve this is by registering a recovery function with the 
> terminal promise from the PushStream. If the Promise fails with a 
> “CacheInvalidatedException” you can re-run the same PushStream flow again 
> (and again…) until the cache is no longer updated.
> So are you proposing something like process an event and decide that you will 
> have to re-query, fails the promise with a “CacheInvalidatedException” and 
> then in the recovery function perform the re-query and then just keep 
> processing events on the stream?
> 
> Does that make sense?
> 
> Tim
> 
>> On 15 Nov 2018, at 11:22, Alain Picard > > wrote:
>> 
>> Tim,
>> 
>> One thing that I'm unsure about your suggestion. My expectation is that 
>> after the "flush error event", I can again accept new published event and 
>> process those until I get another case where the cached information is 
>> invalidated (i.e. the notification event changes the result set and there is 
>> no way to simply update the cache and we have to re-run the query). I am 
>> unclear as to how this would happen.
>> 
>> 
>> Thanks
>> Alain
>> 
>> 
>> On Thu, Nov 15, 2018 at 5:52 AM Tim Ward > > wrote:
>> The correct option will depend on what you want to happen. If you use an 
>> endOfStream() or close() operation then you are telling your push stream 
>> that the data has reached a “natural end”. This will cause the promise at 
>> the end of your stream to resolve normally. This may be the right thing in 
>> some cases.
>> 
>> In the case that you describe I agree that endOfStream() and close() don’t 
>> seem like the correct approach. The data hasn’t reached a natural 
>> conclusion, and in fact the processing so far has been invalidated. This is 
>> the perfect opportunity to send an error event! You can send an Exception 
>> indicating that the result set was not completely processed, and potentially 
>> has been invalidated. The default behaviour of the push stream will then be 
>> to fail the pipeline, however a terminal “forEachEvent” handler could still 
>> choose to do something useful with the information. For example it might 
>> choose to trigger recreation of the stream against the updated dataset!
>> 
>> I hope this helps,
>> 
>> Tim
>> 
>> > On 15 Nov 2018, at 09:52, Alain Picard via osgi-dev 
>> > mailto:osgi-dev@mail.osgi.org>> wrote:
>> > 
>> > We are using a push stream to process data change notifications against a 
>> > cached result set. Some of those notifications can result in directly 
>> > applying updates to the result set, while other will force us to 
>> > invalidate the cached result set.
>> > 
>> > When we do a requery, we want to make sure that any subsequent event sent 
>> > to the push stream can be cleared and ignore. Looking at endOfStream() or 
>> > close() doesn't seem to be the way to go. Only solution for now is to 
>> > switch to a new stream, but wondering if that is the right way to do it.
>> > 
>> > Regards,
>> > Alain
>> > ___
>> > OSGi Developer Mail List
>> > osgi-dev@mail.osgi.org 
>> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> > 
>> 

Re: [osgi-dev] Push Stream reset/flush howto

2018-11-15 Thread Tim Ward via osgi-dev
>  My expectation is that after the "flush error event", I can again accept new 
> published event and process those until I get another case where the cached 
> information is invalidated (i.e. the notification event changes the result 
> set and there is no way to simply update the cache and we have to re-run the 
> query).

So you can’t do this with the same PushStream instance. This is because the 
Pushstream instance is terminated after a terminal event (which an error event 
is). This does, however, give you the opportunity to re-run the query.

One way to achieve this is by registering a recovery function with the terminal 
promise from the PushStream. If the Promise fails with a 
“CacheInvalidatedException” you can re-run the same PushStream flow again (and 
again…) until the cache is no longer updated.

Does that make sense?

Tim

> On 15 Nov 2018, at 11:22, Alain Picard  wrote:
> 
> Tim,
> 
> One thing that I'm unsure about your suggestion. My expectation is that after 
> the "flush error event", I can again accept new published event and process 
> those until I get another case where the cached information is invalidated 
> (i.e. the notification event changes the result set and there is no way to 
> simply update the cache and we have to re-run the query). I am unclear as to 
> how this would happen.
> 
> 
> Thanks
> Alain
> 
> 
> On Thu, Nov 15, 2018 at 5:52 AM Tim Ward  > wrote:
> The correct option will depend on what you want to happen. If you use an 
> endOfStream() or close() operation then you are telling your push stream that 
> the data has reached a “natural end”. This will cause the promise at the end 
> of your stream to resolve normally. This may be the right thing in some cases.
> 
> In the case that you describe I agree that endOfStream() and close() don’t 
> seem like the correct approach. The data hasn’t reached a natural conclusion, 
> and in fact the processing so far has been invalidated. This is the perfect 
> opportunity to send an error event! You can send an Exception indicating that 
> the result set was not completely processed, and potentially has been 
> invalidated. The default behaviour of the push stream will then be to fail 
> the pipeline, however a terminal “forEachEvent” handler could still choose to 
> do something useful with the information. For example it might choose to 
> trigger recreation of the stream against the updated dataset!
> 
> I hope this helps,
> 
> Tim
> 
> > On 15 Nov 2018, at 09:52, Alain Picard via osgi-dev  > > wrote:
> > 
> > We are using a push stream to process data change notifications against a 
> > cached result set. Some of those notifications can result in directly 
> > applying updates to the result set, while other will force us to invalidate 
> > the cached result set.
> > 
> > When we do a requery, we want to make sure that any subsequent event sent 
> > to the push stream can be cleared and ignore. Looking at endOfStream() or 
> > close() doesn't seem to be the way to go. Only solution for now is to 
> > switch to a new stream, but wondering if that is the right way to do it.
> > 
> > Regards,
> > Alain
> > ___
> > OSGi Developer Mail List
> > osgi-dev@mail.osgi.org 
> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
> > 
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Push Stream reset/flush howto

2018-11-15 Thread Tim Ward via osgi-dev
The correct option will depend on what you want to happen. If you use an 
endOfStream() or close() operation then you are telling your push stream that 
the data has reached a “natural end”. This will cause the promise at the end of 
your stream to resolve normally. This may be the right thing in some cases.

In the case that you describe I agree that endOfStream() and close() don’t seem 
like the correct approach. The data hasn’t reached a natural conclusion, and in 
fact the processing so far has been invalidated. This is the perfect 
opportunity to send an error event! You can send an Exception indicating that 
the result set was not completely processed, and potentially has been 
invalidated. The default behaviour of the push stream will then be to fail the 
pipeline, however a terminal “forEachEvent” handler could still choose to do 
something useful with the information. For example it might choose to trigger 
recreation of the stream against the updated dataset!
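
A minimal sketch of this pattern, assuming the CacheInvalidatedException and process() helper are your own types:

// publisher side, when the cached result set is invalidated
ses.error(new CacheInvalidatedException("result set invalidated"));

// consumer side, using a terminal event handler that still sees the error
psp.createStream(ses).forEachEvent(event -> {
    switch (event.getType()) {
        case DATA:
            process(event.getData());
            return 0; // no back pressure
        case ERROR:
            if (event.getFailure() instanceof CacheInvalidatedException) {
                // e.g. re-query and create a new stream over the updated data
            }
            return -1;
        default: // CLOSE
            return -1;
    }
});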

I hope this helps,

Tim

> On 15 Nov 2018, at 09:52, Alain Picard via osgi-dev  
> wrote:
> 
> We are using a push stream to process data change notifications against a 
> cached result set. Some of those notifications can result in directly 
> applying updates to the result set, while other will force us to invalidate 
> the cached result set.
> 
> When we do a requery, we want to make sure that any subsequent event sent to 
> the push stream can be cleared and ignore. Looking at endOfStream() or 
> close() doesn't seem to be the way to go. Only solution for now is to switch 
> to a new stream, but wondering if that is the right way to do it.
> 
> Regards,
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Circular reference with Prototype components

2018-11-14 Thread Tim Ward via osgi-dev
Hi Alain,

I note that you don’t include the setter code, nor the activation code, but at 
a guess in one of these two places you are calling getService on the 
ComponentServiceObjects. This will in turn cause the ZKRenderer service 
instance to be created. If this instance also has an injected reference to the 
current service (i.e. the one you’re asking about in this email) then that’s a 
cycle. In more complex scenarios there may be other services in between, but 
fundamentally it’s A -> B -> A. The way around this is to either:

Remove the cycle entirely
Make it so that the dependency in one part can be optionally satisfied later
Avoid getting an instance from the ComponentServiceObjects until *after* your 
activate is called (sketched below).
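
A minimal sketch of that third option - keep hold of the ComponentServiceObjects from the bind and only call getService() on demand, after activation (the ZKRenderer names follow the code quoted below):

@Reference(cardinality = ReferenceCardinality.MULTIPLE,
           policy = ReferencePolicy.DYNAMIC,
           scope = ReferenceScope.PROTOTYPE_REQUIRED,
           target = ZKRenderer.CONFIG_TARGET)
private final List<ComponentServiceObjects<ZKRenderer>> factories =
        new CopyOnWriteArrayList<>();

public ZKRenderer newRenderer() {
    // called on demand, well after activate(), so no cycle during activation
    ComponentServiceObjects<ZKRenderer> cso = factories.get(0);
    return cso.getService(); // remember to ungetService() when finished
}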

I hope this makes sense,

Tim


> On 14 Nov 2018, at 13:59, Alain Picard via osgi-dev  
> wrote:
> 
> Facing an issue here. I have a component registering service as such:
> @Reference(
> cardinality=ReferenceCardinality.MULTIPLE, 
> policy=ReferencePolicy.DYNAMIC, 
> scope=PROTOTYPE_REQUIRED, 
> target=ZKRenderer.CONFIG_TARGET
> )
> private void addRenderer(ComponentServiceObjects<ZKRenderer> factory, 
> Map<String, Object> props) { ...}
> 
> where target is: "(|(iris.zkRenderer.dynamicTester=*) 
> (iris.zkRenderer.staticTester.element=*))"
> to capture only instances that have at least one of those properties.
> 
> I also have other component that defines services under a subinterface of 
> ZKRenderer, but they don't match the filter.
> 
> But when they get instantiated via componentServiceObject.getService(); it 
> leads to: 
> !MESSAGE Circular reference detected trying to get service 
> {com.castortech.iris.ecp.view.spi.core.zk.ZKRendererFactory}={service.id 
> =502, service.bundleid=418, service.scope=bundle, 
> component.name 
> =com.castortech.iris.ecp.view.internal.zk.ZKRendererFactoryImpl,
>  iris.zkRenderer.debug=false, component.id =1066}
>  stack of references: 
> ServiceReference: (5 or 6 of those)
> 
> debugging I'm finding that it goes through the DependencyManager and seems to 
> want to register the service against the above service reference. I thought 
> that it was due to the fact that this was a subinterface and added the 
> filter, but it doesn't seem to change anything and I'm getting at a lost to 
> figure this one out.
> 
> Thanks
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Get Bundle for package and version

2018-11-12 Thread Tim Ward via osgi-dev
This all sounds horrible - why do you have a whiteboard that is making 
reflective calls on remote service objects? If you *really* want to make this 
work then your best option would be to have the whiteboard implementation 
listen for remote service discovery events and to create a local bundle on the 
fly to consume each remote service. You can set the package imports correctly 
for each bundle, and have it expose the service object through another service 
inside the bundle.

It’s all pretty hacky though. Does your whiteboard really need the service 
instance (rather than just the service properties)? If the answer is yes then 
you really shouldn’t be using reflection to call it.

Tim

> On 12 Nov 2018, at 14:34, Jürgen Albert via osgi-dev  
> wrote:
> 
> I needed to load the class first, because I only know its name, package and 
> version. The RSA would load the class and it would ask my bundlecontext, to 
> do it for him... Dynamic-ImportPackage did the trick here. I would prefere to 
> establish a new bundle wiring on the fly, because I would know what package 
> in what version I would need.
> 
> Am 12/11/2018 um 14:46 schrieb Thomas Watson:
>> Can you explain why you need to know the exporter in this case?  It sounds 
>> like you have to be using reflection to call the service, in which case I 
>> would not think the wiring mattered.  If you need to know exactly the 
>> exporting bundle of the API then one approach would be to get the service 
>> object and interrogate its class/interface hierarchy until you find the API 
>> class name you are looking for and then use FrameworkUtil.getBundle(Class) 
>> method.
>> 
>> Tom
>>  
>>  
>>  
>> - Original message -
>> From: "Jürgen Albert via osgi-dev"  
>> 
>> Sent by: osgi-dev-boun...@mail.osgi.org 
>> 
>> To: osgi-dev@mail.osgi.org 
>> Cc:
>> Subject: [osgi-dev] Get Bundle for package and version
>> Date: Mon, Nov 12, 2018 6:58 AM
>>  
>> Hi,
>> 
>> I have a whiteboard, that needs to consume a remote service. Neither the
>> RSA that registers the service proxy nor my whiteboard that needs to
>> work with the interface has a wiring to the API Bundle. What the RSA
>> does tell me, is the packages and the versions of the API Interfaces.
>> Now I'm looking for a way to ge the appropriate bundle wiring. I know, I
>> can iterate over all available Bundles, and look if one exports the
>> package in the given version, but AFAIK I can't be sure if this is the
>> right package, in case it was offered by multiple bundles.
>> 
>> What would be the right approach here?
>> 
>> Thx,
>> 
>> Jürgen.
>> 
>> --
>> Jürgen Albert
>> Geschäftsführer
>> 
>> Data In Motion Consulting GmbH
>> 
>> Kahlaische Str. 4
>> 07745 Jena
>> 
>> Mobil:  0157-72521634
>> E-Mail: j.alb...@datainmotion.de 
>> Web: www.datainmotion.de 
>> 
>> XING:   https://www.xing.com/profile/Juergen_Albert5 
>> 
>> 
>> Rechtliches
>> 
>> Jena HBR 513025
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
>>  
>> 
> 
> -- 
> Jürgen Albert
> Geschäftsführer
> 
> Data In Motion Consulting GmbH
> 
> Kahlaische Str. 4
> 07745 Jena
> 
> Mobil:  0157-72521634
> E-Mail: j.alb...@datainmotion.de 
> Web: www.datainmotion.de 
> 
> XING:   https://www.xing.com/profile/Juergen_Albert5 
> 
> 
> Rechtliches
> 
> Jena HBR 513025
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Is it possible to use an interface type as the config argument of an activate method ?

2018-11-07 Thread Tim Ward via osgi-dev
The @AttributeDefinition is a build time annotation, not a runtime annotation. 
There is therefore no visibility of the annotation values for DS to use at 
runtime. This also prevents your bundle from being coupled to the meta type API 
just because it uses the annotations.

Best Regards,

Tim

> On 7 Nov 2018, at 11:10, João Assunção via osgi-dev  
> wrote:
> 
> Hi Carsten,
> 
> Thank you for the clarification and the tip.
> I assumed the framework would use the defaults provided in 
> @AttributeDefinition when using an interface for configuration.
> 
> Regards,
> João Assunção
> 
> Email: joao.assun...@exploitsys.com 
> Mobile: +351 916968984
> Phone: +351 211933149
> Web: www.exploitsys.com 
> 
> 
> 
> 
> On Wed, Nov 7, 2018 at 11:00 AM Carsten Ziegeler  > wrote:
> Hi,
> 
> no interfaces are not supported for configuration, only annotations. 
> Main reason is the support of default values for configuration properties.
> 
> But you can pass in more arguments into the activate method, so instead 
> of having a base interface C and lets say two configuration interface C1 
> and C2 inheriting from C, you specify three annotations C, C1 and C2 
> where C1 and C2 only have the additional properties.
> In your activate method you can then have two arguments C and C1 for one 
> component and C and C2 for the other component.
> 
> Regards
> Carsten
> 
> Am 07.11.2018 um 11:53 schrieb João Assunção via osgi-dev:
> > Hello all,
> > 
> > I have two components where the configuration shares a couple of 
> > attributes. To avoid duplication, and because Java doesn't allow 
> > annotations to be extended, I changed the configuration annotations to 
> > interfaces.
> > When building, bnd-maven-plugin fails with the following error message:
> > 
> > Non annotation argument to lifecycle method with descriptor 
> > 
> > I checked the specs and @ObjectClassDefinition can be applied to an 
> > interface type.
> > 
> > Thank you
> > João
> > 
> > Email: joao.assun...@exploitsys.com  
> > >
> > Mobile: +351 916968984
> > Phone: +351 211933149
> > Web: www.exploitsys.com  
> > >
> > 
> > 
> > 
> > ___
> > OSGi Developer Mail List
> > osgi-dev@mail.osgi.org 
> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
> > 
> > 
> 
> -- 
> Carsten Ziegeler
> Adobe Research Switzerland
> cziege...@apache.org 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Converting CompletableFuture to Promise

2018-10-29 Thread Tim Ward via osgi-dev
Hi Alain,

Unsurprisingly this isn’t very hard to do:

private PromiseFactory pf;

public <T> Promise<T> toPromise(CompletionStage<T> cs) {
    Deferred<T> deferred = pf.deferred();
    cs.whenComplete((r, e) -> {
        if (e == null) {
            deferred.resolve(r);
        } else {
            deferred.fail(e);
        }
    });
    return deferred.getPromise();
}

public <T> Promise<T> toPromise(CompletableFuture<T> cf) {
    if (cf.isDone() && !cf.isCompletedExceptionally()) {
        return pf.resolved(cf.getNow(null));
    } else {
        return toPromise((CompletionStage<T>) cf);
    }
}

Note that the CompletableFuture version is just a way to optimise when the 
Completable Future is already successfully resolved (the API for consuming 
failures is so bad that it’s not worth trying to optimise the already failed 
case).

Best Regards,

Tim

> On 28 Oct 2018, at 15:41, Alain Picard via osgi-dev  
> wrote:
> 
> We are now using Promises all over the place, but we are finding ourselves 
> using a library that uses CompletableFuture and want our service based on 
> that library to convert those futures into promises.
> 
> Has anyone done this before? While I can surely find a way of doing it, I 
> would like to get some best practice advice from the experts.
> 
> Cheers,
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Pushstream Example not compileable

2018-10-26 Thread Tim Ward via osgi-dev
You might find this easier to follow:

public Promise<Void> printEvens() {

    PushStreamProvider psp = new PushStreamProvider();

    SimplePushEventSource<Long> ses = psp.createSimpleEventSource(Long.class);

    // Begin delivery when someone is listening
    ses.connectPromise().then(onConnect(ses));

    // Create a listener which prints out even numbers
    return psp.createStream(ses).
        filter(l -> l % 2L == 0).
        limit(5000L).
        forEach(f -> System.out.println("Consumed event: " + f));
}

private Success<Void, Void> onConnect(SimplePushEventSource<Long> ses) {
    return p -> {
        new Thread(() -> {
            long counter = 0;
            // Keep going as long as someone is listening
            while (ses.isConnected()) {
                ses.publish(++counter);
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                System.out.println("Published: " + counter);
            }
            // Restart delivery when a new listener connects
            ses.connectPromise().then(onConnect(ses));
        }).start();
        return null;
    };
}


> On 26 Oct 2018, at 15:52, stbischof via osgi-dev  
> wrote:
> 
> PushStreamProvider psp = new PushStreamProvider();
> 
> SimplePushEventSource ses = psp.createSimpleEventSource(Long.class))
> 
> Success onConnect = p -> {
> new Thread(() -> {
> long counter = 0;
> // Keep going as long as someone is listening
> while (ses.isConnected()) {
>   ses.publish(++counter);
>   Thread.sleep(100);
>   System.out.println("Published: " + counter);
> }
> // Restart delivery when a new listener connects
> ses.connectPromise().then(onConnect);
>   }).start();
> return null;
>   };
> 
> // Begin delivery when someone is listening
> ses.connectPromise().then(onConnect);
> 
> // Create a listener which prints out even numbers
> psp.createStream(ses).
>   filter(l -> l % 2L == 0).
>   limit(5000L).
> 
>   forEach(f -> System.out.println("Consumed event: " + f));

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Servlet Context in OSGi

2018-10-10 Thread Tim Ward via osgi-dev
It’s also how the Aries JAX-RS whiteboard guarantees separation between the 
various whiteboard applications. Specifically one JAX-RS application shouldn’t 
impact another (for example via session state being visible).

Tim

> On 10 Oct 2018, at 14:34, Raymond Auge via osgi-dev  
> wrote:
> 
> On top of this design we've been able to model full blown WAB support, each 
> WAR gets its own "servlet context" against which every other 
> resource/servlet/filter in the WAR is targeted.
> 
> Another common case is re-use of exactly the same pre-built servlet & filter 
> based features with different configurations, for instance N different JSF 
> applications.
> 
> - Ray
> 
> On Wed, Oct 10, 2018 at 4:04 AM David Leangen via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> 
> Ok, fair enough. Thanks for these thoughts.
> 
> Cheers,
> =David
> 
> 
> 
> > On Oct 10, 2018, at 16:11, Tim Ward  > > wrote:
> > 
> > It also provides a way to have separate user sessions (useful), different 
> > security configurations (useful), management of static resource mappings 
> > (useful), isolation of redirection to named servlets (less useful) and I’m 
> > sure a bunch of other things.
> > 
> > Tim
> > 
> > Sent from my iPhone
> > 
> >> On 9 Oct 2018, at 22:48, David Leangen via osgi-dev 
> >> mailto:osgi-dev@mail.osgi.org>> wrote:
> >> 
> >> 
> >> Hi!
> >> 
> >> From what I understand, ServletContext is not really thought about much in 
> >> a non-OSGi application because there is basically one ServletContext per 
> >> app. I never really gave it much thought before.
> >> 
> >> In OSGi, we have more flexibility.
> >> 
> >> So my question: when should I consider using a ServletContext other than 
> >> the default context? I suspect that it could be useful as a cognitive 
> >> division, but that’s about the only use I can see. And the advantage is 
> >> not that great because users don’t see any difference at all, as far as I 
> >> can tell.
> >> 
> >> 
> >> Any thoughts?
> >> 
> >> 
> >> Cheers,
> >> =David
> >> 
> >> 
> >> ___
> >> OSGi Developer Mail List
> >> osgi-dev@mail.osgi.org 
> >> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> >> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
> 
> -- 
> Raymond Augé  (@rotty3000)
> Senior Software Architect Liferay, Inc.  (@Liferay)
> Board Member & EEG Co-Chair, OSGi Alliance  (@OSGiAlliance)
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Servlet Context in OSGi

2018-10-10 Thread Tim Ward via osgi-dev
It also provides a way to have separate user sessions (useful), different 
security configurations (useful), management of static resource mappings 
(useful), isolation of redirection to named servlets (less useful) and I’m sure 
a bunch of other things.
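
A minimal sketch of setting up a separate context with the Http Whiteboard and targeting a servlet at it (the names and the /app path are illustrative):

@Component(service = ServletContextHelper.class, scope = ServiceScope.BUNDLE)
@HttpWhiteboardContext(name = "app-context", path = "/app")
public class AppContext extends ServletContextHelper {
}

@Component(service = Servlet.class)
@HttpWhiteboardServletPattern("/hello")
@HttpWhiteboardContextSelect("(osgi.http.whiteboard.context.name=app-context)")
public class HelloServlet extends HttpServlet {
    // sessions, security and static resources for this servlet are scoped to /app
}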

Tim

Sent from my iPhone

> On 9 Oct 2018, at 22:48, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> From what I understand, ServletContext is not really thought about much in a 
> non-OSGi application because there is basically one ServletContext per app. I 
> never really gave it much thought before.
> 
> In OSGi, we have more flexibility.
> 
> So my question: when should I consider using a ServletContext other than the 
> default context? I suspect that it could be useful as a cognitive division, 
> but that’s about the only use I can see. And the advantage is not that great 
> because users don’t see any difference at all, as far as I can tell.
> 
> 
> Any thoughts?
> 
> 
> Cheers,
> =David
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

[osgi-dev] OSGi enRoute updates

2018-10-09 Thread Tim Ward via osgi-dev
Hi all,

Just a quick note to let you know that there have been some minor updates to 
the OSGi enRoute archetypes and indexes.

The archetypes now let you select the target Java version for resolving/running 
your application. This makes life easier when using Java 9/10/11.
The RI index has been updated with new releases to pick up bug fixes ahead of 
the final R7 release
We’re now using the 4.1.0-SNAPSHOT versions of the various bnd build plugins. 
This provides full support for DS 1.4, and support for Java 11 as a target Java 
version.

The OSGi enRoute website  has been updated to 
reflect these changes.

Best Regards,

Tim Ward
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Need advice on design pattern using Promises

2018-10-05 Thread Tim Ward via osgi-dev
I will start by saying that the original is a fundamentally bad API design. It 
exposes side effects of the method (namely modifying the passed in Set<Node>) 
which should not ever be part of a sensible API contract. This method could 
return a Set<Node> (or a Promise<Set<Node>>) indicating what it did, but 
actually that doesn’t seem to be needed - there’s really only one node to 
return each time. As a result it’s much cleaner and simpler to do the set 
addition and gathering outside this method.

  // Using a PromiseFactory is better as it gives more
  // control of threading. You’re already using 1.1
  PromiseFactory pf = new PromiseFactory();

  public Promise<Node> handleNodeAddition(Object element) {

    Promise<Node> p;

Node node = diagram.getNode(element);
if (node == null) {
  p = actionProvider.createNode(element, diagram);
} else {
  p = pf.resolved(node);
}

// Using thenAccept means that you return a promise which resolves
// *after* the synchronize. If you use onSuccess then the returned
// promise will resolve *before* the synchronize and you may not
// see the result of the synchronize in some of your other callbacks

return p.thenAccept(actionProvider::synchronize);
  } 

The set gathering should then be done elsewhere, and without side-effects.

  
  public Promise<Set<Node>> doNodeAdditions(List<Object> elements) {

    List<Promise<Node>> promises = new ArrayList<>(elements.size());

    Promise<Node> previous = pf.resolved(null);

for(Object o : elements) {
previous = previous.flatMap(x -> handleNodeAddition(o));
promises.add(previous);
}

return pf.all(promises)
 .map(HashSet::new);

// You could also use this as the promises are all in a chain
//  return previous.map(x -> promises.stream()
//   .map(Promise::getValue)
//   .collect(Collectors.toSet()));
  } 

I hope that this helps

Best Regards,

Tim

> On 5 Oct 2018, at 00:28, Olivier Labrosse via osgi-dev 
>  wrote:
> 
> Hi,
> 
> I'm dealing with code refactoring from a synchronous system to an 
> asynchronous one, using OSGi Promises.  One pattern we have goes as follows:
> 
>   public void handleNodeAddition(Object element, Set<Node> addedNodes) {
> Node node = diagram.getNode(element);
> if (node == null) {
>   node = actionProvider.createNode(element, diagram);
>   addedNodes.add(node);
> }
> actionProvider.synchronize(node);
>   }
> 
> The problem I'm facing is that actionProvider.createNode() now returns a 
> Promise due to asynchronous execution.  This means we can no longer 
> just add nodes to the Set, but not only this, we have to make sure that each 
> createNode() call from this thread happens after the previous one is resolved.
> 
> Would there be a best practice for this kind of process?  If I were to keep 
> the pattern as-is but implement support for asynchronous node creation, 
> here's how I would do it:
> 
>   public void handleNodeAddition(Object element,
>   AtomicReference>> addedNodes) {
> Promises.resolved(diagram.getNode(element))
> .then(existingNode -> {
>   Node node = existingNode.getValue();
>   Promise nodePromise;
>   
>   if (node == null) {
> // Using an AtomicReference so the Promise chain can be updated
> Promise> addedNodesPromise = addedNodesPromiseRef.get();
> 
> nodePromise = addedNodesPromise // wait for previous node to be 
> added
> .then(previousNodeAdded -> actionProvider.createNode(element, 
> diagram))
> .onSuccess(createdNode -> createdNode.setLocation(location));
> 
> addedNodesPromiseRef.set(createNodePromise
> .then(createdNode -> {
>   addedNodesPromise.getValue().add(createdNode.getValue());
>   return addedNodesPromise; // still holds the Set
> })
> );
>   }
>   else {
> nodePromise = Promises.resolved(node);
>   }
>   
>   return nodePromise;
> })
> .onSuccess(nodeToSync -> actionProvider.synchronize(nodeToSync));
>   }
> 
> Any and all advice is much appreciated, thank you!
> 
> -Olivier Labrosse
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] configurationPid vs factory identifier

2018-09-28 Thread Tim Ward via osgi-dev
These are absolutely not the same things! 



You always want this one. In ten years I have used this hundreds of times, and 
I have never used the other setup.

> @Component(
> configurationPid = “com.my.component”)
> @Designate(
> factory = true,
> ocd   = MyConfig.class)
> public class MyComponent
>implements MyComponentInterface
> {….}


 


The factory element of the @Component annotation has nothing to do with factory 
configurations from configuration admin. See the java doc for details. I 
strongly doubt that you really want to declare your component as a factory. 
This is an uncommon way to control the lifecycle of a DS component, and it is 
almost always set in a mistaken attempt to indicate that you want to use the 
multiton pattern. The important thing from the perspective of the multiton 
pattern is your configurationPolicy. 

In summary:

> @Component(
> configurationPid = “com.my.component”)

This means "Use this PID when configuring my component”

> @Component(
> configurationPid = “com.my.component”, configurationPolicy=REQUIRE)

This means “Use this PID when configuring my component, and require 
configuration before it is activated. This also enables the multiton pattern.
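
A minimal sketch of the multiton pattern in use - each factory configuration created for the PID results in another component instance (the PID, properties and configAdmin reference are illustrative):

@Component(configurationPid = "com.my.component",
           configurationPolicy = ConfigurationPolicy.REQUIRE)
public class MyComponent implements MyComponentInterface {….}

// elsewhere, one instance per factory configuration:
Configuration cfg = configAdmin.createFactoryConfiguration("com.my.component", "?");
Hashtable<String, Object> props = new Hashtable<>();
props.put("host", "example.com");
cfg.update(props); // a new MyComponent instance is activated for this configuration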

> @Component(
> factory = “com.my.component”)

This means “Register a DS ComponentFactory service with this factory name so 
that I can use the DS API to create instances programmatically”.
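
For completeness, a sketch of what “using the DS API” looks like for a factory component (the factory name and properties are illustrative; the generics on ComponentFactory need DS 1.4):

@Reference(target = "(component.factory=com.my.component)")
private ComponentFactory<MyComponentInterface> factory;

public void useIt() {
    Hashtable<String, Object> props = new Hashtable<>();
    props.put("key", "value");
    ComponentInstance<MyComponentInterface> instance = factory.newInstance(props);
    try {
        MyComponentInterface svc = instance.getInstance();
        // ... use svc ...
    } finally {
        instance.dispose(); // the caller owns the instance lifecycle
    }
}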

—— 

Now the @Designate annotation is part of the Metatype spec, and it does have an 
interesting interaction with the @Component annotation.

Specifically:

> @Component
> @Designate(
> factory = false, // the default
> ocd   = MyConfig.class)

This component will have a configuration policy of OPTIONAL

> @Component
> @Designate(
> factory = true,
> ocd   = MyConfig.class)


This component will have a configuration policy of REQUIRE.

This is why you can get away without specifying a configuration policy when 
using the @Designate annotation. Again, the java doc for configurationPolicy 
is helpful.

Tim


> On 27 Sep 2018, at 21:49, Leschke, Scott via osgi-dev 
>  wrote:
> 
> Just to be clear, I thought this:
>  
> @Component(
> configurationPid = “com.my.component”)
> @Designate(
> factory = true,
> ocd   = MyConfig.class)
> public class MyComponent
>implements MyComponentInterface
> {….}
>  
> Was equivalent to this:
>  
> @Component(
> factory = “com.my.component”)
> @Designate(
> ocd  = MyConfig.class)
> public class MyComponent
>implements MyComponentInterface
> {….}
>  
> Is that not the case?  It doesn’t appear to be based on my experiment. I 
> didn’t find the API docs helpful on this.
>  
> Regards,
>  
> Scott Leschke
>  
> From: osgi-dev-boun...@mail.osgi.org  On 
> Behalf Of Leschke, Scott via osgi-dev
> Sent: Thursday, September 27, 2018 10:52 AM
> To: osgi-dev@mail.osgi.org
> Subject: [osgi-dev] configurationPid vs factory identifier
>  
> How is the factory Identifier to be used in conjunction with 
> configurationPid.  I thought a factory identifier was to take the place of 
> configurationPid + factory = true but that doesn’t appear to be the case.
>  
> Scott 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Integrating promises

2018-09-19 Thread Tim Ward via osgi-dev
It looks like it should be pretty simple…

Promise myPromise = getPromise();

myPromise.onSuccess(listener::onResponse)
.onFailure(listener::onFailure);
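
Going the other way - turning an ActionListener-style call into a Promise - is just a Deferred. The searchAsync call below is only illustrative of the Elasticsearch client; the bridging pattern is the interesting part:

private final PromiseFactory pf = new PromiseFactory(null);

public Promise<SearchResponse> search(SearchRequest request) {
    Deferred<SearchResponse> deferred = pf.deferred();
    client.searchAsync(request, RequestOptions.DEFAULT,
            new ActionListener<SearchResponse>() {
                @Override
                public void onResponse(SearchResponse response) {
                    deferred.resolve(response);
                }

                @Override
                public void onFailure(Exception e) {
                    deferred.fail(e);
                }
            });
    return deferred.getPromise();
}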

Best Regards,

Tim

> On 19 Sep 2018, at 15:16, Alain Picard via osgi-dev  
> wrote:
> 
> We are using ElasticSearch which provide an async mode that is heavily based 
> on promises, They even provide BiConsumer to integrate with CompletableFuture.
> 
> The interface is ActionListener 
> (https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/action/ActionListener.java
>  
> ).
> 
> What is the best way to tie this is to promises instead, so that we don't 
> have to deal with different mode of handling asynchronous processing, and we 
> are also envisioning the possibility of integrating push streams here as well.
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Retrieving service configuration implementation in reference method

2018-09-03 Thread Tim Ward via osgi-dev


> On 3 Sep 2018, at 12:46, Alain Picard  wrote:
> 
> Tim,
> 
> Regarding point 1, I feel like I dropped the ball and it makes sense. 

Not really, there genuinely is a bug in bnd 4.0, and it was only fixed in the 
4.1 development stream recently (within the last week).

> 
> Regarding point 2, I learned something about the use of converters to manage 
> component property types (and not to use them anywhere). So I read in chapter 
> 707 to find out how to do it and now I have an issue with my Class method, 
> which throws an exception. The priority which is an integer works fine.
> 
> Code:
> @Reference
> private void addRenderer(ConfiguredComponent scc, Map<String, Object> props) {
> this.scc = scc;
> Converter converter = Converters.standardConverter();
> RendererConfig config = 
> converter.convert(props).to(RendererConfig.class);
> 
> System.out.println("Adding renderer with Tester priority:" + 
> config.tester_priority() + "full props" + props);
> System.out.println("Tester:" + config.tester_class());
> }
> 
> And the exception is:
> java.lang.NoClassDefFoundError: comp.property.test.RendererFactory
> at 
> org.osgi.util.converter.ConverterImpl.loadClassUnchecked(ConverterImpl.java:352)
> at org.osgi.util.converter.ConverterImpl$19.apply(ConverterImpl.java:157)
> at org.osgi.util.converter.ConverterImpl$19.apply(ConverterImpl.java:154)
> at org.osgi.util.converter.Rule$1.apply(Rule.java:67)
> at 
> org.osgi.util.converter.CustomConverterImpl$ConvertingWrapper.to(CustomConverterImpl.java:162)
> at 
> org.osgi.util.converter.ConvertingImpl$4.invoke(ConvertingImpl.java:809)
> at com.sun.proxy.$Proxy3.tester_class(Unknown Source)
> at comp.property.test.RendererFactory.addRenderer(RendererFactory.java:23)
> 
> And the "swallowed exception is:
> java.lang.ClassNotFoundException: comp.property.test.RendererFactory not 
> found by org.osgi.util.converter [6]
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1597)
> org.apache.felix.framework.BundleWiringImpl.access$300(BundleWiringImpl.java:79)
> org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1982)
> java.lang.ClassLoader.loadClass(Unknown Source)
> org.osgi.util.converter.ConverterImpl.loadClassUnchecked(ConverterImpl.java:350)
> 
> any idea or is this is bug or a what.

This is not a bug. Classes are “difficult” types as they cannot be easily 
loaded without knowing the module that should provide them. This is one of the 
many reasons that class names are discouraged throughout OSGi. In this case the 
service property that you’re trying to convert is a String class name, and the 
Converter is trying to load that class using its own class loader (it doesn’t 
have any other options). In Java EE they use the Thread Context ClassLoader as 
a hack to get around this, setting that to the “correct” ClassLoader to use 
when trying to load types from the application in server runtime code.

You have several options to work around this.

Your best option is not to use a class name as a service property at all, and 
avoid using the Class object. Reflectively loading classes is the beginning of 
much pain in OSGi, and using a different solution usually results in much 
better overall modularity.
If you really do need access to the Class (why?) then return it from a method 
on the service so that it can be loaded by a bundle that knows how and where to 
load it from (see the sketch below). This still leaves you with potential 
issues, but should be workable
Add a custom rule to your converter instance that uses the “correct” bundle 
class loader to load the class. This may be your own (potential for missing 
imports), the bundle that registered the service (potential for missing uses), 
or some other bundle (good luck finding it)!
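
A minimal sketch of the second option, reusing the ZKRenderer naming from the code quoted below - the providing bundle references the tester class directly, so both the compiler and the resolver see the dependency:

public interface ZKRenderer {
    Class<?> testerClass();
    int testerPriority();
}

@Component(service = ZKRenderer.class)
public class ReferenceTreeRenderer implements ZKRenderer {

    @Override
    public Class<?> testerClass() {
        // loaded by this bundle, which imports/contains the tester package
        return ReferenceTree.class;
    }

    @Override
    public int testerPriority() {
        return 9;
    }
}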

Best Regards,

Tim

> 
> Thanks
> Alain
> 
> 
> On Mon, Sep 3, 2018 at 6:12 AM Tim Ward  > wrote:
> 
> Problem 1:
> 
> The Relevant bitts of the specification are:
> 
> How component properties override each other at runtime:
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-component.properties
>  
> 
> 
> How component properties override each other at build time:
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-ordering.generated.properties
>  
> 
> 
> The sum total of this is that the component properties from the annotation 
> that you’ve applied to your component class should come *after* the ones from 
> the activate method. There was a very recent fix in Bnd 
>  to make sure that this was done 
> correctly.
> 
> 
> 

Re: [osgi-dev] Retrieving service configuration implementation in reference method

2018-09-03 Thread Tim Ward via osgi-dev

Problem 1:

The relevant bits of the specification are:

How component properties override each other at runtime:
https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-component.properties
 


How component properties override each other at build time:
https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-ordering.generated.properties
 


The sum total of this is that the component properties from the annotation that 
you’ve applied to your component class should come *after* the ones from the 
activate method. There was a very recent fix in Bnd 
 to make sure that this was done 
correctly.


Problem 2:

As for your additional issue - A component property type is not a valid input 
for method injection with references. See 
https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-method.injection
 


You can use the OSGi converter to convert an injected map of properties into an 
instance of the annotation if you want.
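
For example (a rough sketch - RendererConfig is a made-up component property 
type, and ZKRenderer is the service interface from this thread):

import java.util.Map;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.util.converter.Converters;

@Component
public class RendererTracker {

    // Made-up annotation describing the service properties we care about
    public @interface RendererConfig {
        String tester() default "";
    }

    @Reference
    void bindRenderer(ZKRenderer renderer, Map<String, ?> properties) {
        // Convert the raw service property map into the annotation type
        RendererConfig config = Converters.standardConverter()
                .convert(properties)
                .to(RendererConfig.class);
        System.out.println("Bound a renderer with tester " + config.tester());
    }
}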

Best Regards,

Tim

> On 25 Aug 2018, at 12:37, Alain Picard  wrote:
> 
> I had an idea to try using @ComponentPropertyType and searched and found that 
> while not specifically covered by the documentation, in the cmpn there is at 
> least one case of a Class property in ExportedService and that the code seems 
> to support it (ComponentPropertyTypeDataCollector#valueToProperty)
> 
> So I went ahead and tried it and it works, but I'm having a more general 
> issue with the ComponentProperty type. I am attaching the test project.
> 
> The encountered issues are twofold. First what is happening is that the 
> values supplied to the annotation are seen in the component, are seen in the 
> reference method of the using service, but not in its activate method, where 
> I get the default values from the annotation. Second, my service that 
> references the one annotated with my component annotation won't start if I 
> use a method signature that references the annotation type instead of using a 
> map. 
> 
> Please enlighten me.
> 
> Alain
> 
> 
> On Fri, Aug 24, 2018 at 5:52 AM Tim Ward  > wrote:
> Right, so in this case it looks like you’re running a whiteboard, is it 
> possible you would be better off not using the service properties for this 
> filtering? For example:
> 
> @Reference(policy=DYNAMIC, cardinality=MULTIPLE)
> private final List<ZKRenderer> renderers = new CopyOnWriteArrayList<>();
> 
> public ZKRenderer getRendererFor(Object o) {
> return renderers.stream()
> .filter(r -> r.supports(o))
> .collect(Collectors.maxBy((a,b) -> 
> a.getPriority(o).compareTo(b.getPriority(o))))
> .orElseThrow(() -> new IllegalArgumentException("No renderer for 
> object " + o));
> }
> 
> Tim
> 
>> On 24 Aug 2018, at 10:34, Alain Picard > > wrote:
>> 
>> They represent classes, which is why I would have like to have a Class 
>> annotation so I could do "tester=MyTester.class". instead of 
>> "tester="com.acme.mypkg.MyTester". 
>> 
>> For example I have a number of components implementing a service and as part 
>> of their property they define their "filter conditions" which are then 
>> passed on to the 3rd party library, and there are 2 types of testers, etc:
>> Component(service=ZKRenderer.class, factory=ZKRenderer.CONFIG_FACTORY,
>>   property= { ZKRenderer.CONFIG_STATIC_TEST + "=c.c.i.tester.ReferenceTree", 
>>   ZKRenderer.CONFIG_STATIC_TEST_PRIORITY + ":Integer=9" })
>> 
>> If I move my ReferenceTree tester in the above case, no compiler would catch 
>> it and I'm just looking for pain in the future. 
>> 
>> I am not sure I grasp your approach. Here clients just ask for a renderer 
>> (an instance of the service) for some "object" that is passed in and an 
>> appropriate and "highest ranking" one is returned. So the client is never 
>> specifying the class string at all. Here we are providing the full class 
>> name so it can be loaded, hence it would be much more natural to provide a 
>> Class object. 
>> 
>> When we have cases where the component and reference must have to match we 
>> do as such:
>> public static final String CONFIG_QUALIFIER = 
>> OsgiConstants.SERVICE_QUALIFIER + "=ReferenceList"; //$NON-NLS-1$
>> public static final String CONFIG_TARGET = "(" + CONFIG_QUALIFIER + ")"; 
>> //$NON-NLS-1$ //$NON-NLS-2$
>> 
>> and here the component use the 1st line in its property and the reference 
>> target uses the 2nd constant and that is not an issue.
>> 
>> Alain
>> 
>> 
>> 
>> Alain Picard
>> Chief Strategy Officer
>> 

Re: [osgi-dev] Retrieving service configuration implementation in reference method

2018-08-24 Thread Tim Ward via osgi-dev
Right, so in this case it looks like you’re running a whiteboard, is it 
possible you would be better off not using the service properties for this 
filtering? For example:

@Reference(policy=DYNAMIC, cardinality=MULTIPLE)
private final List<ZKRenderer> renderers = new CopyOnWriteArrayList<>();

public ZKRenderer getRendererFor(Object o) {
return renderers.stream()
.filter(r -> r.supports(o))
.collect(Collectors.maxBy((a,b) -> 
a.getPriority(o).compareTo(b.getPriority(o))))
.orElseThrow(() -> new IllegalArgumentException("No renderer for object 
" + o));
}

Tim

> On 24 Aug 2018, at 10:34, Alain Picard  wrote:
> 
> They represent classes, which is why I would have like to have a Class 
> annotation so I could do "tester=MyTester.class". instead of 
> "tester="com.acme.mypkg.MyTester". 
> 
> For example I have a number of components implementing a service and as part 
> of their property they define their "filter conditions" which are then passed 
> on to the 3rd party library, and there are 2 types of testers, etc:
> Component(service=ZKRenderer.class, factory=ZKRenderer.CONFIG_FACTORY,
>   property= { ZKRenderer.CONFIG_STATIC_TEST + "=c.c.i.tester.ReferenceTree", 
>   ZKRenderer.CONFIG_STATIC_TEST_PRIORITY + ":Integer=9" })
> 
> If I move my ReferenceTree tester in the above case, no compiler would catch 
> it and I'm just looking for pain in the future. 
> 
> I am not sure I grasp your approach. Here clients just ask for a renderer (an 
> instance of the service) for some "object" that is passed in and an 
> appropriate and "highest ranking" one is returned. So the client is never 
> specifying the class string at all. Here we are providing the full class name 
> so it can be loaded, hence it would be much more natural to provide a Class 
> object. 
> 
> When we have cases where the component and reference must have to match we do 
> as such:
> public static final String CONFIG_QUALIFIER = 
> OsgiConstants.SERVICE_QUALIFIER + "=ReferenceList"; //$NON-NLS-1$
> public static final String CONFIG_TARGET = "(" + CONFIG_QUALIFIER + ")"; 
> //$NON-NLS-1$ //$NON-NLS-2$
> 
> and here the component use the 1st line in its property and the reference 
> target uses the 2nd constant and that is not an issue.
> 
> Alain
> 
> 
> 
> Alain Picard
> Chief Strategy Officer
> Castor Technologies Inc
> o:514-360-7208
> m:813-787-3424
> 
> pic...@castortech.com 
> www.castortech.com 
> 
> On Fri, Aug 24, 2018 at 5:16 AM Tim Ward  > wrote:
> Do these properties “represent” classes or are they actually classes? If they 
> are just representations (which would be a good thing) then you can define a 
> static string constant representing the class which is mapped internally to 
> the correct class name (which can then change over time). Clients then filter 
> based on the string representation which will not change.
> 
> Tim
> 
> 
>> On 24 Aug 2018, at 10:07, Alain Picard > > wrote:
>> 
>> Tim & all,
>> 
>> My immediate use case is that my components have some properties and some of 
>> those represent classes (this interfaces with 3rd party libraries, I would 
>> probably design it differently if I could, but it has to be configuration as 
>> it is used to determine if the component is a match, much like for target 
>> filters). Properties in the component annotation are String[] and that 
>> forces the specification of classes as String which is very bad since if the 
>> class is moved, renamed, deleted, etc, it will cause no error or warning and 
>> blow up later on. And since annotations only support compile time constants, 
>> you can't do a MyClass.class.getName() to even get a String. My idea was 
>> since the implementation class is part of the component description, if I 
>> could get a hold of it, to have a static method in the class to provide this 
>> "constant".
>> 
>> How can I work around the limitations of Properties as String and Java 
>> compile time constants. Am I stuck to introduce a new separate annotation to 
>> track this configuration?
>> 
>> Alain
>> 
>> Alain
>> 
>> 
>> On Thu, Aug 23, 2018 at 5:24 AM Tim Ward > > wrote:
>> The properties visible in the Map (or ServiceReference) are the service 
>> properties. There is some overlap with configuration (services that are 
>> configurable are encouraged to publish configuration properties as service 
>> properties) but they are separate, and can be different.
>> 
>> The only way that something becomes a service property is if it is 
>> deliberately registered as such or, for a few specific properties such as 
>> service.id  and service.scope, added automatically by 
>> the framework. 
>> 
>> The class name of the implementation would only be added as a service 
>> property if done so deliberately, and this is typically discouraged (it 
>> leaks internal implementation detail and forces your internal naming to become API).

Re: [osgi-dev] Retrieving service configuration implementation in reference method

2018-08-24 Thread Tim Ward via osgi-dev
Do these properties “represent” classes or are they actually classes? If they 
are just representations (which would be a good thing) then you can define a 
static string constant representing the class which is mapped internally to the 
correct class name (which can then change over time). Clients then filter based 
on the string representation which will not change.
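
A rough sketch of that indirection (all of the names here are invented):

import java.util.HashMap;
import java.util.Map;

// Public API: a stable identifier that clients can safely use in
// service properties and target filters.
interface ZKRenderer {
    String TESTER_REFERENCE_TREE = "tester.reference.tree";
}

// Placeholder for the real tester class, which is now free to move or be renamed.
class ReferenceTreeTester { }

// Internal to the implementing bundle: maps the public identifier to the class.
class TesterRegistry {
    static final Map<String, Class<?>> TESTERS = new HashMap<>();
    static {
        TESTERS.put(ZKRenderer.TESTER_REFERENCE_TREE, ReferenceTreeTester.class);
    }
}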

Tim


> On 24 Aug 2018, at 10:07, Alain Picard  wrote:
> 
> Tim & all,
> 
> My immediate use case is that my components have some properties and some of 
> those represent classes (this interfaces with 3rd party libraries, I would 
> probably design it differently if I could, but it has to be configuration as 
> it is used to determine if the component is a match, much like for target 
> filters). Properties in the component annotation are String[] and that forces 
> the specification of classes as String which is very bad since if the class 
> is moved, renamed, deleted, etc, it will cause no error or warning and blow 
> up later on. And since annotations only support compile time constants, you 
> can't do a MyClass.class.getName() to even get a String. My idea was since 
> the implementation class is part of the component description, if I could get 
> a hold of it, to have a static method in the class to provide this "constant".
> 
> How can I work around the limitations of Properties as String and Java 
> compile time constants. Am I stuck to introduce a new separate annotation to 
> track this configuration?
> 
> Alain
> 
> Alain
> 
> 
> On Thu, Aug 23, 2018 at 5:24 AM Tim Ward  > wrote:
> The properties visible in the Map (or ServiceReference) are the service 
> properties. There is some overlap with configuration (services that are 
> configurable are encouraged to publish configuration properties as service 
> properties) but they are separate, and can be different.
> 
> The only way that something becomes a service property is if it is 
> deliberately registered as such or, for a few specific properties such as 
> service.id  and service.scope, added automatically by the 
> framework. 
> 
> The class name of the implementation would only be added as a service 
> property if done so deliberately, and this is typically discouraged (it leaks 
> internal implementation detail and forces your internal naming to become 
> API). If you *really* care about the details of a service (and in general you 
> shouldn’t) then you should mark it with a service property that you can 
> recognise. Ideally one that is separate from the other implementation details 
> of the service.
> 
> Best Regards,
> 
> Tim
> 
> > On 22 Aug 2018, at 16:53, Alain Picard via osgi-dev  > > wrote:
> > 
> > In a reference method, i can get the property configuration of the service 
> > along with the ComponentFactory and some other optional arguments. Can any 
> > of those give me a way to retrieve the implementation from the 
> > configuration (i.e. the class name of the implementation) ?
> > 
> > Thanks
> > Alain
> > 
> > ___
> > OSGi Developer Mail List
> > osgi-dev@mail.osgi.org 
> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
> > 
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Disposing component instances

2018-08-23 Thread Tim Ward via osgi-dev
>> On 22 Aug 2018, at 12:06, Alain Picard <pic...@castortech.com> wrote:
>>> 
>>> On the 2nd part of the question regarding 
>>> ComponentFactory/ComponentInstance vs Prototype/ComponentServiceObjects. I 
>>> get the feeling that CSO should be favored, but I saw an old post from 
>>> Scott Lewis about configuration and that is a bit close to some of my use 
>>> cases.
>>> 
>>> I have cases where I have a Factory component that delivers instances and 
>>> calls an init method to configure the component, or might sometimes return 
>>> an existing matching one that is already cached (like per data connection 
>>> instances). With ComponentFactory I can create a new instance, call init on 
>>> the new instance and return the ComponentInstance. The caller can then call 
>>> getInstance and call dispose when done. I struggle to find a correct/easy 
>>> way to do this with CSO. Am I using the best approach or not?
>>> 
>>> Thanks
>>> Alain
>>> 
>>> 
>>> On Wed, Aug 22, 2018 at 3:46 AM Tim Ward via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> 
>>>> On 21 Aug 2018, at 20:53, Paul F Fraser via osgi-dev 
>>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>>> 
>>>> On 22/08/2018 5:40 AM, Paul F Fraser via osgi-dev wrote:
>>>>> On 21/08/2018 10:00 PM, Tim Ward via osgi-dev wrote:
>>>>>> Have you looked at what the OSC project does? It uses Vaadin, and uses 
>>>>>> the ViewProvider interface to provide view instances. These 
>>>>>> automatically have a detach listener added on creation so that they get 
>>>>>> correctly disposed when their parent container is closed.
>>>>>> 
>>>>>> See 
>>>>>> https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67
>>>>>>  
>>>>>> <https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67>
>>>>>>  for details.
>>>>>> 
>>>>>> Tim
>>>>> 
>>>>> Hi Tim,
>>>>> The R7 Spec 112.3.6 states that "SCR must unget any unreleased service 
>>>>> objects" and it sounds to me that the system is supposed to clean itself 
>>>>> up.
>>>>> What am I missing.
>>>> What am I missing?
>>>> 
>>>> Apart from a question mark.. that is.
>>> 
>>> Hi Paul,
>>> 
>>> You are correct in your interpretation of the specification, however…
>>> 
>>> This only happens if you use ComponentServiceObjects, not ServiceObjects 
>>> (which is why this type was added to the DS spec). If you use 
>>> ServiceObjects directly then SCR cannot reference count them and cannot 
>>> help you.
>>> The “leaked” instances are only cleaned up when your component is disposed 
>>> by SCR (for example if it becomes unsatisfied).
>>> 
>>> In this case we *are* using ComponentServiceObjects (good) but we need to 
>>> dispose of the referenced instance when the UI view is closed.
>>> 
>>> If we left it up to SCR to clean up, and our component wasn’t 
>>> deactivated/disposed between UI sessions then we would have a memory leak. 
>>> In general when you use ComponentServiceObjects you should think about the 
>>> lifecycle of the objects you create, and how they are going to be released. 
>>> In this case the component may get an arbitrarily large (and increasing) 
>>> number of instances over time, so it must also dispose of them. If the 
>>> example just grabbed 2 (or 5, or 10) instances at activation and used them 
>>> until deactivation then it would not be necessary to release them (SCR 
>>> would do it for us).
>>> 
>>> I hope that this makes sense,
>>> 
>>> Tim
>>> 
>>> 
>>>>> 
>>>>> Paul Fraser
>>>>> ___
>>>>> OSGi Developer Mail List
>>>>> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>>>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>>>> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
>>>>> 
>>>> 
>>>> ___
>>>> OSGi Developer Mail List
>>>> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>>> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
> 

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Retrieving service configuration implementation in reference method

2018-08-23 Thread Tim Ward via osgi-dev
The properties visible in the Map (or ServiceReference) are the service 
properties. There is some overlap with configuration (services that are 
configurable are encouraged to publish configuration properties as service 
properties) but they are separate, and can be different.

The only way that something becomes a service property is if it is deliberately 
registered as such or, for a few specific properties such as service.id and 
service.scope, added automatically by the framework. 

The class name of the implementation would only be added as a service property 
if done so deliberately, and this is typically discouraged (it leaks internal 
implementation detail and forces your internal naming to become API). If you 
*really* care about the details of a service (and in general you shouldn’t) 
then you should mark it with a service property that you can recognise. Ideally 
one that is separate from the other implementation details of the service.

Best Regards,

Tim

> On 22 Aug 2018, at 16:53, Alain Picard via osgi-dev  
> wrote:
> 
> In a reference method, i can get the property configuration of the service 
> along with the ComponentFactory and some other optional arguments. Can any of 
> those give me a way to retrieve the implementation from the configuration 
> (i.e. the class name of the implementation) ?
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Disposing component instances

2018-08-22 Thread Tim Ward via osgi-dev
Registering a prototype service is almost as easy as registering a singleton 
service. Instead of registering a single object you register an instance of 
PrototypeServiceFactory 
<https://osgi.org/javadoc/r6/core/org/osgi/framework/PrototypeServiceFactory.html>.
 This will get called by the framework to get and release instances as needed.
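
A minimal sketch (Widget and WidgetImpl are placeholder types):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.PrototypeServiceFactory;
import org.osgi.framework.ServiceRegistration;

public class WidgetRegistrar {

    public interface Widget { }

    static class WidgetImpl implements Widget { }

    public static ServiceRegistration<Widget> register(BundleContext context) {
        PrototypeServiceFactory<Widget> factory = new PrototypeServiceFactory<Widget>() {
            @Override
            public Widget getService(Bundle bundle, ServiceRegistration<Widget> registration) {
                // Called by the framework once per requested instance
                return new WidgetImpl();
            }

            @Override
            public void ungetService(Bundle bundle, ServiceRegistration<Widget> registration,
                    Widget service) {
                // Clean up the released instance here if necessary
            }
        };
        return context.registerService(Widget.class, factory, null);
    }
}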

Tim

> On 22 Aug 2018, at 16:49, Alain Picard  wrote:
> 
> Tim,
> 
> This helps quite a bit and clarifies a few points for me. As someone who is 
> migrating from a pre-DS environment and dealing with lots of legacy, how can 
> prototype scoped services be used outside of DS? That would be fantastic. 
> Right now we have a good solution to use singleton services outside of DS but 
> not for "factory" type services.
> 
> Thanks
> Alain
> 
> 
> On Wed, Aug 22, 2018 at 11:27 AM Tim Ward  <mailto:tim.w...@paremus.com>> wrote:
> Hi Alain,
> 
> A "Prototype scoped" service is one where the client(s) can request an 
> arbitrary number of instances of the “same” service, whereas a 
> ComponentFactory is a mechanism for the clients to request an arbitrary 
> number of differently configured component instances.
> 
> From the perspective of the component the key difference is that all of the 
> instances of a prototype scoped component have the same component properties, 
> and the instances created by the factory component have the combination of 
> these component properties *plus* the properties passed to the factory.
> 
> In some senses prototype scoped services are better because they:
> 
> Don’t require the service implementation to use DS (they may wish to use 
> something else)
> Will have satisfied references and configurations (component factories can be 
> given configuration which invalidates the registration resulting in an error)
> 
> The main reason that you would use a Component Factory rather than a 
> prototype scoped service is if you genuinely want to have different 
> specialised configurations for each instance, and it doesn’t make sense to 
> use a managed service factory (i.e. the customised instances are only 
> interesting to one client, or must not be shared for some reason).
> 
> If your instances are identically configured (or can be, with an init later) 
> then a ComponentServiceObjects getService() call should be all you need each 
> time you need a new instance, followed by a call to ungetService() later when 
> you’re done with it.
> 
> Tim
> 
>> On 22 Aug 2018, at 12:06, Alain Picard > <mailto:pic...@castortech.com>> wrote:
>> 
>> On the 2nd part of the question regarding ComponentFactory/ComponentInstance 
>> vs Prototype/ComponentServiceObjects. I get the feeling that CSO should be 
>> favored, but I saw an old post from Scott Lewis about configuration and that 
>> is a bit close to some of my use cases.
>> 
>> I have cases where I have a Factory component that delivers instances and 
>> calls an init method to configure the component, or might sometimes return 
>> an existing matching one that is already cached (like per data connection 
>> instances). With ComponentFactory I can create a new instance, call init on 
>> the new instance and return the ComponentInstance. The caller can then call 
>> getInstance and call dispose when done. I struggle to find a correct/easy 
>> way to do this with CSO. Am I using the best approach or not?
>> 
>> Thanks
>> Alain
>> 
>> 
>> On Wed, Aug 22, 2018 at 3:46 AM Tim Ward via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>> wrote:
>> 
>> 
>>> On 21 Aug 2018, at 20:53, Paul F Fraser via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> On 22/08/2018 5:40 AM, Paul F Fraser via osgi-dev wrote:
>>>> On 21/08/2018 10:00 PM, Tim Ward via osgi-dev wrote:
>>>>> Have you looked at what the OSC project does? It uses Vaadin, and uses 
>>>>> the ViewProvider interface to provide view instances. These automatically 
>>>>> have a detach listener added on creation so that they get correctly 
>>>>> disposed when their parent container is closed.
>>>>> 
>>>>> See 
>>>>> https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67
>>>>>  
>>>>> <https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67>
>>>>>  for details.
>>>>> 

Re: [osgi-dev] Disposing component instances

2018-08-22 Thread Tim Ward via osgi-dev
Hi Alain,

A "Prototype scoped" service is one where the client(s) can request an 
arbitrary number of instances of the “same” service, whereas a ComponentFactory 
is a mechanism for the clients to request an arbitrary number of differently 
configured component instances.

From the perspective of the component the key difference is that all of the 
instances of a prototype scoped component have the same component properties, 
and the instances created by the factory component have the combination of 
these component properties *plus* the properties passed to the factory.

In some senses prototype scoped services are better because they:

Don’t require the service implementation to use DS (they may wish to use 
something else)
Will have satisfied references and configurations (component factories can be 
given configuration which invalidates the registration resulting in an error)

The main reason that you would use a Component Factory rather than a prototype 
scoped service is if you genuinely want to have different specialised 
configurations for each instance, and it doesn’t make sense to use a managed 
service factory (i.e. the customised instances are only interesting to one 
client, or must not be shared for some reason).

If your instances are identically configured (or can be, with an init later) 
then a ComponentServiceObjects getService() call should be all you need each 
time you need a new instance, followed by a call to ungetService() later when 
you’re done with it.
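
A minimal sketch of that pattern (Widget stands in for whatever prototype 
scoped service is being consumed):

import org.osgi.service.component.ComponentServiceObjects;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceScope;

interface Widget { }

@Component(service = WidgetUser.class)
public class WidgetUser {

    // PROTOTYPE_REQUIRED means every getService() call yields a new instance
    @Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
    private ComponentServiceObjects<Widget> widgets;

    public Widget newWidget() {
        return widgets.getService();
    }

    public void releaseWidget(Widget widget) {
        // Release each instance when you are finished with it
        widgets.ungetService(widget);
    }
}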

Tim

> On 22 Aug 2018, at 12:06, Alain Picard  wrote:
> 
> On the 2nd part of the question regarding ComponentFactory/ComponentInstance 
> vs Prototype/ComponentServiceObjects. I get the feeling that CSO should be 
> favored, but I saw an old post from Scott Lewis about configuration and that 
> is a bit close to some of my use cases.
> 
> I have cases where I have a Factory component that delivers instances and 
> calls an init method to configure the component, or might sometimes return an 
> existing matching one that is already cached (like per data connection 
> instances). With ComponentFactory I can create a new instance, call init on 
> the new instance and return the ComponentInstance. The caller can then call 
> getInstance and call dispose when done. I struggle to find a correct/easy way 
> to do this with CSO. Am I using the best approach or not?
> 
> Thanks
> Alain
> 
> 
> On Wed, Aug 22, 2018 at 3:46 AM Tim Ward via osgi-dev  <mailto:osgi-dev@mail.osgi.org>> wrote:
> 
> 
>> On 21 Aug 2018, at 20:53, Paul F Fraser via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>> 
>> On 22/08/2018 5:40 AM, Paul F Fraser via osgi-dev wrote:
>>> On 21/08/2018 10:00 PM, Tim Ward via osgi-dev wrote:
>>>> Have you looked at what the OSC project does? It uses Vaadin, and uses the 
>>>> ViewProvider interface to provide view instances. These automatically have 
>>>> a detach listener added on creation so that they get correctly disposed 
>>>> when their parent container is closed.
>>>> 
>>>> See 
>>>> https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67
>>>>  
>>>> <https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67>
>>>>  for details.
>>>> 
>>>> Tim
>>> 
>>> Hi Tim,
>>> The R7 Spec 112.3.6 states that "SCR must unget any unreleased service 
>>> objects" and it sounds to me that the system is supposed to clean itself up.
>>> What am I missing.
>> What am I missing?
>> 
>> Apart from a question mark.. that is.
> 
> Hi Paul,
> 
> You are correct in your interpretation of the specification, however…
> 
> This only happens if you use ComponentServiceObjects, not ServiceObjects 
> (which is why this type was added to the DS spec). If you use ServiceObjects 
> directly then SCR cannot reference count them and cannot help you.
> The “leaked” instances are only cleaned up when your component is disposed by 
> SCR (for example if it becomes unsatisfied).
> 
> In this case we *are* using ComponentServiceObjects (good) but we need to 
> dispose of the referenced instance when the UI view is closed.
> 
> If we left it up to SCR to clean up, and our component wasn’t 
> deactivated/disposed between UI sessions then we would have a memory leak. In 
> general when you use ComponentServiceObjects you should think about the 
> lifecycle of the objects you create, and how they are going to be released.

Re: [osgi-dev] Disposing component instances

2018-08-22 Thread Tim Ward via osgi-dev


> On 21 Aug 2018, at 20:53, Paul F Fraser via osgi-dev  
> wrote:
> 
> On 22/08/2018 5:40 AM, Paul F Fraser via osgi-dev wrote:
>> On 21/08/2018 10:00 PM, Tim Ward via osgi-dev wrote:
>>> Have you looked at what the OSC project does? It uses Vaadin, and uses the 
>>> ViewProvider interface to provide view instances. These automatically have 
>>> a detach listener added on creation so that they get correctly disposed 
>>> when their parent container is closed.
>>> 
>>> See 
>>> https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67
>>>  for details.
>>> 
>>> Tim
>> 
>> Hi Tim,
>> The R7 Spec 112.3.6 states that "SCR must unget any unreleased service 
>> objects" and it sounds to me that the system is supposed to clean itself up.
>> What am I missing.
> What am I missing?
> 
> Apart from a question mark.. that is.

Hi Paul,

You are correct in your interpretation of the specification, however…

This only happens if you use ComponentServiceObjects, not ServiceObjects (which 
is why this type was added to the DS spec). If you use ServiceObjects directly 
then SCR cannot reference count them and cannot help you.
The “leaked” instances are only cleaned up when your component is disposed by 
SCR (for example if it becomes unsatisfied).

In this case we *are* using ComponentServiceObjects (good) but we need to 
dispose of the referenced instance when the UI view is closed.

If we left it up to SCR to clean up, and our component wasn’t 
deactivated/disposed between UI sessions then we would have a memory leak. In 
general when you use ComponentServiceObjects you should think about the 
lifecycle of the objects you create, and how they are going to be released. In 
this case the component may get an arbitrarily large (and increasing) number of 
instances over time, so it must also dispose of them. If the example just 
grabbed 2 (or 5, or 10) instances at activation and used them until 
deactivation then it would not be necessary to release them (SCR would do it 
for us).

I hope that this makes sense,

Tim


>> 
>> Paul Fraser
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
>> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> <https://mail.osgi.org/mailman/listinfo/osgi-dev>
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Disposing component instances

2018-08-21 Thread Tim Ward via osgi-dev
Have you looked at what the OSC project does? It uses Vaadin, and uses the 
ViewProvider interface to provide view instances. These automatically have a 
detach listener added on creation so that they get correctly disposed when 
their parent container is closed. 

See 
https://github.com/opensecuritycontroller/osc-core/blob/4441c96fe49e4b11ce6f380a440367912190a246/osc-ui/src/main/java/org/osc/core/broker/view/OSCViewProvider.java#L60-L67
 

 for details.

Tim

> On 21 Aug 2018, at 09:56, Alain Picard via osgi-dev  
> wrote:
> 
> When using factory components, you get a ComponentInstance and you should 
> dispose once done. For services with prototype scope, you get 
> ComponentServiceObjects from which you need to ungetService after use.
> 
> I have some cases where I am creating some UI Widgets and where I don't have 
> a well defined lifecycle that my class controls. It is expected to hand over 
> a widget and the widget will be "disposed" when necessary. The widget can add 
> actions when it is "disposed" or a listener can be added to watch for such 
> events with the UI framework.
> 
> What is the best way to handle those and is either factory component or CSO 
> better that the other, in this case and others? I was thinking that I might 
> implement a factory for managed widgets that keeps a map of provided widgets 
> and registers for events of disposed widgets and cleans up at that time, but 
> feels like an added moving part.
> 
> Alain 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Docker configuration via environment variables

2018-08-21 Thread Tim Ward via osgi-dev
I’m not sure where you think the ordering problem is between the Configurator, 
the Configuration Bundle and the bundle being configured…

As for the Configuration Plugin, I would expect a ConfigurationPlugin like this 
to be a feature of the launcher, being the one part of the system with at least 
some reason to couple to the environment.

Best Regards,

Tim

> On 21 Aug 2018, at 09:21, Peter Kriens  wrote:
> 
>> On 21 Aug 2018, at 10:11, Tim Ward via osgi-dev > <mailto:osgi-dev@mail.osgi.org>> wrote:
>> Just another vote in favour of the ConfigurationPlugin model - you can use 
>> this to post-process configurations wherever they come from (meaning it 
>> isn’t tied to the Configurer or Configurator).
>> A configuration plugin that does this sort of work is easy to write, and if 
>> using DS could be done in a lot less than 100 LoC. It can also look at 
>> things other than environment variables if you want, and if/else logic is 
>> much easier to write/maintain in Java code than it is in macros in a JSON 
>> file!
> 
> It is only easier after you solved the ordering problem of the 
> 
> * Configurator, 
> * Configuration bundle, and 
> * Configuration plugin bundle … 
> 
> It also adds quite a bit of complexity by:
> 
> * Separating the rules from the actual configuration, and 
> * Adding extra bundles.
> 
> This additional complexity is only worth it if you can reuse the rules in 
> many different places. Hmm. Maybe a configuration plugin with a macro 
> processor? :-)
> 
>   Peter Kriens
> 
>> 
>> Best Regards,
>> 
>> Tim
>> 
>>> On 20 Aug 2018, at 17:08, Mark Hoffmann via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> Hi all,
>>> 
>>> Carsten Ziegeler pointed us to the Configuration Plugin Services, that are 
>>> part of the ConfigurationAdmin specification. Together with the 
>>> Configurator specification, it could be possible to do that substitution in 
>>> such an plugin.
>>> 
>>> Regards,
>>> 
>>> Mark
>>> 
>>> 
>>> Am 20.08.2018 um 17:56 schrieb Christian Schneider via osgi-dev:
>>>> I think this would be a good extension to the configurator to also allow 
>>>> env variable replacement.
>>>> Actually I hoped it would already do this...
>>>> WDYT?
>>>> 
>>>> Christian
>>>> 
>>>> Am Mo., 20. Aug. 2018 um 17:05 Uhr schrieb Peter Kriens via osgi-dev 
>>>> mailto:osgi-dev@mail.osgi.org>>:
>>>> Are you using v2Archive enRoute or the new one?
>>>> 
>>>> The v2Archive OSGi enRoute has the simple Configurer (the predecessor of 
>>>> the OSGi R7 Configurator but with, according to some, a better name :-). 
>>>> It runs things through the macro processor you could therefore use 
>>>> environment variables to make the difference. 
>>>> 
>>>> E.g. ${env;XUZ} in the json files. Since it also supports ${if} you can 
>>>> eat your heart out! You can set environment variables in docker with -e in 
>>>> the command line when you start the container. You can also use @{ instead 
>>>> of ${ to not run afoul of the bnd processing that can happen at build 
>>>> time. I.e. the Configurer replaces all @{…} with ${…}.
>>>> 
>>>> If you are using the new R7 Configurator then you are on your own ...
>>>> 
>>>> Kind regards,
>>>> 
>>>> Peter Kriens
>>>> 
>>>> 
>>>> 
>>>> 
>>>> > On 18 Aug 2018, at 18:51, Randy Leonard via osgi-dev 
>>>> > mailto:osgi-dev@mail.osgi.org>> wrote:
>>>> > 
>>>> > To all:
>>>> > 
>>>> > We are at the point where we are deploying our OSGI enRoute applications 
>>>> > via Docker.
>>>> > 
>>>> > - A key sticking point is the syntax for embedding environment variables 
>>>> > within our configuration.json files.  
>>>> > - For example, a developer would set a hostName to ‘localhost’ for 
>>>> > development, but this same environment variable would be different for 
>>>> > QA, UAT, and Production environments
>>>> > - I presume this is the best way of allowing the same container to be 
>>>> > deployed in different environments without modification?
>>>> > - Suggestions and/or examples are appreciated.
>>>> > 
>>>> > 
>>>

Re: [osgi-dev] Docker configuration via environment variables

2018-08-21 Thread Tim Ward via osgi-dev
Just another vote in favour of the ConfigurationPlugin model - you can use this 
to post-process configurations wherever they come from (meaning it isn’t tied 
to the Configurer or Configurator).

A configuration plugin that does this sort of work is easy to write, and if 
using DS could be done in a lot less than 100 LoC. It can also look at things 
other than environment variables if you want, and if/else logic is much easier 
to write/maintain in Java code than it is in macros in a JSON file!
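
For example, a rough sketch of such a plugin (the ${env:NAME} placeholder 
syntax is just something made up for this example, not anything from the spec):

import java.util.Collections;
import java.util.Dictionary;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.ConfigurationPlugin;
import org.osgi.service.component.annotations.Component;

@Component(property = ConfigurationPlugin.CM_RANKING + ":Integer=10")
public class EnvVarConfigurationPlugin implements ConfigurationPlugin {

    @Override
    public void modifyConfiguration(ServiceReference<?> reference,
            Dictionary<String, Object> properties) {
        for (String key : Collections.list(properties.keys())) {
            Object value = properties.get(key);
            if (value instanceof String) {
                properties.put(key, substitute((String) value));
            }
        }
    }

    // Replace every ${env:NAME} with the value of the NAME environment variable
    private String substitute(String value) {
        int start = value.indexOf("${env:");
        while (start >= 0) {
            int end = value.indexOf('}', start);
            if (end < 0) {
                break;
            }
            String name = value.substring(start + 6, end);
            String replacement = System.getenv().getOrDefault(name, "");
            value = value.substring(0, start) + replacement + value.substring(end + 1);
            start = value.indexOf("${env:");
        }
        return value;
    }
}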

Best Regards,

Tim

> On 20 Aug 2018, at 17:08, Mark Hoffmann via osgi-dev  
> wrote:
> 
> Hi all,
> 
> Carsten Ziegeler pointed us to the Configuration Plugin Services, that are 
> part of the ConfigurationAdmin specification. Together with the Configurator 
> specification, it could be possible to do that substitution in such a plugin.
> Regards,
> 
> Mark
> 
> Am 20.08.2018 um 17:56 schrieb Christian Schneider via osgi-dev:
>> I think this would be a good extension to the configurator to also allow env 
>> variable replacement.
>> Actually I hoped it would already do this...
>> WDYT?
>> 
>> Christian
>> 
>> Am Mo., 20. Aug. 2018 um 17:05 Uhr schrieb Peter Kriens via osgi-dev 
>> mailto:osgi-dev@mail.osgi.org>>:
>> Are you using v2Archive enRoute or the new one?
>> 
>> The v2Archive OSGi enRoute has the simple Configurer (the predecessor of the 
>> OSGi R7 Configurator but with, according to some, a better name :-). It runs 
>> things through the macro processor you could therefore use environment 
>> variables to make the difference. 
>> 
>> E.g. ${env;XUZ} in the json files. Since it also supports ${if} you can eat 
>> your heart out! You can set environment variables in docker with -e in the 
>> command line when you start the container. You can also use @{ instead of ${ 
>> to not run afoul of the bnd processing that can happen at build time. I.e. 
>> the Configurer replaces all @{…} with ${…}.
>> 
>> If you are using the new R7 Configurator then you are on your own ...
>> 
>> Kind regards,
>> 
>> Peter Kriens
>> 
>> 
>> 
>> 
>> > On 18 Aug 2018, at 18:51, Randy Leonard via osgi-dev 
>> > mailto:osgi-dev@mail.osgi.org>> wrote:
>> > 
>> > To all:
>> > 
>> > We are at the point where we are deploying our OSGI enRoute applications 
>> > via Docker.
>> > 
>> > - A key sticking point is the syntax for embedding environment variables 
>> > within our configuration.json files.  
>> > - For example, a developer would set a hostName to ‘localhost’ for 
>> > development, but this same environment variable would be different for QA, 
>> > UAT, and Production environments
>> > - I presume this is the best way of allowing the same container to be 
>> > deployed in different environments without modification?
>> > - Suggestions and/or examples are appreciated.
>> > 
>> > 
>> > 
>> > Thanks,
>> > Randy Leonard
>> > 
>> > ___
>> > OSGi Developer Mail List
>> > osgi-dev@mail.osgi.org 
>> > https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> > 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
>> 
>> -- 
>> -- 
>> Christian Schneider
>> http://www.liquid-reality.de 
>> 
>> Computer Scientist
>> http://www.adobe.com 
>> 
>> 
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>> 
> -- 
> Mark Hoffmann
> M.A. Dipl.-Betriebswirt (FH)
> Geschäftsführer
> 
> Tel:+49 3641 384 910 0
> Mobil:  +49 175 701 2201  
> E-Mail: m.hoffm...@data-in-motion.biz 
> Web: www.datainmotion.de  
> 
> Data In Motion Consulting GmbH
> Kahlaische Straße 4
> 07745 Jena
> 
> Geschäftsführer
> Mark Hoffmann
> Jürgen Albert
> 
> Jena HRB 513025
> Steuernummer 162/107/05779
> USt-Id DE310002614
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Question about consistency and visibility

2018-08-14 Thread Tim Ward via osgi-dev
> DS guarantees that activate is called before the service is registered and 
> thus available to others.

This is not quite correct (although it is correct for immediate components). 
The service is registered after it becomes satisfied (all of its mandatory 
service and configuration dependencies are available) but before it is 
activated. This is because DS is lazy, and won’t instantiate/activate a 
component until it is needed. Therefore your activate method is not called 
until someone tries to get your component’s service, at which point the 
activate method is called and run to successful completion before the service 
is given to the requesting bundle.

You can rely on the activate method having successfully completed before the 
instance is available to any other bundle. This is the important part of the 
guarantee from a happens before perspective, and is the reason that using a 
static policy is both simple and thread safe. Obviously if your component is 
immediate then it will be activated as soon as it is satisfied. If the 
immediate component advertises a service then nobody will be able to access the 
instance until after the activate method returns.

Best Regards,

Tim

> On 14 Aug 2018, at 07:33, Peter Kriens via osgi-dev  
> wrote:
> 
> Your understanding is correct with respect to the _service_. DS guarantees 
> that activate is called before the service is registered and thus available 
> to others.  The service is unregistered before you deactivate is called. 
> Nulling fields is generally unnecessary unless you want to ensure an NPE if 
> an object uses your service after unregistering.
> 
> Kind regards,
> 
>   Peter Kriens
> 
>> On 14 Aug 2018, at 05:20, David Leangen via osgi-dev > > wrote:
>> 
>> 
>> Hi!
>> 
>> In a concurrent system, if a class is immutable, the problem is simplified 
>> and the class can be used without fear by multiple threads because (1) its 
>> state does not change, and (2) its state is guaranteed to be visible.
>> 
>> Example:
>> 
>> /**
>>  * The class is immutable because the fields are both immutable types
>>  * and are "private + final”. The fields are guaranteed to be visible
>>  * to all threads after construction. In other words, there is a
>>  * “happens-before” constraint on the fields.
>>  */
>> public class SimpleImmutableClass {
>> private final String value1;
>> private final int value2;
>> 
>> public SimpleImmutableClass( String aString, int anInt ) {
>> value1 = aString;
>> value2 = anInt;
>> }
>> 
>> public String getValue1() {
>> return value1;
>> }
>> 
>> public int getvalue2() {
>> return value2;
>> }
>> }
>> 
>> My understanding is that DS will provide the same happens-before constraint 
>> to the fields in the following service, so presuming that there is no method 
>> exposed to change the field values, the service is effectively immutable and 
>> can be used without fear in a concurrent context. So in the following, 
>> value1 and value2 are guaranteed to be visible to all threads thanks to the 
>> happens-before constraint placed on the fields during activation:
>> 
>> /**
>>  * The LogService is basically just added to show that the component is used
>>  * in a static way, as only a static component can be effectively immutable.
>>  */
>> @Component
>> public class EffectivelyImmutableService {
>> private String value1;
>> private int value2;
>> 
>> @Reference private LogService logger;
>> 
>> @Activate
>> void activate( Map<String, Object> properties ) {
>> value1 = (String)properties.get( "value1" );
>> value2 = (int)properties.get( "value2" );
>> }
>> 
>> /**
>>  * Hmm, but if an instance is never reused, then wouldn't it be 
>> completely
>>  * unnecessary to deactivate()??
>>  */
>> void deactivate() {
>> value1 = null;
>> value2 = -1;
>> }
>> 
>> public String getValue1() {
>> logger.log( LogService.LOG_INFO, String.format( "Value of String is 
>> %s", value1 ) );
>> return value1;
>> }
>> 
>> public int getvalue2() {
>> logger.log( LogService.LOG_INFO, String.format( "Value of int is 
>> %s", value2 ) );
>> return value2;
>> }
>> }
>> 
>> 
>> Is somebody able to confirm my understanding?
>> 
>> Thanks!!
>> =David
>> 
>> ___
>> OSGi Developer Mail List
>> osgi-dev@mail.osgi.org 
>> https://mail.osgi.org/mailman/listinfo/osgi-dev
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Eclipse Extension-points and EMF in OSGI

2018-08-09 Thread Tim Ward via osgi-dev
I would expect that Mark Hoffman or Jürgen Albert might have some useful 
pointers, I’m pretty sure that they’re heavy users of EMF.

Best Regards,

Tim

> On 9 Aug 2018, at 09:20, Alain Picard via osgi-dev  
> wrote:
> 
> Scott,
> 
> I noticed the split of the o.e.core.runtime and am already using the 
> o.e.equinox.common + supplement and running some stuff like that with Felix. 
> But that part doesn't include of the support for extension points that is in 
> the other "half", hence my question.
> 
> Alain
> 
> 
> On Thu, Aug 9, 2018 at 12:18 AM Scott Lewis via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> IOn 8/8/2018 7:43 AM, Alain Picard via osgi-dev wrote:
> > Working through our move from RCP to a generic OSGI solution, and I am 
> > stuck with a couple of questions.
> >
> > There was an issue a while ago for EMF that resulted in a generation 
> > setting to support generic OSGI frameworks and not only 
> > Eclipse/Equinox. But the resulting bundles still have plugin.xml and 
> > expose extension points. My understanding is that this part of Eclipse 
> > is not covered in the portable part of o.e.core.runtime. We also have 
> > a number of our own extension-points, some that we have already 
> > converted and others that are still around.
> >
> > So anyone has successfully used EMF and/or Extension points outside of 
> > a full Eclipse environment?
> 
> Yes wrt extension registry/extension points.
> 
> o.e.core.runtime is a split package, split between bundles 
> o.e.equinox.common and o.e.equinox.registry
> 
> I'm not sure of the justification for split packages, but I think it was 
> done to maintain backward compatibility in eclipse plugins.
> 
> The version I used was a few years ago, but at that time these two 
> bundles...along with equinox...would run the extension registry (i.e. 
> process extension points/extensions on startup).  AFAIK that's still the 
> case.
> 
> If you want to use a framework other than equinox, I know for certain 
> that o.e.equinox.common works just fine on Felix...as long as one also 
> includes this bundle [1].
> 
> I don't think EMF requires anything in addition to o.e.equinox.common 
> and o.e.equinox.registry but I'm not completely sure about that.
> 
> Scott
> 
> [1] org.eclipse.equinox.supplement  - available via equinox or maven central
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Reference not injected to component

2018-08-07 Thread Tim Ward via osgi-dev
According to the documentation KeyCloak can be provided as a Servlet Filter, 
which would avoid the need for a web.xml/WAB. 
https://www.keycloak.org/docs/2.5/securing_apps/topics/oidc/java/servlet-filter-adapter.html#_servlet_filter_adapter

Best Regards,

Tim

Sent from my iPhone

> On 7 Aug 2018, at 17:31, Nhut Thai Le  wrote:
> 
> KeycloakSecurityContext
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Reference not injected to component

2018-08-07 Thread Tim Ward via osgi-dev
You are aware that Servlet Filters are also supported by the whiteboard? See 
https://osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html#d0e121055
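
For example, a hypothetical security filter registered through the whiteboard 
might look something like this sketch:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.http.whiteboard.propertytypes.HttpWhiteboardFilterPattern;

@Component
@HttpWhiteboardFilterPattern("/*")
public class SecurityFilter implements Filter {

    @Override
    public void init(FilterConfig config) throws ServletException {
        // initialise the underlying security library here
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        // perform the authentication / authorisation check, then continue
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // clean up
    }
}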
 


Just to reiterate my warning, continuing to develop a WAB will significantly 
affect your ability to reliably take advantage of OSGi specifications and 
services from your web code going forward. If you do have more problems then 
you will likely find that the future advice from this list is that you should 
migrate away from using a WAB.

Best Regards,

Tim

> On 7 Aug 2018, at 16:12, Nhut Thai Le  wrote:
> 
> Thanks Tim,
> 
> I'll stick with the WAB for now and use the BundleContext to get my service 
> since I need to config some security filter on the web.xml.
> 
> Thai
> 
> On Tue, Aug 7, 2018 at 4:15 AM, Tim Ward  > wrote:
> Hi,
> 
> I’m afraid that if you’re using a WAB file then you absolutely can’t use DS, 
> and vice versa. The Web Application Bundle specification exists as a 
> mechanism to allow people to move from a non-OSGi world into OSGi, and there 
> are a number of restrictions as a result. The one that you’re hitting is that 
> in a Web Application the Servlet container is responsible for instantiating 
> and managing the lifecycle of the Servlet instances. As a result you are 
> getting two instances created, one by DS which is injecting the AdminBroker 
> service, and one by the Servlet Container which isn’t injecting anything.
> 
> Assuming that this is a new project then by far the simplest way to fix this 
> is to completely avoid making a WAB, and just to use the Http Whiteboard. 
> This will simplify things immensely, and handle the service dynamics easily. 
> If you can’t avoid using the WAB then you do have access to the BundleContext 
> in your ServletContext (see 
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.war.html#d0e101441 
> ). 
> You would have to use this to get the service you want to use (and release 
> it, and deal with what happens if it isn’t available).
> 
> In summary, WABs exist for specific use cases when you can’t be properly 
> modular, or for when you have to work both inside and outside OSGi. It’s not 
> recommended to use them as your first development option.
> 
> I wish you luck with your experiments!
> 
> Best Regards,
> 
> Tim
> 
>> On 6 Aug 2018, at 22:10, Nhut Thai Le > > wrote:
>> 
>> Hi Tim,
>> 
>> The servlet is inside a WAB file which has a web.xml:
>> 
>> 
>> <web-app xmlns="http://java.sun.com/xml/ns/javaee"
>>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
>>  http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
>>  version="3.0">
>>  <servlet>
>>   <servlet-name>HelloWorld</servlet-name>
>>   <servlet-class>com.webapp.HelloWorld</servlet-class>
>>  </servlet>
>>  <servlet-mapping>
>>   <servlet-name>HelloWorld</servlet-name>
>>   <url-pattern>/</url-pattern>
>>  </servlet-mapping>
>> </web-app>
>> 
>> I'm not trying to use the whiteboard pattern in the first place but rather 
>> looking for a way to inject my AdminBroker service into my servlet so I turn 
>> my servlet into a component in order to use the reference.
>> 
>> Thai
>> 
>> 
>> On Mon, Aug 6, 2018 at 4:44 PM, Tim Ward > > wrote:
>> I don’t see any Http Service whiteboard properties on the Servlet component. 
>> How are you registering it with the Servlet container?
>> 
>> Tim
>> 
>> Sent from my iPhone
>> 
>> On 6 Aug 2018, at 16:22, Nhut Thai Le via osgi-dev > > wrote:
>> 
>>> Hello,
>>> 
>>> I have a servlet defined like this:
>>> 
>>> @Component(service = Servlet.class)
>>> public class HelloWorld extends HttpServlet {
>>> @Reference(cardinality = ReferenceCardinality.MANDATORY)
>>> private AdminBroker adminBroker;
>>> 
>>> protected void doGet(HttpServletRequest request, HttpServletResponse 
>>> response) throws ServletException, IOException {   
>>> adminBroker.doSomething();
>>> }
>>> }
>>> 
>>> My AdminBroker implementation looks like:
>>> @Component(service=AdminBroker.class)
>>> public class AdminBrokerImpl implements AdminBroker {
>>> @Activate
>>> private void init() {
>>> String param1 = "some text";
>>> }
>>> }
>>> 
>>> When I started the env (felix 6 with pax-web), I can see the AdminBroker 
>>> instantiated (hit a break point in the init of my AdminBroker 
>>> implementation) but when the web request arrives and the doGet method is 
>>> called, the adminBroker is null.
>>> 
>>> Does anyone have an idea what may go wrong here?
>>> 
>>> Thai Le
>>> 
>>> -- 
>>> Castor Technologies Inc
>>> 460 rue St-Catherine St 
>>> 

Re: [osgi-dev] Reference not injected to component

2018-08-07 Thread Tim Ward via osgi-dev
Hi,

I’m afraid that if you’re using a WAB file then you absolutely can’t use DS, 
and vice versa. The Web Application Bundle specification exists as a mechanism 
to allow people to move from a non-OSGi world into OSGi, and there are a number 
of restrictions as a result. The one that you’re hitting is that in a Web 
Application the Servlet container is responsible for instantiating and managing 
the lifecycle of the Servlet instances. As a result you are getting two 
instances created, one by DS which is injecting the AdminBroker service, and 
one by the Servlet Container which isn’t injecting anything.

Assuming that this is a new project then by far the simplest way to fix this is 
to completely avoid making a WAB, and just to use the Http Whiteboard. This 
will simplify things immensely, and handle the service dynamics easily. If you 
can’t avoid using the WAB then you do have access to the BundleContext in your 
ServletContext (see 
https://osgi.org/specification/osgi.cmpn/7.0.0/service.war.html#d0e101441 
). 
You would have to use this to get the service you want to use (and release it, 
and deal with what happens if it isn’t available).
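
For example, a rough sketch of that lookup in the servlet from this thread 
(AdminBroker is the service interface from the original question, and the 
osgi-bundlecontext attribute is the one defined by the WAB specification):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class HelloWorld extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        BundleContext ctx = (BundleContext) getServletContext()
                .getAttribute("osgi-bundlecontext");
        ServiceReference<AdminBroker> ref = ctx.getServiceReference(AdminBroker.class);
        if (ref == null) {
            // decide what to do when the service is not available
            return;
        }
        AdminBroker broker = ctx.getService(ref);
        try {
            broker.doSomething();
        } finally {
            // always release the service when finished with it
            ctx.ungetService(ref);
        }
    }
}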

In summary, WABs exist for specific use cases when you can’t be properly 
modular, or for when you have to work both inside and outside OSGi. It’s not 
recommended to use them as your first development option.

I wish you luck with your experiments!

Best Regards,

Tim

> On 6 Aug 2018, at 22:10, Nhut Thai Le  wrote:
> 
> Hi Tim,
> 
> The servlet is inside a WAB file which has a web.xml:
> 
> 
> <web-app xmlns="http://java.sun.com/xml/ns/javaee"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
>  http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
>  version="3.0">
>  <servlet>
>   <servlet-name>HelloWorld</servlet-name>
>   <servlet-class>com.webapp.HelloWorld</servlet-class>
>  </servlet>
>  <servlet-mapping>
>   <servlet-name>HelloWorld</servlet-name>
>   <url-pattern>/</url-pattern>
>  </servlet-mapping>
> </web-app>
> 
> I'm not trying to use the whiteboard pattern in the first place but rather 
> looking for a way to inject my AdminBroker service into my servlet so I turn 
> my servlet into a component in order to use the reference.
> 
> Thai
> 
> 
> On Mon, Aug 6, 2018 at 4:44 PM, Tim Ward  > wrote:
> I don’t see any Http Service whiteboard properties on the Servlet component. 
> How are you registering it with the Servlet container?
> 
> Tim
> 
> Sent from my iPhone
> 
> On 6 Aug 2018, at 16:22, Nhut Thai Le via osgi-dev  > wrote:
> 
>> Hello,
>> 
>> I have a servlet defined like this:
>> 
>> @Component(service = Servlet.class)
>> public class HelloWorld extends HttpServlet {
>>  @Reference(cardinality = ReferenceCardinality.MANDATORY)
>>  private AdminBroker adminBroker;
>> 
>>  protected void doGet(HttpServletRequest request, HttpServletResponse 
>> response) throws ServletException, IOException {   
>>  adminBroker.doSomething();
>>  }
>> }
>> 
>> My AdminBroker implementation looks like:
>> @Component(service=AdminBroker.class)
>> public class AdminBrokerImpl implements AdminBroker {
>>  @Activate
>>  private void init() {
>>  String param1 = "some text";
>>  }
>> }
>> 
>> When I started the env (felix 6 with pax-web), I can see the AdminBroker 
>> instantiated (hit a break point in the init of my AdminBroker 
>> implementation) but when the web request arrives and the doGet method is 
>> called, the adminBroker is null.
>> 
>> Does anyone have an idea what may go wrong here?
>> 
>> Thai Le
>> 
>> -- 
>> Castor Technologies Inc
>> 460 rue St-Catherine St 
>>  
>> Ouest, Suite 613 
>> Montréal, Québec H3B-1A7
>> (514) 360-7208 o
>> (514) 798-2044 f
>> n...@castortech.com 
>> www.castortech.com  
>> 
Re: [osgi-dev] Reference not injected to component

2018-08-06 Thread Tim Ward via osgi-dev
I don’t see any Http Service whiteboard properties on the Servlet component. 
How are you registering it with the Servlet container?

Tim

Sent from my iPhone

> On 6 Aug 2018, at 16:22, Nhut Thai Le via osgi-dev  
> wrote:
> 
> Hello,
> 
> I have a servlet defined like this:
> 
> @Component(service = Servlet.class)
> public class HelloWorld extends HttpServlet {
>   @Reference(cardinality = ReferenceCardinality.MANDATORY)
>   private AdminBroker adminBroker;
> 
>   protected void doGet(HttpServletRequest request, HttpServletResponse 
> response) throws ServletException, IOException {   
>   adminBroker.doSomething();
>   }
> }
> 
> My AdminBroker implementation looks like:
> @Component(service=AdminBroker.class)
> public class AdminBrokerImpl implements AdminBroker {
>   @Activate
>   private void init() {
>   String param1 = "some text";
>   }
> }
> 
> When I started the env (felix 6 with pax-web), I can see the AdminBroker 
> instantiated (hit a break point in the init of my AdminBroker implementation) 
> but when the web request arrives and the doGet method is called, the 
> adminBroker is null.
> 
> Does anyone have an idea what may go wrong here?
> 
> Thai Le
> 
> -- 
> Castor Technologies Inc
> 460 rue St-Catherine St Ouest, Suite 613 
> Montréal, Québec H3B-1A7
> (514) 360-7208 o
> (514) 798-2044 f
> n...@castortech.com
> www.castortech.com 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Extending Services

2018-08-03 Thread Tim Ward via osgi-dev
I agree with Neil that, in general, you don’t want to reference a service of 
the same type that you advertise without something to prevent you wiring to 
yourself.

Service properties are a good way to do this, but so is type safety. For 
example does it really make sense for the AppSessionServiceImpl to advertise as 
a CoreSessionService? Would someone looking in the service registry for a 
CoreSessionService really be ok with having the ApplicationSessionService come 
back? In fact, what I’m trying to say is “is it really the right thing for 
these advertised interfaces to extend one another?”. 

The often stated rule is to prefer composition over inheritance, which the 
implementation is doing here, but the API isn’t. It seems odd to me that an 
“Application specific session service” would also be a “core platform session 
service” as well as a generic session service. You might well have some 
interfaces defining the common methods, but the advertised service interfaces 
really feel like they should be different type hierarchies. At this point you 
would no longer run the risk of wiring to a service that isn’t appropriate (for 
example one application delegating to another application seems like a bad 
idea).
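
As a rough sketch of the service property idea (the names, the app=true property and the doSomething() method are purely illustrative, following the suggestion Neil makes below):

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // The app-specific implementation advertises itself with app=true...
    @Component(service = SessionService.class, property = "app=true")
    public class AppSessionServiceImpl implements SessionService {

        // ...and only binds to a SessionService that is *not* app-specific,
        // so it can never wire to itself
        @Reference(target = "(!(app=*))")
        private SessionService delegate;

        @Override
        public void doSomething() {
            delegate.doSomething();
        }
    }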

Best Regards,

Tim

> On 3 Aug 2018, at 13:41, Neil Bartlett via osgi-dev  
> wrote:
> 
> Are these DS components?
> 
> I'm not entirely sure what would happen if a component both provides a 
> service and binds to the same service type. In theory it might be able to 
> bind to itself, especially if it is a lazy (delayed) service component, 
> because then the service registration exists before the component is 
> activated. But possibly SCR prevents this scenario, I'm not sure.
> 
> A safe way to protect against this regardless is to use properties and 
> filters. For example the AppSessionServiceImpl can provide the SessionService 
> with a property such as app=true. Then it would bind to SessionService with a 
> target filter of (!(app=*)).
> 
> Neil
> 
> On Fri, Aug 3, 2018 at 1:17 PM, Alain Picard via osgi-dev 
> mailto:osgi-dev@mail.osgi.org>> wrote:
> Facing an issue and looking for the right pattern to apply where I have a 
> service interface that is extended and at run time I want to always run the 
> most appropriate one. Each extension provides additional method to the API.
> 
> As an example (posted here: https://github.com/castortech/osgi-test-session 
> ), there's a 
> CoreSessionService ifc (no direct implementation) that will be used by core 
> part of the code. There is an extension to it (SessionService ifc) that adds 
> some new methods and can be used by common level modules and then we have the 
> AppSessionService that is app specific and can exist for some apps and not 
> for other, which would then rely on the SessionServiceImpl.
> 
> My issue is that for example the AppSessionServiceImpl needs a reference to 
> the SessionService and this either creates a circular dependency (tried to 
> use the Top/Bottom approach in the seminal Concurrency paper from enRoute) 
> but that is not working so far.
> 
> Any hints on how to proceed here.
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Life-cycle race condition

2018-08-02 Thread Tim Ward via osgi-dev
As Peter has said, once you go asynchronous and/or long running the possibility 
that things can go wrong increases rapidly. That isn’t to say that your 
solution will need to be as involved as some of the ones that Peter is 
suggesting. It honestly sounds as though you could avoid this problem through 
the use of a couple of “closed” checks in the relevant components, but again it 
would be easier to tell for certain if you could share the code.

Best Regards,

Tim 

> On 2 Aug 2018, at 10:58, David Leangen via osgi-dev  
> wrote:
> 
> 
> Wow! That is a lot to digest.
> 
> I’ll need to get back to you in a few days/weeks/months/years. :-D
> 
> Thanks so much!!
> 
> 
> Cheers,
> =David
> 
> 
> 
> 
>> On Aug 2, 2018, at 18:38, Peter Kriens > > wrote:
>> 
>> 
>> ## Keep Passing the Open Windows
>> 
>> You did read the classic [v2Archive OSGi enRoute App note][5] about this 
>> topic? It has been archived by the OSGi to [v2Archive OSGi enRoute web 
>> site][3]. It handles a lot of similar cases. There is an accompanying 
>> workspace [v2Archive OSGi enRoute osgi.enroute.examples.concurrency 
>> ][7]
>> 
>> Anyway, I am not sure if you want to solve this pragmatic or pure?
>> 
>> ## Pragmatic 
>> 
>> Pragmatic means there is a tiny chance you hit the window where you check if 
>> the MyService is unregistered and then use it. If you're really unlucky you 
>> just hit the unregistration after you checked it but before you can use it. 
>> It works when the unregistration of MyService is rare and the work is long. 
>> Yes, it can fail but so can anything so you should be prepared for it. 
>> 
>> Pragmatic works best as follows:
>> 
>>@Component
>>public class MyClass extends Thread {   
>>   @Reference MyService myService;
>>
>>   @Activate void activate()  { start(); }
>>   @Deactivate void deactivate()  { interrupt(); }
>>
>>   public void run() {
>>  while (!isInterrupted()) {
>> try {
>> MyResult result = doHardWork();
>> if (!isInterrupted())
>> myService.setResult(result);
>> } catch (Exception e) { /* TODO */ }
>>  }
>>   }
>>}
>> 
>> Clearly there is a race condition. 
>> 
>> 
>> 
>> 
>> ## Pure 
>> 
>> I once had a use case where we had whiteboard listeners that received 
>> events. The frequency was high and some not-so-good event listeners took too 
>> much time in their callback. This created a quite long window where it could 
>> fail so it often did. For that use case I created a special highly optimized 
>> class that could delay the removal of the listener while it was being 
>> dispatched. To make it have absolutely minimal overhead was tricky, I even 
>> made an Alloy model of it that found some design errors. Anyway, sometimes 
>> you have pick one of the bad sides, this was one where delaying the 
>> deactivate was worth it.
>> 
>> So how would you make this 'purer' by delaying the deactivation until you 
>> stopped using it? Since the service is still supposed to be valid during 
>> deactivate we could make the setResult() and the deactivate() methods 
>> exclude each other. That is, we need to make sure that no interrupt can 
>> happen when we check for the isInterrupted() and call myService.setResult(). 
>> We could use heavy locks but synchronized works fine for me when you realize 
>> some of its caveats:
>> 
>> * Short blocks
>> * Ensure you cannot create deadlocks
>> 
>> So there must be an explicit contract that the MyService is not going to 
>> stay away for a long time nor call lots of other unknown code that could 
>> cause deadlocks. After all, we're blocking the deactivate() method which is 
>> very bad practice in general. So you will trade off one purity for another.
>> 
>>@Component
>>public class MyClass extends Thread {   
>>   @Reference MyService myService;
>>
>>   @Activate void activate()  { start(); }
>>   @Deactivate synchronized void deactivate() { interrupt(); }
>>
>>   public void run() {
>>  while (!isInterrupted()) {
>> try {
>> MyResult result = doHardWork();
>>  synchronized(this) {
>> if (!isInterrupted()) {
>> myService.setResult(result);
>>  }
>>  }
>> } catch (Exception e) { /* TODO */ }
>>  }
>>   }
>>}
>> 
>> This guarantees what you want … However (you knew this was coming!) there is 
>> a reason the service gets deactivated. Even though the _service_ is still 
>> valid at that point, there is a reason the _service object_ indicated its 
>> unwillingness to play. For example, if MyService was remoted then the 
>> connection might have been lost. In general, when you call a service you 
>> should be prepared that it fails. (That is why 

Re: [osgi-dev] Life-cycle race condition

2018-08-02 Thread Tim Ward via osgi-dev
Hi David,

Are you able to share this code, because it sounds as though the thread 
signalling model you’re using is wrong? 

From what you’re saying it sounds like the problem is in this other component. 
Specifically, that it is performing long running work that isn’t paying 
attention to its own deactivation state. Equally, there’s no need for a 
"myServiceIsStillActive()” check because you should only ever care about 
whether your own component is active. If a service dependency is deactivated 
then it will be unregistered *first* so that the dependent component gets 
deactivated first. This stops you from having to care about anyone’s lifecycle 
but your own.

As for using shutdownNow(). This is *encouraged* to interrupt the threads, but 
not guaranteed. You also need to make sure to actually check the interrupt 
status. As it sounds like your component gives up control of the thread into 
another component which has the problem then this may require more extensive 
changes. I would need to see code to be sure.
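
As a small sketch of that cooperation (doSomething() stands in for one unit of the long-running work):

    // using java.util.concurrent.ExecutorService / Executors
    ExecutorService executor = Executors.newSingleThreadExecutor();

    executor.submit(() -> {
        for (int i = 0; i < 100; i++) {
            // shutdownNow() interrupts the worker thread, but the task must
            // actually look at the interrupt status and abandon its work
            if (Thread.currentThread().isInterrupted()) {
                return;
            }
            doSomething();
        }
    });

    // typically called from the @Deactivate method
    executor.shutdownNow();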

Best Regards,

Tim

> On 2 Aug 2018, at 01:01, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi Tim,
> 
> Thanks, and this is good advice. The example you give is when the thread is 
> in the same component that is being deactivated. In this case, as you show, 
> it is quite trivial to track the activation state of the component in order 
> to shut down the thread.
> 
> In my case, the trouble I am having is that the long-running thread is in a 
> component that is different from the component that is getting deactivated. 
> For instance, building on your example:
> 
> @Component
> public class MyClass {
> 
>// Note that I am using a STATIC service
>@Reference private MyService myService;
> 
>private final AtomicBoolean closed = new AtomicBoolean();
> 
>@Activate
>void start() {
>new Thread(this::longStart).start();
>}
> 
> 
>@Deactivate
>void stop() {
>closed.set(true);
>}
> 
>void longStart() {
>for(int i = 0; i < 100; i++) {
> 
>// This only works if the service object is not stateful, 
> otherwise we need
>// to do a check and throw away an intermediate invalidated result
> 
>// Understood, but unfortunately the service object is stateful.
> 
>// The problem is that the dependency can be deactivated at any 
> time, and this
>// is happening before “closed" in this component get set to 
> “true". I do not know how
>// to detect the deactivation of the dependency. I need to 
> determine this pre-emptively,
>// not after-the-fact. Otherwise the result will be destructive.
> 
>doSomethingWithMyService(myService);
> 
>// Ideally I would like to do something like this:
>if (myServiceIsStillActive())
>doSomethingWithMyService(myService);
>}
>}
> }
> 
> In the second example, there is a dynamic @Reference, so I see the point of 
> using an AtomicReference. However, I am using a static @Reference, so I doubt 
> that just putting in an AtomicReference will change the timing problem.
> 
> Any thoughts?
> 
> 
> 
> By the way, instead of using a “closed” variable, I am doing something like 
> this:
> 
> @Activate
> void activate()
> {
> executor = Executors.newSingleThreadExecutor();
> }
> 
> void deactivate()
> {
> executor.shutdownNow();
> }
> 
> Then I only need to test for Thread.interrupted(). I assume this has the same 
> effect as having the check for “closed".
> 
> Cheers,
> =David
> 
> 
> 
>> On Aug 1, 2018, at 16:59, Tim Ward > > wrote:
>> 
>> Hi David,
>> 
>> In addition to interrupting the worker thread (which is a good idea). There 
>> are a couple of useful things that you can do using the support from 
>> java.util.concurrent. For example, setting a closed state:
>> 
>> 
>> @Component
>> public class MyClass {
>> 
>>private final AtomicBoolean closed = new AtomicBoolean();
>> 
>>@Activate
>>void start() {
>>new Thread(this::longStart).start();
>>}
>> 
>> 
>>@Deactivate
>>void stop() {
>>closed.set(true);
>>}
>> 
>>void longStart() {
>>for(int i = 0; i < 100; i++) {
>>if(closed.get()) {
>>break;
>>}
>>doSomething();
>>}
>>}
>> }
>> 
>> Also if your references are dynamic then you should treat them carefully
>> 
>> @Component
>> public class MyClass implements MySlowService {
>> 
>>private final AtomicReference<MyService> myRef = new AtomicReference<>();
>> 
>>@Reference(policy=DYNAMIC)
>>void setReference(MyService service) {
>>myRef.set(service)
>>}
>> 
>>void unsetReference(MyService service) {
>>// Note that it is *not* safe to just do a set null, see Compendium 
>> 112.5.12
>>myRef.compareAndSet(service, null);
>>}
>> 
>>public void 

Re: [osgi-dev] Life-cycle race condition

2018-08-01 Thread Tim Ward via osgi-dev
Hi David,

In addition to interrupting the worker thread (which is a good idea). There are 
a couple of useful things that you can do using the support from 
java.util.concurrent. For example, setting a closed state:


@Component
public class MyClass {

private final AtomicBoolean closed = new AtomicBoolean();

@Activate
void start() {
new Thread(this::longStart).start();
}


@Deactivate
void stop() {
closed.set(true);
}

void longStart() {
for(int i = 0; i < 100; i++) {
if(closed.get()) {
break;
}
doSomething();
}
}
}

Also if your references are dynamic then you should treat them carefully

@Component
public class MyClass implements MySlowService {

private final AtomicReference<MyService> myRef = new AtomicReference<>();

@Reference(policy=DYNAMIC)
void setReference(MyService service) {
myRef.set(service)
}

void unsetReference(MyService service) {
// Note that it is *not* safe to just do a set null, see Compendium 112.5.12
myRef.compareAndSet(service, null);
}

public void longRunningTask() {
for(int i = 0; i < 100; i++) {
// This only works if the service object is not stateful, otherwise we need
// to do a check and throw away an intermediate invalidated result

MyService myService = myRef.get();
doSomethingWithMyService(myService);
}
}
}

I hope you find these helpful.

Tim

> On 1 Aug 2018, at 05:44, David Leangen via osgi-dev  
> wrote:
> 
> 
> Hi!
> 
> I am running into a situation where, what I think is happening is:
> 
> Component A gets instantiated
> Component B
>  - references A
>  - gets satisfied once A is satisfied 
>  - kicks off a long-running process when one of its methods are called
>  - the long-running process is run in a different thread, with a Promise
> Component A is no longer satisfied
> But
>  - the long-running process is still running
>  - the long-running process now references an invalid Component A
>  - the long-running thread fails because of the invalid state of Component A
> Component B is no longer satisfied
> 
> 
> So, the long-running component messes things up, but its component has not 
> yet shut down even though its process is still happily running in another 
> thread.
> 
> I can think of two possible solutions, but not sure which is best and not 
> sure how to implement:
> 
> 1) Figure out a way to share an ExecutorService between “related” components 
> so that when one component 
>  shuts down it will signal to the other related components that their 
> threads are now interrupted
> 
> 2) In the long-running process, determine if the component that provides the 
> required service
>   is still active before continuing with the havoc-wreaking process
> 
> 
> Does this sound about right?
> 
> How would I actually accomplish either of these?
> 
> 
> Thanks!
> =David
> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Setting reference target in component

2018-07-31 Thread Tim Ward via osgi-dev
This sounds like a variation on this question 
 asked a few 
weeks ago in the osgi-dev list.

In summary, there are a couple of ways to achieve what you’re trying to do, and 
using configuration admin may or may not be the best approach given that you 
want to request an instance which you directly control the lifecycle of. 
Hopefully the thread will give you the answer that you’re looking for.
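
One way to keep direct control of instance lifecycles (not necessarily the approach settled on in that thread) is a DS component factory; a rough sketch, assuming DS 1.4 and invented names:

    import org.osgi.service.component.ComponentFactory;
    import org.osgi.service.component.ComponentInstance;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Instances are only created on demand via the factory, never by SCR itself
    @Component(factory = "session.scoped.service")
    public class SessionScopedServiceImpl implements SessionService {
        // ...
    }

    // (in a separate source file)
    @Component(service = SessionManager.class)
    public class SessionManager {

        @Reference(target = "(component.factory=session.scoped.service)")
        private ComponentFactory<SessionService> factory;

        public void onSessionStart() {
            ComponentInstance<SessionService> instance = factory.newInstance(null);
            SessionService service = instance.getInstance();
            // keep the instance with the session, use the service, and call
            // instance.dispose() when the session ends
        }
    }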

Best Regards,

Tim

> On 31 Jul 2018, at 13:24, Alain Picard via osgi-dev  
> wrote:
> 
> I need to configure some Component to be session scoped. I have followed the 
> article from Dirk at 
> http://blog.vogella.com/2017/02/24/control-osgi-ds-component-instances-via-configuration-admin/
>  
> 
>  which matches what I want.
> 
> But in my case the Component that configures the service is also the has a 
> reference to this service. I read a couple of SO about this 
> (https://stackoverflow.com/questions/47393876/dynamically-setting-target-property-in-osgi-reference-annotatation
>  
> 
>  and 
> https://stackoverflow.com/questions/21166070/osgi-declarative-services-filter-references-at-runtime
>  
> )
>  but I'm struggling to figure out the best way to both specify a 
> configuration as per the 1st article and then set the target in my component 
> to the resulting instance (and/or getting a component configuration and set 
> the target for my service).
> 
> Thanks
> Alain
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] OSGI enroute bndrun file

2018-07-27 Thread Tim Ward via osgi-dev
Hi Kevin,

You should never need to add SLF4J as a requirement, but you will need to add a 
logging implementation as a requirement if you wish to configure and use a 
specific logging implementation (for example logback). This, however, would be 
an important detail of your application because it would be dictated by the 
features you need and would affect the configuration that you write; as a result 
result it ends up as an explicit requirement.

The SLF4J API, on the other hand, is simply a detail of the logging API used by 
your bundles, or of the API used by the implementation that you’re bridging to 
the OSGi log service. Does that make sense?

The run requirements are really just defining the things that you explicitly 
want to select. These will be things like application entry points (e.g. your 
Servlets/JAX-RS resources) and anything that you explicitly want to configure 
(and are therefore tied to the PID).

Best Regards,

Tim

> On 27 Jul 2018, at 16:33, Matthews, Kevin  
> wrote:
> 
> Ok thanks, your explanation helps Tim. However, I understand there is a clear 
> distinction between both files and always a need for dependency management 
> (pom/build.gradle). But there seems to be some commonality between both files 
> when defining your bundles. For example slf4j is added as a transitive 
> dependency in the pom but may also need to be added to your run requirements if 
> you are using the logservice bundle.
>  
> From: Tim Ward [mailto:tim.w...@paremus.com] 
> Sent: Friday, July 27, 2018 11:17 AM
> To: BJ Hargrave; OSGi Developer Mail List
> Cc: Matthews, Kevin
> Subject: Re: [osgi-dev] OSGI enroute bndrun file
>  
> Note that BJ is referencing one possible way to use the 
> bnd-resolver-maven-plugin and bnd-export-maven-plugin.
>  
> The enRoute projects set up their bndrun files to use standalone indexes 
> generated by the bnd-indexer-maven-plugin. These indexes are generated using 
> the dependencies from the pom file of the project. The benefit of this 
> solution is that indexes can be generated for different dependency scopes, 
> for example a testing index is generated which includes test-scoped 
> dependencies. This index is then used by the debug bndrun to include things 
> like the Felix Gogo shell and the Felix Web console, which are good for 
> debugging, but should not be available when resolving/deploying a production 
> application.
>  
> To answer your original question, therefore:
>  
> It seems, if we create an OSGI project we have to add dependencies to maven 
> or gradle pom.xml/build.gradle, then we have to add dependencies to our composite 
> application .bndrun file.
>  
> Adding the dependencies to your pom.xml is a necessary step. This is because 
> the transitive dependency graph of your project determines the set of bundles 
> that are available to the resolve and export operations. Also adding a 
> dependency to your pom.xml does not require anything to be added to your 
> bndrun file (although it may change the output of your next call to the 
> resolver).
>  
> The requirement that you add to your bndrun is different and is not about 
> declaring dependencies. The bndrun runrequire instruction exists to define 
> the outer surface and structure of your application, which can be further 
> fine-tuned using blacklisting and other instructions. The closest analogy in 
> a non-OSGi project would be an assembly descriptor for the maven-assembly 
> plugin. The purpose of the bndrun is therefore to define the complete set of 
> bundles that make up your application, with the bnd-resolver-maven-plugin using 
> your run requirements to generate that list for you. In general your run 
> requirements list should be very small (often only one requirement), and not 
> need updating even if the pom dependencies change.
>  
> Hopefully this explanation helps you to see the difference between the two 
> files and what they are trying to achieve.
>  
> Best Regards,
>  
> Tim
> 
> 
> On 27 Jul 2018, at 15:37, BJ Hargrave via osgi-dev  > wrote:
>  
> Well you, the developer, write the requirements (-runrequire) and the 
> bnd-resolver-maven-plugin can resolve these requirements into a set of 
> bundles (-runbundles). The resolver process will have access to the maven 
> project dependencies (via a FileSetRepository) to resolve the requirements 
> into a set of bundles.
>  
> See 
> https://github.com/osgi/osgi.enroute/tree/master/examples/microservice/rest-service-test
>  
> .
>  This project has a bndrun with -runrequires and the pom configures the 
> bnd-resolver-maven-plugin for the project. 
> 

Re: [osgi-dev] OSGI enroute bndrun file

2018-07-27 Thread Tim Ward via osgi-dev
Note that BJ is referencing one possible way to use the 
bnd-resolver-maven-plugin and bnd-export-maven-plugin.

The enRoute projects set up their bndrun files to use standalone indexes 
generated by the bnd-indexer-maven-plugin. These indexes are generated using 
the dependencies from the pom file of the project. The benefit of this solution 
is that indexes can be generated for different dependency scopes, for example a 
testing index is generated which includes test-scoped dependencies. This index 
is then used by the debug bndrun to include things like the Felix Gogo shell 
and the Felix Web console, which are good for debugging, but should not be 
available when resolving/deploying a production application.

To answer your original question, therefore:

> It seems, if we create an OSGI project we have to add dependencies to maven 
> or gradle pom.xml/build.gradle, then we have to add dependencies to our composite 
> application .bndrun file.
> 

Adding the dependencies to your pom.xml is a necessary step. This is because 
the transitive dependency graph of your project determines the set of bundles 
that are available to the resolve and export operations. Also adding a 
dependency to your pom.xml does not require anything to be added to your bndrun 
file (although it may change the output of your next call to the resolver).

The requirement that you add to your bndrun is different and is not about 
declaring dependencies. The bndrun runrequire instruction exists to define the 
outer surface and structure of your application, which can be further 
fine-tuned using blacklisting and other instructions. The closest analogy in a 
non-OSGi project would be an assembly descriptor for the maven-assembly plugin. 
The purpose of the bndrun is therefore to define the complete set of bundles 
that make up your application, with the bnd-resolver-maven-plugin using your run 
requirements to generate that list for you. In general your run requirements 
list should be very small (often only one requirement), and not need updating 
even if the pom dependencies change.

Hopefully this explanation helps you to see the difference between the two 
files and what they are trying to achieve.

Best Regards,

Tim

> On 27 Jul 2018, at 15:37, BJ Hargrave via osgi-dev  
> wrote:
> 
> Well you, the developer, write the requirements (-runrequire) and the 
> bnd-resolver-maven-plugin can resolve these requirements into a set of 
> bundles (-runbundles). The resolver process will have access to the maven 
> project dependencies (via a FileSetRepository) to resolve the requirements 
> into a set of bundles.
>  
> See 
> https://github.com/osgi/osgi.enroute/tree/master/examples/microservice/rest-service-test
>  
> .
>  This project has a bndrun with -runrequires and the pom configures the 
> bnd-resolver-maven-plugin for the project. See 
> https://enroute.osgi.org/examples/020-examples-microservice.html#the-integration-testbndrun
>  
> 
>  for some information on this.
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
>  
>  
> - Original message -
> From: "Matthews, Kevin" 
> To: BJ Hargrave 
> Cc: "osgi-dev@mail.osgi.org" 
> Subject: RE: [osgi-dev] OSGI enroute bndrun file
> Date: Fri, Jul 27, 2018 10:21 AM
>  
> Thanks BJ. So, there is a bnd plugin referencing the FileSetRepository using 
> the new enroute to write bundle(s) run time requirements to the bndrun file? 
> Because I still have add my bundle dependencies/versions in this file. I 
> think there are bnd plugins that resolve and validates the component 
> dependencies for wiring.
> 
>  
> 
> From: BJ Hargrave [mailto:hargr...@us.ibm.com]
> Sent: Friday, July 27, 2018 10:04 AM
> To: Matthews, Kevin
> Cc: osgi-dev@mail.osgi.org
> Subject: RE: [osgi-dev] OSGI enroute bndrun file
> 
>  
> 
> Well enRoute is currently maven based and uses the Bnd maven plugins which 
> themselves use the FileSetRepository. One of the links was to one of the Bnd 
> maven plugins using it. So you don't need to do anything to use this support. 
> It is part of the maven and gradle plugin's support.
> 
>  
> 
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
> 
>  
> 
>  
> 
> - Original message -
> From: "Matthews, Kevin" 
> To: BJ Hargrave , "osgi-dev@mail.osgi.org" 
> 
> Cc:
> Subject: RE: [osgi-dev] OSGI enroute bndrun file
> Date: Fri, Jul 27, 2018 8:31 AM
>  
> 
> How do we use these java and groovy file in enroute OSGI project?
> 
> For, gradle project do we reference FileSetRepositoryConvention  as task in 
> the build.gradle
> 
>  
> 
> From: BJ 

Re: [osgi-dev] Possible issue

2018-07-26 Thread Tim Ward via osgi-dev
Hi Julio,

I’m not quite sure that I follow your problem. The quickstart tutorial uses the 
project template, and creates the following module structure:

quickstart/
  |
  |- app/
  | |
  | |-pom.xml
  |
  |- impl/
  |  |
  |  |- pom.xml
  |
  |- pom.xml

The artifactId “quickstart” is given to the reactor pom, which is also used as 
the parent. The other modules are called “app” and “impl” as per the project 
template. This does not result in any collisions.

My guess is that you are doing something else and not using the project 
archetype to create the quickstart example.

Best Regards,

Tim Ward

> On 26 Jul 2018, at 13:25, Julio Calvo via osgi-dev  
> wrote:
> 
> Dear friends from OSG enRoute,
>  
> In https://enroute.osgi.org/tutorial/020-tutorial_qs.html 
>  there is a possible 
> issue, or at least I can’t find a way to follow the tutorial unless I change 
> a small part of the project setup you explain. 
>  
> In Recreating Quick Start – Project Setup: Define value for property 
> 'artifactId': quickstart
>  
> If the user writes quickstart following your suggestion, there will be a 
> problem with collisions in the pom.xml files between parentId and artifactId 
> (they will have the same name), taking into account that the folder names will 
> be the same: quickstart/quickstart.
>  
> Please, let me know if I am talking nonsense.
>  
> Many thanks for your help and time.
>  
>  
>  
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Dealing with bouncing

2018-07-23 Thread Tim Ward via osgi-dev
An alternative way to look at this is simply that your DS component needs to be 
sufficiently thread-safe to deal with the consequences of its own internal 
threading model. In this case:

> What would be a good (and simple) strategy to handle this type of 
> long-running configuration, where the configuration is in a different thread 
> and depends on services that may come and go?

Long-running asynchronous activation isn’t where DS really shines for several 
reasons:

- There’s no way to tell DS that your component activation is asynchronous. Once 
  the activate method returns then DS will give your object to the client bundle. 
  As you’ve already discovered, Promises can make this much simpler to deal with.
- As you’ve seen, the DS “happens before” model is based on when your activate 
  method returns. Once you go into another thread you have to deal with the 
  consequences of that.
- If your long-running activation fails, then it’s too late to tell DS that the 
  component is broken. To be safe here you really need to disable your component 
  on failure.

That said, it is perfectly possible to activate asynchronously in a safe way. 
It’s just very important to signal to any thread(s) that you started that they 
need to stop and abandon their work. Thread.interrupt() is useful here, as is 
having an AtomicBoolean “closed” flag in your component that you set to true during 
deactivation. These states should be checked regularly during startup, and the 
“startup thread" should be prepared to abandon any work it has done if/when it 
has become invalid.
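
Putting those pieces together, a sketch of a defensively written asynchronous activation (the names are illustrative):

    import java.util.concurrent.atomic.AtomicBoolean;

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Deactivate;

    @Component
    public class SlowStartComponent {

        private final AtomicBoolean closed = new AtomicBoolean();
        private volatile Thread startupThread;

        @Activate
        void activate() {
            startupThread = new Thread(this::slowStart, "slow-start");
            startupThread.start();
        }

        @Deactivate
        void deactivate() {
            closed.set(true);
            startupThread.interrupt();
        }

        private void slowStart() {
            for (int step = 0; step < 100; step++) {
                // Check regularly whether the component has been deactivated
                if (closed.get() || Thread.currentThread().isInterrupted()) {
                    return; // abandon the remaining startup work
                }
                doOneStartupStep(step);
            }
        }

        private void doOneStartupStep(int step) {
            // one unit of the long-running configuration work
        }
    }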

Best Regards,

Tim

> On 23 Jul 2018, at 08:00, Peter Kriens via osgi-dev  
> wrote:
> 
> 
> 
> Ok … on the top of my head …
> 
> 
>   public interface Bar {
>   void m1();
>   void m2();
>   }
> 
>   @Component 
>   public class BarImpl implements Bar {
>   Deferred<Bar> delegate = new Deferred<>();
> 
>   @Reference
>   void setExec( Executor e ) {
>   delegate.resolve( new BarImpl2(e) );
>   }
> 
> 
>   public void m1() {
>   delegate.getPromise().thenAccept( Bar::m1 );
>   }   
> 
>   public void m2() {
>   delegate.getPromise().thenAccept( Bar::m2 );
>   }   
>   }
> 
> This works for you?
> 
> Kind regards,
> 
>   Peter Kriens
> 
> 
>   
>> On 22 Jul 2018, at 22:51, David Leangen via osgi-dev 
>>  wrote:
>> 
>> 
>> Hi Peter,
>> 
>> Thanks for the tip.
>> 
>> I’m not quite getting it. Would you be able to direct me to an example?
>> 
>> Thanks!
>> =David
>> 
>> 
>> 
>>> On Jul 22, 2018, at 21:49, Peter Kriens  wrote:
>>> 
>>> In some cases (when the extra complexity was warranted) I let the component 
>>> class act as a proxy to a delegate. I then get the delegate from a  
>>> Promise. So you just forward every method in your service interface to the 
>>> delegate. There is a function in Eclipse that will create the delegation 
>>> methods.
>>> 
>>> In general you want to afford this complexity and for example use a simple 
>>> init() method that blocks until init is done. However, the delegate has 
>>> some nice qualities if you switch more often than just at init.
>>> 
>>> Kind regards,
>>> 
>>> Peter Kriens
>>> 
 On 22 Jul 2018, at 10:35, David Leangen via osgi-dev 
  wrote:
 
 
 Hi,
 
 This may be more of a basic Java question, but I’ll ask it anyway because 
 it relates to “bouncing” and the handling of dynamic behavior.
 
 In my @Activate method, I configure my component. Since the configuration 
 may be long-running (data is retrieved remotely), I use a Promise. But, 
 the component is available before it is actually “ready”. So far, this has 
 not been a problem.
 
 It looks something like this:
 
 @Reference private Store dataStore;
 
 @Activate
 void activate() {
 configure(dataStore);
 }
 
 void configure(Store withDataStore) {
 // Configuration is set up via a Promise, using a data store to retrieve 
 the data
 }
 
 However, because there is some bouncing occurring, I think what is 
 happening is that configure() starts running in a different thread, but in 
 the meantime the reference to the dataStore is changed. The error log 
 shows that the data store is in an impossible state. After following a 
 hunch, I could confirm that the configureData process is running on a data 
 store service that was deactivated during bouncing.
 
 What would be a good (and simple) strategy to handle this type of 
 long-running configuration, where the configuration is in a different 
 thread and depends on services that may come and go?
 
 
 Note: in the end, the component gets configured and the application runs, 
 but I would still like to be able to handle this situation properly.
 
 
 Thanks!
 

Re: [osgi-dev] Service binding order

2018-07-18 Thread Tim Ward via osgi-dev
Promises are great, and should be used much more often! 

Note that if you want to have more control of the threading then you can 
instantiate a PromiseFactory with the executor that should be used to run 
callbacks. In this case, for example, you may wish to use an Immediate executor 
(available as a static method on PromiseFactory) to ensure that the callbacks 
are always run without a thread switch.
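
A small sketch of that, assuming the Promise 1.1 API where the static method is PromiseFactory.inlineExecutor():

    // types from org.osgi.util.promise
    PromiseFactory factory = new PromiseFactory(PromiseFactory.inlineExecutor());
    Deferred<Executor> deferred = factory.deferred();

    // The callback runs inline on whichever thread resolves the deferred,
    // so there is no thread switch
    deferred.getPromise().thenAccept(executor -> {
        // use the executor
    });

    // Runnable::run is itself a valid (direct) Executor, used here only as an example
    deferred.resolve(Runnable::run);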

Best Regards,

Tim

> On 18 Jul 2018, at 07:50, David Leangen via osgi-dev  
> wrote:
> 
> 
> Brilliant! Thank you. :-)
> 
> 
>> On Jul 18, 2018, at 14:46, Peter Kriens > > wrote:
>> 
>> A really elegant solution to these problems is to use a Promise …
>> 
>> 1) Create a Deferrred
>> 2) Execute your item code through the promise of the deferred
>> 3) When the Executor reference is set, you resolve the deferred
>> 
>> 
>>  @Component
>>  public class Foo {
>>  Deferred<Executor> deferred = new Deferred<>();
>> 
>>  @Reference
>>  void setExecutor( Executor e) { deferred.resolve(e); }
>> 
>>  @Reference( multiple/dynmaic) 
>>  void addItem( Item item) {
>>  deferred.getPromise().thenAccept ( executor -> … )
>>  }
>>  }
>>  
>> This will automatically process your items after the executor is set. It 
>> think it also easily extends to multiple dependencies but would have to 
>> puzzle a bit. If you’re unfamiliar with Promises, I’ve written an app note, 
>> ehh blog, recently about 1.1 Promises   
>> http://aqute.biz/2018/06/28/Promises.html 
>> . They really shine in these 
>> ordering issues.
>> 
>> Kind regards,
>> 
>>  Peter Kriens
>> 
>> 
>> 
>>> On 18 Jul 2018, at 00:16, David Leangen via osgi-dev 
>>> mailto:osgi-dev@mail.osgi.org>> wrote:
>>> 
>>> 
>>> Hi!
>>> 
>>> I have a component that acts a bit like a whiteboard provider. It looks 
>>> something like this:
>>> 
>>> public class MyWhiteboard
>>> {
>>>  boolean isActive;
>>> 
>>>  @Reference MyExecutor executor; // Required service to execute on an Item
>>> 
>>>  @Reference(multiple/dynamic)
>>>  void bindItem( Item item )
>>>  {
>>>if (isActive)
>>>  // add the Item
>>>else
>>>  // Store the item to be added once this component is activated
>>>  }
>>> 
>>>  void unbindItem( Item item )
>>>  {
>>>// Remove the item
>>>  }
>>> 
>>>  @Activate
>>>  void activate()
>>>  {
>>>// execute non-processed Items
>>>isActive = true;
>>>  }
>>> }
>>> 
>>> The MyExecutor must be present before an Item can be processed, but there 
>>> is no guarantee as to the binding order. All I can think of doing is 
>>> ensuring that the Component is Activated before processing.
>>> 
>>> My question is: is there a more elegant / simpler / less error prone way of 
>>> accomplishing this?
>>> 
>>> 
>>> Thanks!
>>> =David
>>> 
>>> 
>>> ___
>>> OSGi Developer Mail List
>>> osgi-dev@mail.osgi.org 
>>> https://mail.osgi.org/mailman/listinfo/osgi-dev 
>>> 
>> 
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev

Re: [osgi-dev] Tool/API to analyze component dependencies

2018-07-17 Thread Tim Ward via osgi-dev
Hi Alain

It’s not possible to model this accurately at build time because services may 
come and go at runtime, and their service properties may change with 
configuration. Added to this it is also possible to change the target filters 
on references which changes/restricts what they may bind to.

Another potential pitfall with the static approach is that not all services are 
provided with DS. At the very least you would also need to analyse the 
Provide-Capability header to find other services.

In summary, the best that you can do with static analysis is to come up with a 
guess of how things might wire. The Runtime DTOs will tell you how things have 
actually wired.

Best Regards,

Tim

> On 17 Jul 2018, at 13:14, BJ Hargrave via osgi-dev  
> wrote:
> 
> Yes, that is at runtime.
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com
>  
>  
> - Original message -
> From: Alain Picard 
> To: hargr...@us.ibm.com
> Cc: osgi-dev@mail.osgi.org
> Subject: Re: [osgi-dev] Tool/API to analyze component dependencies
> Date: Tue, Jul 17, 2018 8:07 AM
>  
> Thanks BJ, from what I see that will do the trick, expect that it is runtime, 
> but I can live with that.
>  
> Alain
>  
> On Tue, Jul 17, 2018 at 7:58 AM BJ Hargrave  > wrote:
> Look at the ServiceComponentRuntime service: 
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.component.html#service.component-introspection
>  
> 
>  
> It provides access to DTOs which describe each component description, 
> ComponentDescriptionDTO, and actual component instances, 
> ComponentConfigurationDTO. You can follows the ReferenceDTOs to see the 
> dependency graph.
>  
>  
> --
> 
> BJ Hargrave
> Senior Technical Staff Member, IBM // office: +1 386 848 1781
> OSGi Fellow and CTO of the OSGi Alliance // mobile: +1 386 848 3788
> hargr...@us.ibm.com 
>  
>  
> - Original message -
> From: Alain Picard via osgi-dev  >
> Sent by: osgi-dev-boun...@mail.osgi.org 
> 
> To: osgi-dev@mail.osgi.org 
> Cc:
> Subject: [osgi-dev] Tool/API to analyze component dependencies
> Date: Tue, Jul 17, 2018 6:27 AM
>  
> As I'm going through our migration to DS I am in need of understanding our 
> component "graph" and to make sure there are no cycles. For the core ones, I 
> manually created of small dot file from the @Reference and used graphviz to 
> render.
>  
> Now I am looking for some API to automate the process, at dev time if 
> possible. AFAIK, DS will read the component.xml and create a registry with 
> all info. Can I make use of this resolution to grab all the info that I need?
>  
> Thanks
> Alain
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org 
> https://mail.osgi.org/mailman/listinfo/osgi-dev 
> 
>  
>  
> 
> ___
> OSGi Developer Mail List
> osgi-dev@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev

___
OSGi Developer Mail List
osgi-dev@mail.osgi.org
https://mail.osgi.org/mailman/listinfo/osgi-dev
