Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Tim Ward via osgi-dev
So having dug further, the following is sufficient:

Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
        .withExecutor(Executors.newFixedThreadPool(2))
        .build()
        .asyncMap(2, 0, new StreamDebouncer(
                new PromiseFactory(PromiseFactory.inlineExecutor()),
                10, Duration.ofSeconds(1)))
        .filter(c -> !c.isEmpty())
        .collect(Collectors.toList());

Note the “withExecutor” call. The default executor has only one thread (because 
the default parallelism is 1), and this single thread is used to send events down 
the pipe. In this case the close event is sent and reaches the asyncMap operation. 
The close event then waits for the promises from the ongoing asyncMaps to complete.

In a separate thread the final debounce window is closed because it times out. 
This triggers the gathering of the batch of data, which is then sent on to the 
rest of the stream using the sending thread - the very thread that is blocked 
trying to send the close event! This deadlocks because the data cannot be sent 
until the sender thread is available, and the sender thread is waiting for the 
data event to have been sent before it can continue.

The simple fix is to provide a second thread in the executor. This does not 
result in events arriving out of order because the parallelism of the stream is 
still one. This deadlock is also only triggered by terminal events, which are 
the only kind that can block a sender thread, so no data is harmed. Also, any 
number of sender threads greater than one resolves the problem regardless of 
the applied parallelism.
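
For example, an equivalent setup (illustrative sketch only, with psp and sps as in 
the test quoted further down the thread) would be:

PushStream<Integer> stream = psp.buildStream(sps)
        .withExecutor(Executors.newCachedThreadPool()) // any pool with more than one thread avoids the deadlock
        .withParallelism(1)                            // 1 is already the default, so events stay in order
        .build();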

I hope this helps,

Tim

> On 6 Feb 2019, at 14:57, Tim Ward  wrote:
> 
> Hi Alain,
> 
> Having applied some step-by-step debugging, the issue I can see is actually a 
> deadlock in the async worker pool - we call asyncMap with a parallelism of 2, 
> but we use the default executor from the push stream, which has only one 
> thread.
> 
> If the timing is wrong then we end up with the final debounce window timing 
> out and trying to complete while the one pushing thread is blocked waiting 
> for the timeout to be sent.
> 
> I’ll dig a little further, but the probable answer is just that one of the 
> thread pools needs an extra thread (or possibly that the inline executor 
> needs to be changed).
> 
> Best Regards,
> 
> Tim
> 
>> On 6 Feb 2019, at 14:36, Alain Picard wrote:
>> 
>> Jurgen,
>> 
>> Thanks for the clarifications. I also made about the same change to match our 
>> standard stream implementation, and didn't see the issue, but I was too much 
>> in the dark to draw any meaningful conclusions.
>> 
>> I was looking to modify the test to verify various scenarios. So I guess 
>> I'll make those changes and proceed from there.
>> 
>> Thanks
>> Alain
>> 
>> 
>> On Wed, Feb 6, 2019 at 9:21 AM Jürgen Albert wrote:
>> Creating the PushStream with an exclusive Executor fixes the Problem.
>> 
>> Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
>>         .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
>>         .withExecutor(Executors.newCachedThreadPool()).build()
>>         .asyncMap(5, 0, new StreamDebouncer(
>>                 new PromiseFactory(PromiseFactory.inlineExecutor()),
>>                 10, Duration.ofSeconds(1)))
>>         .filter(c -> !c.isEmpty())
>>         .collect(Collectors.toList());
>> 
>> On 06/02/2019 at 15:14, Jürgen Albert wrote:
>>> Hi Alain,
>>> 
>>> the issue has a couple of reasons:
>>> 
>>> The PushStream and the event source each have, by default, a queue with a 
>>> size of 32, and the default pushback policy is linear. This means that once 
>>> 10 events are in the buffer, a pushback of 10 ms is given to the event 
>>> source, which then waits that long before it sends the next event 
>>> downstream. This default behaviour can cause long processing times, 
>>> especially when a lot of events are written in a for loop: the queues fill 
>>> up very quickly even if your actual processing time is close to 0. Use e.g. 
>>> the ON_FULL_FIXED policy to get rid of this problem.
>>> 
>>> As far as I understand the debouncer, it waits a second before it returns a 
>>> list if no new events are coming. Thus a sleep time of less than a second, 
>>> or even 1.5 seconds (together with the pushback behaviour I described 
>>> before), will keep the stream busy for longer than your sleep time per 
>>> batch. Thus all the batches return when they hit the max size, except the 
>>> last one. That one waits, and for some threading reason the last deferred 
>>> is blocked from resolving, which in turn blocks the event source close. If 
>>> you add a small wait before close is called, everything is fine. 

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Tim Ward via osgi-dev
Hi Alain,

Having applied some step-by-step debugging, the issue I can see is actually a 
deadlock in the async worker pool - we call asyncMap with a parallelism of 2, 
but we use the default executor from the push stream, which has only one thread.

If the timing is wrong then we end up with the final debounce window timing out 
and trying to complete while the one pushing thread is blocked waiting for the 
timeout to be sent.

I’ll dig a little further, but the probable answer is just that one of the 
thread pools needs an extra thread (or possibly that the inline executor needs 
to be changed).
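
For illustration, the second of those options would look something like this (a 
sketch only - the follow-up at the top of this thread went with the extra stream 
thread instead):

// Give the debouncer's PromiseFactory its own thread rather than the inline
// executor, so resolving the timed-out final window never has to run on the
// single (and possibly blocked) push stream thread.
new StreamDebouncer(
        new PromiseFactory(Executors.newSingleThreadExecutor()),
        10, Duration.ofSeconds(1));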

Best Regards,

Tim

> On 6 Feb 2019, at 14:36, Alain Picard  wrote:
> 
> Jurgen,
> 
> Thanks for the clarifications. I also made about the same change to match our 
> standard stream implementation, and didn't see the issue, but I was too much 
> in the dark to draw any meaningful conclusions.
> 
> I was looking to modify the test to verify various scenarios. So I guess I'll 
> make those changes and proceed from there.
> 
> Thanks
> Alain
> 
> 
> On Wed, Feb 6, 2019 at 9:21 AM Jürgen Albert wrote:
> Creating the PushStream with an exclusive Executor fixes the Problem.
> 
> Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
>         .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
>         .withExecutor(Executors.newCachedThreadPool()).build()
>         .asyncMap(5, 0, new StreamDebouncer(
>                 new PromiseFactory(PromiseFactory.inlineExecutor()),
>                 10, Duration.ofSeconds(1)))
>         .filter(c -> !c.isEmpty())
>         .collect(Collectors.toList());
> 
> On 06/02/2019 at 15:14, Jürgen Albert wrote:
>> Hi Alain,
>> 
>> the issue has a couple of reasons:
>> 
>> The PushStream and the event source each have, by default, a queue with a 
>> size of 32, and the default pushback policy is linear. This means that once 
>> 10 events are in the buffer, a pushback of 10 ms is given to the event 
>> source, which then waits that long before it sends the next event downstream. 
>> This default behaviour can cause long processing times, especially when a lot 
>> of events are written in a for loop: the queues fill up very quickly even if 
>> your actual processing time is close to 0. Use e.g. the ON_FULL_FIXED policy 
>> to get rid of this problem.
>> 
>> As far as I understand the debouncer, it waits a second before it returns a 
>> list if no new events are coming. Thus a sleep time of less than a second, or 
>> even 1.5 seconds (together with the pushback behaviour I described before), 
>> will keep the stream busy for longer than your sleep time per batch. Thus all 
>> the batches return when they hit the max size, except the last one. That one 
>> waits, and for some threading reason the last deferred is blocked from 
>> resolving, which in turn blocks the event source close. If you add a small 
>> wait before close is called, everything is fine. 
>> 
>> The blocking issue is interesting nonetheless, but my experience is that 
>> these kinds of tests are often harsher than reality.
>> 
>> Regards, 
>> 
>> Jürgen.
>> 
>> On 05/02/2019 at 23:58, Alain Picard wrote:
>>> Tim,
>>> 
>>> Finally got around to this debouncer, and I tested changing the sleep 
>>> time. When I set it to something like 800 to 1500 ms, it never completes 
>>> after showing "Closing the Generator". At 500 I get a Queue full, which I 
>>> can understand. So why the hang?
>>> 
>>> Alain
>>> 
>>> 
>>> 
>>> On Mon, Jan 7, 2019 at 8:11 AM Tim Ward wrote:
>>> This use case is effectively a “debouncing” behaviour, which is possible to 
>>> implement with a little thought.
>>> 
>>> There are a couple of ways to attempt it. This one uses the asyncMap 
>>> operation to asynchronously gather the events until it either times out the 
>>> promise or it hits the maximum stream size. Note that you have to filter 
>>> out the “empty” lists that are used to resolve the promises which are being 
>>> aggregated into the window. The result of this is a window which starts on 
>>> the first event arrival and then buffers the events for a while. The next 
>>> window isn’t started until the next event
>>> 
>>> 
>>> Best Regards,
>>> 
>>> Tim
>>> 
>>> 
>>> @Test
>>> public void testWindow2() throws InvocationTargetException, 
>>> InterruptedException {
>>> 
>>> PushStreamProvider psp = new PushStreamProvider();
>>> 
>>> SimplePushEventSource<Integer> sps = psp.createSimpleEventSource(Integer.class);
>>> 
>>> Promise<List<Collection<Integer>>> result = psp.createStream(sps)
>>> .asyncMap(2, 0, new StreamDebouncer(
>>> new 
>>> PromiseFactory(PromiseFactory.inlineExecutor()), 
>>> 10, Duration.ofSeconds(1)))
>>> .filter(c -> 

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Alain Picard via osgi-dev
Jurgen,

Thanks for the clarifications. I also made about the same change to match
our standard stream implementation, and didn't see the issue, but I was too
much in the dark to draw any meaningful conclusions.

I was looking to modify the test to verify various scenarios. So I guess
I'll make those changes and proceed from there.

Thanks
Alain


On Wed, Feb 6, 2019 at 9:21 AM Jürgen Albert 
wrote:

> Creating the PushStream with an exclusive Executor fixes the Problem.
>
> Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
>         .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
>         .withExecutor(Executors.newCachedThreadPool()).build()
>         .asyncMap(5, 0, new StreamDebouncer(
>                 new PromiseFactory(PromiseFactory.inlineExecutor()),
>                 10, Duration.ofSeconds(1)))
>         .filter(c -> !c.isEmpty())
>         .collect(Collectors.toList());
>
> On 06/02/2019 at 15:14, Jürgen Albert wrote:
>
> Hi Alain,
>
> the issue has a couple of reasons:
>
> The PushStream and the event source each have, by default, a queue with a size
> of 32, and the default pushback policy is linear. This means that once 10
> events are in the buffer, a pushback of 10 ms is given to the event source,
> which then waits that long before it sends the next event downstream. This
> default behaviour can cause long processing times, especially when a lot of
> events are written in a for loop: the queues fill up very quickly even if your
> actual processing time is close to 0. Use e.g. the ON_FULL_FIXED policy to get
> rid of this problem.
>
> As far as I understand the debouncer, it waits a second before it returns a
> list if no new events are coming. Thus a sleep time of less than a second, or
> even 1.5 seconds (together with the pushback behaviour I described before),
> will keep the stream busy for longer than your sleep time per batch. Thus all
> the batches return when they hit the max size, except the last one. That one
> waits, and for some threading reason the last deferred is blocked from
> resolving, which in turn blocks the event source close. If you add a small
> wait before close is called, everything is fine.
>
> The blocking issue is interesting nonetheless, but my experience is that
> these kinds of tests are often harsher than reality.
>
> Regards,
>
> Jürgen.
>
> On 05/02/2019 at 23:58, Alain Picard wrote:
>
> Tim,
>
> Finally got around to this debouncer, and I tested changing the sleep
> time. When I set it to something like 800 to 1500 ms, it never completes after
> showing "Closing the Generator". At 500 I get a Queue full, which I can
> understand. So why the hang?
>
> Alain
>
>
>
> On Mon, Jan 7, 2019 at 8:11 AM Tim Ward  wrote:
>
>> This use case is effectively a “debouncing” behaviour, which is possible
>> to implement with a little thought.
>>
>> There are a couple of ways to attempt it. This one uses the asyncMap
>> operation to asynchronously gather the events until it either times out the
>> promise or it hits the maximum stream size. Note that you have to filter
>> out the “empty” lists that are used to resolve the promises which are being
>> aggregated into the window. The result of this is a window which starts on
>> the first event arrival and then buffers the events for a while. The next
>> window isn’t started until the next event
>>
>>
>> Best Regards,
>>
>> Tim
>>
>>
>> @Test
>> public void testWindow2() throws InvocationTargetException,
>> InterruptedException {
>>
>> PushStreamProvider psp = new PushStreamProvider();
>>
>> SimplePushEventSource<Integer> sps = psp.createSimpleEventSource(Integer.class);
>>
>> Promise<List<Collection<Integer>>> result = psp.createStream(sps)
>> .asyncMap(2, 0, new StreamDebouncer(
>> new PromiseFactory(PromiseFactory.inlineExecutor()),
>> 10, Duration.ofSeconds(1)))
>> .filter(c -> !c.isEmpty())
>> .collect(Collectors.toList());
>>
>> new Thread(() -> {
>>
>> for (int i = 0; i < 200;) {
>>
>> for (int j = 0; j < 23; j++) {
>> sps.publish(i++);
>> }
>>
>> try {
>> System.out.println("Burst finished, now at " + i);
>> Thread.sleep(2000);
>> } catch (InterruptedException e) {
>> sps.error(e);
>> break;
>> }
>> }
>>
>> System.out.println("Closing generator");
>> sps.close();
>>
>> }).start();
>>
>> System.out.println(result.getValue().toString());
>>
>> }
>>
>>
>> public static class StreamDebouncer<T> implements Function<T, Promise<? extends Collection<T>>> {
>>
>> private final PromiseFactory promiseFactory;
>> private final int maxSize;
>> private final Duration maxTime;
>>
>>
>> private final Object lock = new Object();
>>
>>
>> private List<T> currentWindow;
>> private Deferred<Collection<T>> currentDeferred;
>>
>> public StreamDebouncer(PromiseFactory promiseFactory, int maxSize,
>> Duration maxTime) {
>> this.promiseFactory = promiseFactory;
>> this.maxSize = maxSize;
>> this.maxTime = maxTime;
>> }
>>
>> @Override
>> public Promise<Collection<T>> apply(T t) throws Exception {
>>
>>
>> Deferred<Collection<T>> deferred = null;
>> 

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Jürgen Albert via osgi-dev

Creating the PushStream with an exclusive Executor fixes the Problem.

Promise<List<Collection<Integer>>> result = psp.buildStream(sps)
        .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
        .withExecutor(Executors.newCachedThreadPool()).build()
        .asyncMap(5, 0, new StreamDebouncer(
                new PromiseFactory(PromiseFactory.inlineExecutor()),
                10, Duration.ofSeconds(1)))
        .filter(c -> !c.isEmpty())
        .collect(Collectors.toList());

On 06/02/2019 at 15:14, Jürgen Albert wrote:

Hi Alain,

the issue has a couple of reasons:

The PushStream and the event source each have, by default, a queue with a 
size of 32, and the default pushback policy is linear. This means that once 
10 events are in the buffer, a pushback of 10 ms is given to the event 
source, which then waits that long before it sends the next event 
downstream. This default behaviour can cause long processing times, 
especially when a lot of events are written in a for loop: the queues fill 
up very quickly even if your actual processing time is close to 0. Use e.g. 
the ON_FULL_FIXED policy to get rid of this problem.


As far as I understand the debouncer, it waits a second before it returns a 
list if no new events are coming. Thus a sleep time of less than a second, 
or even 1.5 seconds (together with the pushback behaviour I described 
before), will keep the stream busy for longer than your sleep time per 
batch. Thus all the batches return when they hit the max size, except the 
last one. That one waits, and for some threading reason the last deferred is 
blocked from resolving, which in turn blocks the event source close. If you 
add a small wait before close is called, everything is fine.


The blocking issue is interesting nonetheless, but my experience is that 
these kinds of tests are often harsher than reality.


Regards,

Jürgen.

On 05/02/2019 at 23:58, Alain Picard wrote:

Tim,

Finally got around to this debouncer, and I tested changing the sleep 
time. When I set it to something like 800 to 1500 ms, it never completes 
after showing "Closing the Generator". At 500 I get a Queue full, which I 
can understand. So why the hang?


Alain



On Mon, Jan 7, 2019 at 8:11 AM Tim Ward wrote:


This use case is effectively a “debouncing” behaviour, which is
possible to implement with a little thought.

There are a couple of ways to attempt it. This one uses the
asyncMap operation to asynchronously gather the events until it
either times out the promise or it hits the maximum stream size.
Note that you have to filter out the “empty” lists that are used
to resolve the promises which are being aggregated into the
window. The result of this is a window which starts on the first
event arrival and then buffers the events for a while. The next
window isn’t started until the next event


Best Regards,

Tim


@Test
public void testWindow2() throws InvocationTargetException,
InterruptedException {

PushStreamProvider psp = new PushStreamProvider();

SimplePushEventSource<Integer> sps = psp.createSimpleEventSource(Integer.class);

Promise<List<Collection<Integer>>> result = psp.createStream(sps)
.asyncMap(2, 0, new StreamDebouncer(
new PromiseFactory(PromiseFactory.inlineExecutor()),
10, Duration.ofSeconds(1)))
.filter(c -> !c.isEmpty())
.collect(Collectors.toList());

new Thread(() -> {

for (int i = 0; i < 200;) {

for (int j = 0; j < 23; j++) {
sps.publish(i++);
}

try {
System.out.println("Burst finished, now at " + i);
Thread.sleep(2000);
} catch (InterruptedException e) {
sps.error(e);
break;
}
}

System.out.println("Closing generator");
sps.close();

}).start();

System.out.println(result.getValue().toString());

}


public static class StreamDebouncer<T> implements Function<T, Promise<? extends Collection<T>>> {

private final PromiseFactory promiseFactory;
private final int maxSize;
private final Duration maxTime;


private final Object lock = new Object();


private List<T> currentWindow;
private Deferred<Collection<T>> currentDeferred;

public StreamDebouncer(PromiseFactory promiseFactory, int
maxSize, Duration maxTime) {
this.promiseFactory = promiseFactory;
this.maxSize = maxSize;
this.maxTime = maxTime;
}

@Override
public Promise<Collection<T>> apply(T t) throws Exception {


Deferred<Collection<T>> deferred = null;
Collection<T> list = null;
boolean hitMaxSize = false;
synchronized (lock) {
    if (currentWindow == null) {
        currentWindow = new ArrayList<>(maxSize);
        currentDeferred = promiseFactory.deferred();
        deferred = currentDeferred;
        list = currentWindow;
    }
    currentWindow.add(t);
    if (currentWindow.size() == maxSize) {
        hitMaxSize = true;
        deferred = currentDeferred;
        currentDeferred = null;
        list = currentWindow;
        currentWindow = null;
    }
}


if (deferred 

Re: [osgi-dev] Push Streams Event grouping and batching

2019-02-06 Thread Jürgen Albert via osgi-dev

Hi Alain,

the issue has a couple of reasons:

The PushStream and the event source each have, by default, a queue with a 
size of 32, and the default pushback policy is linear. This means that once 10 
events are in the buffer, a pushback of 10 ms is given to the event source, 
which then waits that long before it sends the next event downstream. This 
default behaviour can cause long processing times, especially when a lot of 
events are written in a for loop: the queues fill up very quickly even if your 
actual processing time is close to 0. Use e.g. the ON_FULL_FIXED policy to get 
rid of this problem.
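
For example (a sketch of just the buffer configuration, with psp and sps as in 
the test below), that would be:

PushStream<Integer> stream = psp.buildStream(sps)
        .withBuffer(new ArrayBlockingQueue<PushEvent<? extends Integer>>(32))
        // only push back (here 10 ms) once the buffer is actually full,
        // instead of backing off linearly as it fills up
        .withPushbackPolicy(PushbackPolicyOption.ON_FULL_FIXED, 10)
        .build();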


As far as I understand the debouncer, it waits a second before it returns a 
list if no new events are coming. Thus a sleep time of less than a second, or 
even 1.5 seconds (together with the pushback behaviour I described before), 
will keep the stream busy for longer than your sleep time per batch. Thus all 
the batches return when they hit the max size, except the last one. That one 
waits, and for some threading reason the last deferred is blocked from 
resolving, which in turn blocks the event source close. If you add a small 
wait before close is called, everything is fine.
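
In the test that just means something like this (sketch):

try {
    // give the final 1 s debounce window time to time out and resolve
    Thread.sleep(1500);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
System.out.println("Closing generator");
sps.close();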


The blocking issue is interesting nonetheless, but my experience is that 
these kinds of tests are often harsher than reality.


Regards,

Jürgen.

On 05/02/2019 at 23:58, Alain Picard wrote:

Tim,

Finally got around to this debouncer, and I tested changing the sleep 
time. When I set it to something like 800 to 1500 ms, it never completes after 
showing "Closing the Generator". At 500 I get a Queue full, which I can 
understand. So why the hang?


Alain



On Mon, Jan 7, 2019 at 8:11 AM Tim Ward wrote:


This use case is effectively a “debouncing” behaviour, which is
possible to implement with a little thought.

There are a couple of ways to attempt it. This one uses the
asyncMap operation to asynchronously gather the events until it
either times out the promise or it hits the maximum stream size.
Note that you have to filter out the “empty” lists that are used
to resolve the promises which are being aggregated into the
window. The result of this is a window which starts on the first
event arrival and then buffers the events for a while. The next
window isn’t started until the next event


Best Regards,

Tim


@Test
public void testWindow2() throws InvocationTargetException,
InterruptedException {

PushStreamProvider psp = new PushStreamProvider();

SimplePushEventSource<Integer> sps = psp.createSimpleEventSource(Integer.class);

Promise<List<Collection<Integer>>> result = psp.createStream(sps)
.asyncMap(2, 0, new StreamDebouncer(
new PromiseFactory(PromiseFactory.inlineExecutor()),
10, Duration.ofSeconds(1)))
.filter(c -> !c.isEmpty())
.collect(Collectors.toList());

new Thread(() -> {

for (int i = 0; i < 200;) {

for (int j = 0; j < 23; j++) {
sps.publish(i++);
}

try {
System.out.println("Burst finished, now at " + i);
Thread.sleep(2000);
} catch (InterruptedException e) {
sps.error(e);
break;
}
}

System.out.println("Closing generator");
sps.close();

}).start();

System.out.println(result.getValue().toString());

}


public static class StreamDebouncer<T> implements Function<T, Promise<? extends Collection<T>>> {

private final PromiseFactory promiseFactory;
private final int maxSize;
private final Duration maxTime;


private final Object lock = new Object();


private List<T> currentWindow;
private Deferred<Collection<T>> currentDeferred;

public StreamDebouncer(PromiseFactory promiseFactory, int maxSize,
Duration maxTime) {
this.promiseFactory = promiseFactory;
this.maxSize = maxSize;
this.maxTime = maxTime;
}

@Override
public Promise<Collection<T>> apply(T t) throws Exception {


Deferred<Collection<T>> deferred = null;
Collection<T> list = null;
boolean hitMaxSize = false;
synchronized (lock) {
    if (currentWindow == null) {
        currentWindow = new ArrayList<>(maxSize);
        currentDeferred = promiseFactory.deferred();
        deferred = currentDeferred;
        list = currentWindow;
    }
    currentWindow.add(t);
    if (currentWindow.size() == maxSize) {
        hitMaxSize = true;
        deferred = currentDeferred;
        currentDeferred = null;
        list = currentWindow;
        currentWindow = null;
    }
}


if (deferred != null) {
    if (hitMaxSize) {
        // We must resolve this way round to avoid racing
        // the timeout and ending up with empty lists in
        // all the promises
        deferred.resolve(Collections.emptyList());
        return promiseFactory.resolved(list);
    } else {
        final Collection<T> finalList = list;
        return deferred.getPromise()
                .timeout(maxTime.toMillis())
                .recover(x -> {
                    synchronized (lock) {
                        if (currentWindow == finalList) {
                            currentWindow = null;
                            currentDeferred = null;
                            return finalList;
                        }
                    }
                    return
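                            // [reconstruction: the archived message is cut off at
                            // this point - the remainder below follows from the
                            // logic above and is not quoted from the original mail]
                            Collections.emptyList();
                });
    }
}

// A call that merely joined an existing window resolves immediately with an
// empty list, which the downstream .filter(c -> !c.isEmpty()) discards.
return promiseFactory.resolved(Collections.emptyList());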

Re: [osgi-dev] Remote service (thread) context properties?

2019-02-06 Thread Tim Ward via osgi-dev
Hi Bernd,

What you’re asking for isn’t a required part of the RSA standard, which means 
that providers don’t have to offer it. There is, however, room for it to exist 
within the standard.

OSGi Remote Services (and Remote Service Admin) define the concept of “intents” 
which are additional features or qualities of service that a distribution 
provider can offer. An example of an existing intent is the “osgi.async” 
intent. This intent is used to indicate that the distribution provider can 
handle asynchronous return types such as OSGi Promises and Java futures. OSGi 
services that need to be remoted can then require that the distribution 
provider offer the intent by advertising the “service.exported.intents” 
property in addition to the service.exported.interfaces property.

What you’re asking for is therefore an intent which provides security context 
along with the call. I’m not aware of any distribution provider that does this, 
but it would be possible for them to add it. If they did they should add an 
advertised intent indicating the support, and your service should then require 
the intent.
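
As a sketch of what requiring such an intent would look like from the service 
side (the "com.acme.security.context" intent name is invented for this example; 
a real distribution provider would document its own):

public interface GreetingService {
    Promise<String> greet(String name);
}

// DS component that is only exported by a distribution provider offering both
// the standard osgi.async intent and the hypothetical security-context intent.
@Component(property = {
        "service.exported.interfaces=*",
        "service.exported.intents=osgi.async",
        "service.exported.intents=com.acme.security.context"
})
public class GreetingServiceImpl implements GreetingService {
    @Override
    public Promise<String> greet(String name) {
        return Promises.resolved("Hello " + name);
    }
}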

I’m not aware of any implementations that support security context flow in this 
way currently.

Best Regards,

Tim

> On 6 Feb 2019, at 13:13, Bernd Eckenfels via osgi-dev wrote:
> 
> I guess most of it is thread-local; it would be good if extraction/marshalling, 
> transport, and demarshalling/setting on both ends could be enhanced with 
> interceptors.
> 
> But maybe a provider-specific interface is enough? Did you do it for Aries RSA 
> Fastbin?
> 
> Regards
> Bernd
> --
> http://bernd.eckenfels.net
>  
> From: Christian Schneider 
> Sent: Wednesday, February 6, 2019 2:07 PM
> To: Bernd Eckenfels; OSGi Developer Mail List
> Subject: Re: [osgi-dev] Remote service (thread) context properties?
>  
> JAAS is already standardised. So if the provider (like CXF SOAP or JAX-RS) 
> establishes a JAAS context on your thread then you can access it. I can 
> provide an example if you want.
> I think for open tracing there is also an API that can be used. 
> 
> I am not sure about the others like peer-address, audit, tenant and request 
> ids. 
> Do you have an idea how it can / should work in practice?
> 
> Christian
> 
> On Wed, 6 Feb 2019 at 03:08, Bernd Eckenfels via osgi-dev 
> <osgi-dev@mail.osgi.org> wrote:
> When I use a Remote Service for distributed OSGi application I would like my 
> provider to be able to implicitly pass some thread context like tracing IDs 
> and also a user authorization token.
> 
> The OSGi compendium talks about implementation specific security based on 
> codesigning, but not on thread identity (JAAS Context). Was there any plan to 
> add something, like an interceptor mechanism?
> 
> Some of it could be implementation specific, but some form of portable 
> endpoint binding access would be nice, like peer-address, jaas-context, 
> opentracing-id, maybe audit, tenant and request-ids?
> 
> I can enrich my services with a Map for most of it, however 
> then there is no reliable way for the provider to add/ensure some of its 
> protocol header properties and it hides the business interface under removing 
> parameters.
> 
> Regards
> Bernd
> --
> http://bernd.eckenfels.net 
> 
> 
> -- 
> -- 
> Christian Schneider
> http://www.liquid-reality.de 
> 
> Computer Scientist
> http://www.adobe.com 
> 


Re: [osgi-dev] Remote service (thread) context properties?

2019-02-06 Thread Christian Schneider via osgi-dev
JAAS is already standardised. So if the provider (like CXF SOAP or JAX-RS)
establishes a JAAS context on your thread then you can access it. I can
provide an example if you want.
I think for open tracing there is also an API that can be used.
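
For what it's worth, on the service implementation side that access could look 
like this (a sketch only, assuming the provider has bound a JAAS Subject to the 
calling thread for the duration of the invocation):

// Read the JAAS Subject that the remoting provider (e.g. CXF) associated with
// the current AccessControlContext before dispatching the call.
Subject subject = Subject.getSubject(AccessController.getContext());
String caller = (subject == null)
        ? "anonymous"
        : subject.getPrincipals().stream()
                .map(Principal::getName)
                .findFirst()
                .orElse("anonymous");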

I am not sure about the others like peer-address, audit, tenant and request
ids.
Do you have an idea how it can / should work in practice?

Christian

On Wed, 6 Feb 2019 at 03:08, Bernd Eckenfels via osgi-dev 
<osgi-dev@mail.osgi.org> wrote:

> When I use a Remote Service for distributed OSGi application I would like
> my provider to be able to implicitly pass some thread context like tracing
> IDs and also a user authorization token.
>
> The OSGi compendium talks about implementation specific security based on
> codesigning, but not on thread identity (JAAS Context). Was there any plan
> to add something, like an interceptor mechanism?
>
> Some of it could be implementation specific, but some form of portable
> endpoint binding access would be nice, like peer-address, jaas-context,
> opentracing-id, maybe audit, tenant and request-ids?
>
> I can enrich my services with a Map for most of it, however
> then there is no reliable way for the provider to add/ensure some of its
> protocol header properties and it hides the business interface under
> removing parameters.
>
> Regards
> Bernd
> --
> http://bernd.eckenfels.net



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com