Some of the mechanics would be similar, with the observable/subscriber model.  But I don’t know enough about it to comment in full.

 

I’d want to make sure, though, that whatever the mechanism is under the covers, it’s capable of talking with “legacy” systems like REST, SOAP, JMS, etc.  Just a standardized way to abstract the transport details away from the Camel from/to calls themselves.

 

If you think about Camel itself and direct-vm/SEDA, one really doesn’t have to care or know much about how those are handled in memory.  JMS request/response queues are another example. One thinks about them almost as if they were a single request/response call while under the covers something different is happening.

 

What if something like that existed that had an interface or mechanism like the one RxJava seems to promote, but under the covers permitted configuration of the transport to/from other services? From a Camel route perspective, it would simply be a request/response call or an async call.

 

While a REST call by its very nature and transport is request/response, that doesn’t mean I necessarily care about the response when making the call (other than failures, which might be normalized).

 

The configuration would still have to happen at some level, obviously, but not 
at the level of Camel route definition. 

 

from("camel-async:myReceivingEndpoint")

from("camel-sync:myOtherReceivingEndpoint")

//same for "to" endpoints.
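
To make that concrete, here’s roughly what I picture (purely a sketch; the camel-async/camel-sync components don’t exist, they’re the proposal, and the bean and endpoint names are invented):

import org.apache.camel.builder.RouteBuilder;

public class OrderRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // "camel-async" might resolve to SEDA in test and JMS in production;
        // the route neither knows nor cares.
        from("camel-async:orderReceived")
            .bean("orderValidator")                // plain POJO processing
            .to("camel-sync:inventoryService")     // reads as request/response
            .to("camel-async:auditTrail");         // fire-and-forget from the route's view
    }
}

The route never says whether inventoryService is REST, SOAP or JMS; the pipe configuration decides.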

 

Obviously the various Camel “pipes” would need to be responsible for the mechanics of converting to and from whatever the underlying mechanism is. Some of that could be accomplished via current Camel routes inside those pipes.

 

But it would push the configuration down one level and out of the routes themselves.  It would make switching from one type of transport to another a configuration change.  One example of the utility is switching between test mode, where async/sync might simply be direct-vm/SEDA mechanisms, and deployment, where they’d be the actual underlying REST, SOAP, JMS or ?? pipes.
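
In configuration terms that could be as simple as a mapping file per environment (a sketch; the file name and keys are invented):

# pipes.cfg in test
myReceivingEndpoint = seda:myReceivingEndpoint
inventoryService = direct-vm:inventoryServiceStub

# pipes.cfg in production
# myReceivingEndpoint = jms:queue:orders.received
# inventoryService = cxf:bean:inventoryServiceClient

Swap the file, not the routes.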

 

Right now this is hand waving on my part, and until I actually have (or get) some time to sit down and prototype it, it’s difficult to say how practical or easy it would be to implement.

 

Some of this would have to be able to detect misconfiguration.  What do you do when a JMS request/response is mapped to an async call?  That wouldn’t make sense.  One could potentially misconfigure that, so does the “pipe” just throw away the returned payload?  But that problem doesn’t get easier by making all the connection details and flags explicit in the Camel route itself. If anything, those configuration details on from/to endpoints obscure the nature of what’s happening more than they help.  One can still write from("seda:xxx").to("someRequestResponse").  That obviously doesn’t make sense, though it might if there were some bean in between handling the returned data.

 

But if all I see in Camel routes is async or sync calls, I can easily scan them to determine if something doesn’t look right, and if a route isn’t behaving as I expect I know to look for the culprit in the pipe configuration itself.

 

From: Pratt, Jason [mailto:[email protected]] 
Sent: Tuesday, January 17, 2017 12:58 PM
To: [email protected]
Subject: RE: Opinionated...

 

Brad are you looking at doing some sort of RxJava here?

 

From: Brad Johnson [mailto:[email protected]] 
Sent: Saturday, January 14, 2017 5:20 AM
To: [email protected]
Subject: RE: Opinionated...

 

Scott,

 

It’s funny that you mention OSGi Remote Services, as that was sort of in the back of my mind.   I think I recall Christian saying he was working on a Remote Services implementation as well, but I don’t know enough about it yet to include it in the discussion.  I suspect what I’m going to do is set up a GitHub project with a basic project and then a set of appliances for a variety of enterprise integration purposes.  That baseline has to be in place before lower levels can be addressed.

 

But I do think that a communications abstraction is going to be necessary.  A next generation of communications pipes should abstract protocols away from the OSGi programmer. Or, as the great and powerful Oz put it, “never mind the little man behind the curtain”.

 

When writing code in a Camel route in an OSGi bundle, the programmer should be thinking about either a named request/response pipe or an event pipe. IPs, ports, transports, and so on should come from straight configuration files, parsed, configured and then registered as services to be picked up in bundles.  There isn’t anything that stands in the way of doing that now other than a little elbow grease.

 

So I’ll definitely give the Remote Services a deeper look.

 

 

From: Scott Lewis [mailto:[email protected]] 
Sent: Friday, January 13, 2017 7:16 PM
To: [email protected]
Subject: Re: Opinionated...

 

Hi Brad,

You might be interested in the OSGi Remote Services specification...which 
mentions the distributed computing fallacies in the introduction.   It's 
chapter 100 in the enterprise spec [1].

A big part of Remote Services is the ability to use OSGi service dynamics to 'deal with' distributed-systems issues like partial failure (i.e. the 'network is reliable' fallacy).   For example, one way to represent the failure of a remote service would be to make the local service proxy go away.  Note that with OSGi service dynamics and (e.g.) DS, the consequences of such a thing on dependent services can be easily handled without introducing a special mechanism.

IMO another advantage of Remote Services is that the OSGi service contract/impl 
separation also decouples the service from the distribution system.   This 
allows the service designer to create remote services (API and impl) that are 
independent of the distribution system's serialization format (e.g. json, xml,
obj serialization, etc) and comm approach/protocol (e.g. http/rest, pub/sub 
messaging, mqtt, tcp, etc).   As an example of this, I've created three 
Karaf-hosted remote services that allow remote monitoring and management of 
Karaf bundles, services, and install/uninstall of Karaf features...and these 
services can be accessed from remote Eclipse via an mqtt broker, or via 
server-based tcp, or via other distribution systems without changing the 
service APIs or implementation.

Scott

[1] https://www.osgi.org/developer/specifications/
[2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse

On 1/13/2017 11:19 AM, Brad Johnson wrote:

That is certainly the sort of library that could be used as a standard: get agreement on the standard OSGi service interface and then use that library (and others) for the implementation.  Which brings up a good question and issue.  There would have to be some set of standardized messages and exception types.  The CircuitBreaker example throws a CircuitBreakingException (naturally enough).  If there’s an ErrorHandlerService it would have to know the standard set of exceptions that could be expected or, at least, a set of parent classes.  Since CircuitBreakingException is a relatively simple class, it would be perfect for a default ErrorHandlerService to catch that class of exceptions.

 

Obviously there will have to be some head scratching and chin rubbing about how the pieces fit together exactly.  The CircuitBreakerService (and the others too) could also be more like container classes that listen for and pick up CircuitBreakerListenerService instances.  So one listener might just log the circuit breaker exception.  But you might instantiate an SMTPCircuitBreakerNotificationService that implements CircuitBreakerListenerService and fires off an email to an admin address if the breaker is tripped.

 

That CircuitBreakerService might also be picked up by the Kontainer instance, which listens for on/off control events from the outside world.  Some thinking to do there, but they are tractable problems with services and events.

 

The main services like CircuitBreakerService and ThrottlerService might register themselves as providers with the ErrorHandlerService, which would catch the types of exceptions they throw.  It in turn could listen for custom ExceptionHandlerListener<T> instances that handle specific exception types. Still thinking and hand waving about this, but I think a sane set of standard services, listeners and events could be created that would permit a user to create simple handlers to register.
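
In rough Java terms, something like this (only a sketch; the service interfaces are the speculative ones from above, not existing APIs, and MailSender is an assumed helper):

import org.apache.commons.lang3.concurrent.CircuitBreakingException;

// Hypothetical standard listener contract picked up by the CircuitBreakerService.
public interface CircuitBreakerListenerService {
    void onBreakerTripped(CircuitBreakingException cause);
}

// One possible drop-in: emails an admin when the breaker trips.
public class SmtpCircuitBreakerNotificationService implements CircuitBreakerListenerService {

    private final MailSender mailSender;   // assumed helper, not a real API
    private final String adminAddress;

    public SmtpCircuitBreakerNotificationService(MailSender mailSender, String adminAddress) {
        this.mailSender = mailSender;
        this.adminAddress = adminAddress;
    }

    @Override
    public void onBreakerTripped(CircuitBreakingException cause) {
        mailSender.send(adminAddress, "Circuit breaker tripped", cause.getMessage());
    }
}

The container-style CircuitBreakerService would simply iterate over whichever listeners are registered in the service registry.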

 

There would also be the issue of how to automate injection of those into the Camel routes.  That doesn’t seem like it should be a daunting challenge, but it would be important.  And I think it’s very important that those get injected automatically even if the services only provide basic logging initially, with no custom client code.

 

From: James Carman [mailto:[email protected]] 
Sent: Friday, January 13, 2017 12:12 PM
To: [email protected]
Subject: Re: Opinionated...

 

Commons Lang3 has a pretty simple CircuitBreaker implementation that I used in 
Microbule:

https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java

On Fri, Jan 13, 2017 at 1:05 PM Brad Johnson <[email protected]> wrote:

Folks,

 

I wanted to make sure that my promoting CDI, Camel Java DSL, & static profiles 
didn’t obscure the point I was trying to make.  Whatever mechanics we choose 
I’d really like us to be unified behind a common paradigm so that our 
documentation, exemplars, archetypes, blogs, libraries, and so on are all 
organized the same and use the same mechanics and layouts for projects. 

 

We should promote an idiomatic way to develop software using Karaf Boot.  That’s one problem I hear about from a lot of clients: there are such cross-currents of information about how to develop OSGi-based software that it gets confusing.  Best or preferred practices are lost in the noise.  I won’t get into all that since I’m sure most of you have dealt with this problem. Not to pick on it, but a good example is that the Camel in Action book recommends using POJOs instead of Processors/Exchanges, yet it does so somewhere near the back of the book in a few pages. I don’t know how many examples on the web site actually use the Processor/Exchange, but it is a lot. Then there are examples with Spring, Blueprint, Java DSL, Scala, etc.  There are annotations that work in one environment but not in all of them.

 

By selecting an idiomatic and “opinionated” way of creating Karaf Boot microcontainers we could make sure that sort of confusion isn’t carried forward.  It would require a lot less documentation to cover the same ground and make editing and updating easier.  It would make creating sample and example projects a lot easier. And it would simplify what Karaf Boot appliances have to support, ensuring there aren’t concerns that work in one environment but not in another, or that behave differently in different environments.

 

I’m personally interested in Karaf Appliances with standard Maven structures, standard bundle structures, and reference implementations that have a good chunk of the basic functionality. I’d say we take a page from the “convention over configuration” book or, at least, a “conventional configuration”, and likely a bit of both. Because the appliances are focused on microservices we should get out ahead of the Gartner hype cycle.  Right now we are at the Peak of Inflated Expectations and in a couple of years we’ll be at the Trough of Disillusionment.  That disillusionment will come for a number of reasons. Flying Spaghetti Monster topology will be one of them but, more important for a Karaf Appliance, is the consistent problem of the “network fallacies”.  Every Karaf Kontainer should have standard OSGi service interfaces and basic implementations that address each of the fallacies that apply to a uService.  The Kontainers should insist on it and not make it optional. If users don’t want that functionality they would need to disable it via configuration.  Otherwise the Kontainer would get stuck in a grace period and then fail if an expected, standard service isn’t available. All of the standard OSGi service APIs would have basic implementations to start, with more specialized ones following as more specific Kontainers are built.  And because they are standard services, new ones can be developed by the community or by the end developer.

 

As developers, we’ve all had to implement functionality and then come back and deal with error handling, security, etc. I say we simply cut those services into the Kontainer right from the get-go.  The Kontainer doesn’t run if it doesn’t find the service.  That isn’t to say these become a fundamental part of Karaf, but a fundamental part of the Kontainer service that runs in Karaf.

 

The standard bundles would only implement basic functionality and not do anything sophisticated.  New bundles and libraries for more sophisticated implementations could be added later. All of the bundles would likely have disable flags in case the developer found a particular concern irrelevant.  For example, security might not be relevant. The following aren’t meant to be comprehensive, just to address key concerns. Other standards like a LoggingService might be included by default as well.

 

The intent here isn’t to define the exact mechanics but the standard OSGi service interfaces that would be _required_ in any implementation of a Kontainer. Even if an implementing bundle is simply a passthrough or can be disabled, it forces the developer to explicitly deal with the problems or choose to ignore them altogether.

 

Because these service interfaces and the bundles that implement them are standard, the set can be specified by the dependencies in the Maven build, features and/or profiles.

 

1.      The network is reliable.

A standard “Error Handler” OSGi service.  The default bundle would simply capture errors/exceptions and log them.  Perhaps it would specify retries. Drop-in solutions might include errors going to dead letter queues and so on. The OSGi service interface is required for Kontainer bootstrap, so use the default, use a standard one, or create one of your own.  If users want to change the configuration of this bundle or put in a new one, they know exactly what it is, where it exists, how it is specified to the build, and what configuration file is associated with it. No rummaging around through code.  When the inevitable errors, exceptions and problems arise, the developer isn’t left wondering where and how they should add the functionality to handle them.
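
As a sketch of the kind of contract I mean (the interface and class names here are invented, not an existing API):

import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical standard service every Kontainer would require at bootstrap.
public interface ErrorHandlerService {
    void handle(Throwable error, Map<String, Object> context);
}

// The default bundle: capture and log, nothing more.
public class LoggingErrorHandlerService implements ErrorHandlerService {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingErrorHandlerService.class);

    @Override
    public void handle(Throwable error, Map<String, Object> context) {
        LOG.error("Unhandled error in Kontainer, context: {}", context, error);
    }
}

A dead-letter-queue version would implement the same interface and be swapped in through the Maven dependencies or features.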

 

A standard “Circuit Breaker” service API and a basic implementing bundle should be provided.  Perhaps the standard bundle would simply count errors over a time frame, shut down if that limit is hit, and allow those values to be configured. The default would be a rather unsophisticated implementation but would provide the convention and automated wiring of a circuit breaker OSGi service.  Other implementations might fire off emails to sysadmins, or be combinations. And if it is really undesirable, set a disable flag.
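
The default could be as dumb as counting failures in a window (again a sketch; the names and semantics are invented):

// Hypothetical default breaker: trips when maxErrors occur within windowMillis.
public class CountingCircuitBreaker {

    private final int maxErrors;        // configurable via the bundle's cfg
    private final long windowMillis;    // configurable via the bundle's cfg
    private long windowStart = System.currentTimeMillis();
    private int errorCount;
    private volatile boolean open;

    public CountingCircuitBreaker(int maxErrors, long windowMillis) {
        this.maxErrors = maxErrors;
        this.windowMillis = windowMillis;
    }

    public synchronized void recordError() {
        long now = System.currentTimeMillis();
        if (now - windowStart > windowMillis) {   // new window, start counting again
            windowStart = now;
            errorCount = 0;
        }
        if (++errorCount >= maxErrors) {
            open = true;                          // trip; stay open until reset
        }
    }

    public boolean isOpen() {
        return open;
    }

    public synchronized void reset() {
        open = false;
        errorCount = 0;
        windowStart = System.currentTimeMillis();
    }
}

Anything smarter (half-open probing, notification listeners) would be a drop-in replacement bundle.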

 

2.      Latency is zero.

A standard OSGi Throttling service interface and bundle implementation would be included.  If you want different behavior, change it.  If you want to disable it, set the flag. However, there are bigger issues here that I’ll address a bit further below.

 

3.      Bandwidth is infinite.

Throttling OSGi service again. Ditto to comment 2.

 

4.      The network is secure.

A standard OSGi service to plug in various authentication/authorization mechanisms.  By default it might be a pass-through, but a different implementation might use a simple username/password. Obviously LDAP, JAAS, and other bundles could be created and dropped into place.
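
Again only a sketch (names invented):

// Hypothetical standard contract; the default bundle is a pass-through.
public interface AuthenticationService {
    boolean authenticate(String principal, char[] credentials);
}

public class PassThroughAuthenticationService implements AuthenticationService {
    @Override
    public boolean authenticate(String principal, char[] credentials) {
        return true;   // security explicitly opted out of, and visibly so
    }
}

The point being that “no security” becomes a deliberate, inspectable choice rather than an omission.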

 

5.      Topology doesn't change.

Back to the Circuit Breaker, logging and perhaps a notification mechanism.  Also see the transport issue below, where I’ll mention some configuration.

 

6.      There is one administrator.

//No particular plugin for this, but standardized configuration and expected bundles help, and this also relates to the transport discussion.

 

7.      Transport cost is zero.

//Probably not a concern here directly, but it will be a big issue for uServices.

 

8.      The network is homogeneous.

//I think this issue can be dealt with in our context with many of the standard 
libraries but can be abstracted a bit more.

 

Obviously a big issue we’ll see, and I’ve seen in the past, is chained 
request/response calls. Service 1 making a REST call to service 2 making a REST 
call to service 3…etc.  And all of a sudden the latency is a killer.

 

ServiceMix/Karaf/Camel can already abstract away some of that via property 
substitution. I’d suggest we take that one step further and put _all_ 
transport/protocol information in configuration and create a standardized URI. 
As a developer or a senior developer over a group of developers, I don’t want 
them to be concerned with the fiddly bits of the transport in the code and 
routes and I certainly don’t want to recompile just to make such changes.

 

Akka, for example, uses local URIs like akka://.  A similar Karaf/Camel URI could be used and mapped via the configuration files.  The developer would always use karaf:// in their routes, e.g. karaf://myserviceName, and the mapping to the actual transport URI would live in something like a transport.configuration.cfg file.

 

I believe that is important for a lot of reasons.  A mid-level or junior-level 
developer shouldn’t be involved in configuration like:

" <ftp://foo@myserver/?> ftp://foo@myserver?password=secret&amp;

           recursive=true&amp;

           ftpClient.dataTimeout=30000&amp;

           ftpClientConfig.serverLanguageCode=fr"

 

So the cfg file might look like this:

clientService="ftp://foo@myserver?password=secret 
<ftp://foo@myserver?password=secret&;> &

           recursive=true&

           ftpClient.dataTimeout=30000&

           ftpClientConfig.serverLanguageCode=fr"

(At least properties get rid of the gawd-awful escaped ampersands).

 

The code would then say “karaf://clientService”
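
In a route that would look like this (a sketch; the karaf: component doesn’t exist, it’s the mapping scheme proposed above):

import org.apache.camel.builder.RouteBuilder;

public class ClientRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // No FTP details here; transport.configuration.cfg supplies the real URI.
        from("karaf://clientService")
            .to("karaf://archiveService");   // invented second pipe name
    }
}

Change the cfg file and the same route runs over direct-vm/SEDA in test and FTP or JMS in production.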

 

One can do much of that via configuration right now, but I think it is critical to move it completely to configuration so that admins know exactly what to change and where to find it when topologies change. It also means that when the backlash comes from microservice calling microservice calling microservice being slow, that simple mapping would permit things like going to JMS asynchronous request/response (or other fast, async mechanisms) that don’t swamp the virtual machine’s or Karaf instance’s resources. It would also allow for easy stubbing or mock testing of the Kontainer as it will be deployed, without using PAX Exam or other mechanisms.

 

Creating standard OSGi service APIs in anticipation of these problems would permit an evolutionary approach to them in the future and specific solutions when a standard Kontainer is developed. Even standard error handler service implementations can be created.

 

Once such a basic, standard Kontainer exists, then uKontainers that implement commonly used basic functionality could be created.  There are JPA examples already.  But the average developer is going to be given a task to receive some canonical data model via a REST service and poke it into a database.  That database model probably won’t look like what they are receiving.  So a uKontainer that has a REST front end they can modify, a Dozer object-mapping file in the middle with a transform, and a call to the database will be used repeatedly.
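
As a Camel-flavored sketch of that uKontainer’s core (all names here are illustrative, and the karaf: scheme is the proposed mapping from earlier):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class CanonicalToJpaRoute extends RouteBuilder {
    @Override
    public void configure() {
        // REST in, Dozer-style mapping in the middle, JPA out.
        from("karaf://canonicalModelRest")                      // resolves to the REST front end
            .unmarshal().json(JsonLibrary.Jackson, CanonicalOrder.class)
            .bean("canonicalToEntityMapper")                    // the user-edited mapping step
            .to("jpa:com.example.OrderEntity");                 // standard camel-jpa endpoint
    }
}

The developer downloads the uKontainer, edits the mapping and the configuration, and tests.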

 

It may be that Oracle, MySQL, BerkeleyDB, and so on each end up with different error handler plugin implementations which are used with the same REST, mapping, and JPA container. Just change the Maven dependency or profile.

 

There are a large number of examples like that.  In the case of that uKontainer there would likely be a JPAErrorService for catching common errors, and others for Dozer errors and for unmarshaling errors.  As a developer looking to solve very specific problems, I just download the uKontainer, do the Dozer mapping, change some configuration and then test it.

 

That also means that, much like Camel EIPs, open source developers can focus on hardening these containers, fixing bugs, putting in performance enhancements and the like.  If a user finds a new error coming from JPA that isn’t being handled in a coherent fashion, then a new block or delegate code is added and released, just as we’d do with a Camel endpoint or component.

 

Having standard error handlers built into uKontainers would also help make 
coherent messages from the large and unwieldy stack traces full of reflection 
that we commonly see.  The error handler OSGi plugin for a given problem would 
be highly focused on identifying and reporting problems with a specific 
technology or set of technologies.

 

https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing

 
