Hi ...

> On 1. May 2020, at 06:51, Jean-Baptiste Onofre <[email protected]> wrote:
> 
> [...]
>> 
>> - What is currently missing is a useful Docker image supporting 
>> configuration via environment variables, changing the user id, and 
>> supporting sidecars like Filebeat or Loki's Promtail. I already created a 
>> Docker image with features like this; I can contribute it if wanted.
> 
> It’s already possible: you can populate and overwrite any config (originally 
> located in etc) with env variables. The format is pid:property=value, for 
> instance -Dorg.apache.karaf.log:foo=bar. I worked on this feature; the PR 
> will be opened soon.
> 

But that only works with Java system properties (System.getProperty(...)), not 
with real environment variables (System.getenv()), right?
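
JB's pid:property=value mechanism could still be bridged from the container 
side: a small launcher step could translate environment variables into those 
system properties before Karaf reads its config. A sketch, where the 
KARAF_CONF_ prefix and the double-underscore separator between pid and 
property are naming conventions I made up for illustration, not anything 
Karaf defines:

```java
// Sketch: translate environment variables into the pid:property system
// properties that the config-overwrite mechanism understands.
// Convention (made up for this example):
//   KARAF_CONF_<PID with '.' as '_'>__<PROPERTY>  ->  pid:property
public class EnvToSysProps {

    static String toPidProperty(String envName) {
        // KARAF_CONF_ORG_APACHE_KARAF_LOG__FOO -> org.apache.karaf.log:foo
        String body = envName.substring("KARAF_CONF_".length());
        String[] parts = body.split("__", 2);
        String pid = parts[0].toLowerCase().replace('_', '.');
        String property = parts[1].toLowerCase();
        return pid + ":" + property;
    }

    public static void main(String[] args) {
        System.getenv().forEach((name, value) -> {
            if (name.startsWith("KARAF_CONF_")) {
                // Equivalent to passing -Dorg.apache.karaf.log:foo=bar
                System.setProperty(toPidProperty(name), value);
            }
        });
    }
}
```

In a real image the same mapping would more likely live in the entrypoint 
script, emitting -Dpid:property=value flags for bin/karaf.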

The usual convention is that an image accepts environment variables or, even 
more conveniently, mounted configuration files (the wordpress image is a good 
example), e.g. loading all files from a dedicated folder.
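
To make the folder idea concrete, a minimal sketch that reads every *.cfg 
file from a drop-in directory; the /karaf-config.d path is a made-up 
convention for illustration, not something Karaf defines:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Properties;

// Sketch: load every *.cfg file from a drop-in folder mounted into the
// container, one Properties object per file.
public class ConfigFolderLoader {

    public static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (var in = Files.newInputStream(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : "/karaf-config.d");
        if (!Files.isDirectory(dir)) {
            return; // nothing mounted, nothing to do
        }
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.cfg")) {
            for (Path file : files) {
                // In a real image this would be copied/merged into etc/
                System.out.println(file.getFileName() + " -> " + load(file));
            }
        }
    }
}
```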

Just as important is changing the user id at start time. macOS and Linux use 
different default user ids (501 vs. 1000), so without this it's not easily 
possible to mount and write external resources.


>> 
>> - Support for tracing collectors like Jaeger would also help to integrate 
>> Karaf into a Kubernetes environment.
> 
> Not sure, but you can already use zipkin, dropwizard, etc, with Karaf 
> Decanter.
> 

I tried Decanter but didn't see the benefit when I already use ELK or Loki to 
collect the log files directly from karaf/data/log. But maybe I just missed it 
:(


>> 
>> - I already created a health and readiness check servlet using felix health 
>> framework and watching log messages. Maybe this could become part of karaf 
>> core.
> 
> Most probably part of Decanter IMHO.
> 
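
To make the idea concrete, here is a framework-free sketch of the log-watching 
part (class and method names, the sliding window, and the plain ERROR 
substring match are simplifications for illustration; the real check would 
plug into the Felix Health Check framework behind a /health or /ready 
servlet):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: the readiness check reports DOWN while the last N observed
// log lines contain an ERROR. A servlet would translate isReady() into
// HTTP 200 (ready) or 503 (not ready).
public class LogWatchHealthCheck {

    private final int window;
    private final Deque<String> lastLines = new ArrayDeque<>();

    public LogWatchHealthCheck(int window) {
        this.window = window;
    }

    // Feed each log line as it is written (e.g. via a log appender).
    public synchronized void onLogLine(String line) {
        lastLines.addLast(line);
        if (lastLines.size() > window) {
            lastLines.removeFirst();
        }
    }

    public synchronized boolean isReady() {
        return lastLines.stream().noneMatch(l -> l.contains("ERROR"));
    }
}
```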
>> 
>> - Maybe it's possible to combine Karaf with existing mesh tools to create 
>> fast Kubernetes configurations with sidecars etc.
> 
> It sounds like a good idea, +1.
> 
I can give you more information about the possibilities in a few weeks; I 
first need to investigate this topic.

>> 
>> - A common topic for clustering is a shared cache and locking. It's 
>> possible with Hazelcast, but I was not able to use Hazelcast's caching 
>> service - lots of class loader issues. I was able to use Ehcache (no 
>> locking) and Redis for these features. It's a shame that especially the 
>> Java Caching API (JCache) is not usable in OSGi with Hazelcast. It's also 
>> not easy with Ehcache, but I found a workaround.
> 
> Hazelcast works fine (we are using it in Cellar). Apache Ignite also provides 
> Karaf support. I think it makes sense to add an example/documentation/blog 
> about that. +1
> 
1) I tried out Cellar using the tutorial, but it was not working properly. I 
tried config and bundle synchronisation in Docker containers: the nodes were 
connected and could see each other, but did not sync.

2) I used the Hazelcast instance delivered with Cellar, but it was not 
possible to use its caching mechanisms (see my mail to Scott).

>> 
>> - Also common cron jobs are needed - should be executed only once in a 
>> cluster. Maybe it's possible to generate a kubernetes cronjob config instead 
>> of using a scheduler (maybe by using annotations).
>> 
> 
> Karaf includes a scheduler for cron and triggers (it's powered by Quartz 
> under the hood). It exposes a service, so we can add a new implementation 
> of the Karaf scheduler for k8s CronJobs.
> 
It's a little bit complicated: some tasks need to run in every instance 
separately (e.g. cleanup of internal resources), while others are allowed to 
run only once in the cluster. Possible solutions:

1) A separate system with a flexible control implementation for 
single-instance execution (connected to the local scheduler) and cluster-wide 
execution (connected to k8s).

2) A separate annotation hint on the existing scheduled service, with the 
same behaviour as described in (1).

I prefer (2), because it keeps the services clean and separate.
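
A sketch of what the annotation hint in (2) could look like (@ClusterScope, 
Scope and the job classes are hypothetical names; nothing like this exists in 
Karaf today):

```java
import java.lang.annotation.*;

// Sketch of option (2): a hypothetical annotation hint that tells the
// scheduler whether a job runs on every node or exactly once per cluster.
public class SchedulerHintSketch {

    enum Scope { EVERY_INSTANCE, CLUSTER_SINGLETON }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface ClusterScope {
        Scope value();
    }

    // Runs on every node, e.g. cleanup of node-local resources.
    @ClusterScope(Scope.EVERY_INSTANCE)
    static class LocalCleanupJob implements Runnable {
        public void run() { /* node-local cleanup */ }
    }

    // Must run exactly once in the cluster, e.g. a nightly report.
    @ClusterScope(Scope.CLUSTER_SINGLETON)
    static class NightlyReportJob implements Runnable {
        public void run() { /* cluster-wide work */ }
    }

    // The dispatcher (local Quartz vs. generated k8s CronJob) would
    // decide based on the hint; unannotated jobs default to local.
    static Scope scopeOf(Class<?> jobClass) {
        ClusterScope hint = jobClass.getAnnotation(ClusterScope.class);
        return hint == null ? Scope.EVERY_INSTANCE : hint.value();
    }
}
```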

>> My background:
>> 
>> I'm working for a telecommunication company on process automation. Four 
>> years ago we decided to use Karaf and JMS to create a decentralised 
>> environment. The control center uses Vaadin as its UI framework. Currently 
>> I'm planning to migrate the environment to a Kubernetes cloud.
>> 
>> I found the presentation by Dimitry very helpful. I will try not to split 
>> each service into a separate container. In this scenario it's also 
>> possible to share database entities within the same JVM - this even boosts 
>> performance - instead of going through REST all the time.
>> 
>> Also interesting is the support for API gateways. For me a big problem is 
>> service discovery. It makes no sense to configure each service manually in 
>> the API gateway; a more comfortable way would be some kind of discovery. 
>> Karaf could automatically configure/update the gateway based on the 
>> provided JAX-RS resources.
> 
> 
> Thanks for sharing Mike and I like your ideas. About API Gateway, we started 
> a Karaf Vineyard PoC with discovery. I know that some companies are working 
> on a Gateway with discovery and pattern as well (Yupiik is working on Galaxy 
> gateway for instance).
> I would be more than happy to chat with you to move forward on your points 
> and improve Karaf !
> 
Currently I'm thinking about a project that collects the registered JAX-RS 
resources and configures Tyk automatically, all with a flexible API. This 
solution would use existing resources, but it would be a separate project 
outside of Karaf. I will discuss it with my team next week and can give you 
an update.
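
As a rough sketch of the direction (the discovery source and the Tyk-style 
route JSON are simplified for illustration; a real sync would read the OSGi 
service registry for whiteboard-registered JAX-RS resources and talk to the 
gateway's admin API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: turn discovered JAX-RS base paths into gateway route entries.
// The JSON shape below is a simplified, Tyk-like example, not the exact
// API definition format.
public class GatewayRouteBuilder {

    static String routeFor(String serviceName, String basePath, String backendUrl) {
        return String.format(
            "{\"name\":\"%s\",\"listen_path\":\"%s\",\"target_url\":\"%s\"}",
            serviceName, basePath, backendUrl);
    }

    public static void main(String[] args) {
        // Pretend these base paths were collected from registered resources.
        Map<String, String> discovered = new LinkedHashMap<>();
        discovered.put("orders", "/api/orders");
        discovered.put("customers", "/api/customers");

        for (var e : discovered.entrySet()) {
            // A sync loop would POST these to the gateway's admin API.
            System.out.println(routeFor(e.getKey(), e.getValue(),
                    "http://karaf:8181" + e.getValue()));
        }
    }
}
```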


> By the way, I liked the tag line used by Toni: Karaf is a modulith runtime 
> (meaning the next iteration after monoliths and microservices) ;)

Yeah - the concept of individual services in the same modular engine: the 
pragmatic way instead of the dogmatic one.

Regards,
Mike

> Thanks again !
> Regards
> JB
> 
>> 
>> cu
>> 
>> Mike
>> 
