> On 25 Sep 2018, at 14:39, Norbert Hartl <norb...@hartl.name> wrote:
> 
> 
> 
>> On 25.09.2018 at 12:52, Sven Van Caekenberghe <s...@stfx.eu> wrote:
>> 
>> Wow. Very nice, well done.
>> 
>> Any chance of some more technical details, as in what 'connected by a 
>> message queue for the communication' exactly means? How did you approach 
>> microservices exactly?
>> 
> Sure :)

Thanks, this is very interesting!

> The installation spans multiple physical machines. All the machines are 
> joined to a docker swarm. From the point of view of the docker swarm, the 
> installation is reified as tasks and services. That means you instantiate an 
> arbitrary number of services and docker swarm distributes them among the 
> physical machines. Usually you don’t control which service runs where, but 
> you can. At this point you have spread dozens of pharo images among multiple 
> machines, and each of them has an IP address. Furthermore, docker swarm 
> reifies the network, meaning that every instance in a network can see all 
> other instances on that network. Each service can be reached by its service 
> name in that network. Docker swarm does all the iptables/firewall and 
> DNS setup for you.
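
That service-name DNS lookup is ordinary name resolution from inside a container. A minimal Python sketch (the service name "rabbitmq" is an assumption for illustration; outside a swarm the runnable demo can only resolve localhost):

```python
import socket

def resolve_service(name):
    """Resolve a swarm service name to the virtual IP that the
    swarm's embedded DNS server returns inside the overlay network."""
    return socket.gethostbyname(name)

# Inside the swarm a container would call e.g. resolve_service("rabbitmq")
# and get the service's virtual IP. Outside a swarm we can only
# demonstrate the mechanism itself:
print(resolve_service("localhost"))
```

The point is that application code needs no service registry: connecting to a peer is just "connect to hostname `rabbitmq`", and the swarm keeps the DNS answer current as tasks move between machines.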

Are you happy with docker swarm's availability/fail-over behaviour? In other 
words: when one image/instance goes bad, does it detect and restore the 
missing functionality?

> In order to have communication between those runtimes we use rabbitmq, 
> because you were so nice as to write a driver for it ;) RabbitMQ has support 
> for a cluster setup, meaning each of the physical machines has a rabbitmq 
> installation and they know each other. So it does not matter to which 
> instance you send messages and on which you register for receiving 
> messages. Every pharo image connects to the rabbitmq service and opens a 
> queue for interaction.

Same question: does RabbitMQ's clustering work well under stress/problems? 
Syncing all queues between all machines sounds quite heavy (I never tried it, 
but maybe it just works).

> Each service, like the car sharing, opens a queue, e.g. /queue/carSharing, 
> and listens on it. The broker images are stateful, so they open queues like 
> /queue/mobility-map-afdeg32, where afdeg32 is the container id of the 
> instance (hostname in docker). With each request the queue name to reply to 
> is sent as a header, so we can make sure that the right image gets the 
> message back. This way we can have sticky sessions, keeping volatile data in 
> memory for the lifecycle of a session. There is one worker image which opens 
> a queue /queue/mobility-map where session-independent requests can be 
> processed. 
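
That reply-to routing can be sketched end to end with in-memory queues (Python stdlib only; the container id and message bodies are illustrative):

```python
import queue

queues = {}

def declare(name):
    queues.setdefault(name, queue.Queue())

def publish(name, body, headers=None):
    queues[name].put({"body": body, "headers": headers or {}})

# A stateful broker image, container id "afdeg32", opens its own queue
# and sends a request carrying that queue name as the reply-to header.
declare("/queue/carSharing")
declare("/queue/mobility-map-afdeg32")
publish("/queue/carSharing", "find cars near the station",
        headers={"reply-to": "/queue/mobility-map-afdeg32"})

# The car-sharing service answers on whatever reply-to it received, so
# the response lands back at the exact image holding the session state.
request = queues["/queue/carSharing"].get()
publish(request["headers"]["reply-to"], "3 cars found")

response = queues["/queue/mobility-map-afdeg32"].get()
print(response["body"])  # -> 3 cars found
```

This is what makes the sessions "sticky": the reply queue name encodes the instance identity, so no shared session store is needed.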

I think I understand ;-)

> In order to ease development we are sharing code between the broker and the 
> micro service. Each micro service has a -Common package containing the 
> classes that build the interface. The classes in there are a kind of data 
> entity facades. They use NeoJSON to map to and from a stream. The class name 
> is sent with the message as a header so the remote side knows what to 
> materialize. The handling is unified for the four cases: 
> 
> - Request as inquiry to another micro service
> - Response returns values to a Request
> - Error is transferred like a Response but is then signalled on the receiving 
> side
> - Notification connects the announcers on the broker and the micro service 
> side.
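
A stdlib-only Python sketch of that class-name-in-header scheme (the original uses NeoJSON in Pharo; the class hierarchy and field names here are illustrative):

```python
import json

class Message:
    """Base for the four wire message kinds; fields are free-form."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

class Request(Message): pass
class Response(Message): pass
class Error(Message): pass
class Notification(Message): pass

MESSAGE_CLASSES = {c.__name__: c for c in
                   (Request, Response, Error, Notification)}

def dump(message):
    # The class name travels as a header so the remote side
    # knows which class to materialize the body into.
    return {"headers": {"class": type(message).__name__},
            "body": json.dumps(message.__dict__)}

def materialize(wire):
    cls = MESSAGE_CLASSES[wire["headers"]["class"]]
    return cls(**json.loads(wire["body"]))

wire = dump(Request(action="search", origin="Mainz"))
message = materialize(wire)
print(type(message).__name__, message.action)  # -> Request search
```

An `Error` materialized this way would then be signalled (raised) on the receiving side, and a `Notification` fed into the local announcer, exactly as the four cases above describe.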

Yes, makes total sense.

> We solved asynchronous calls using Promises and Futures. Each async call to 
> the queue becomes a promise (that blocks on #value) and is combined into a 
> future value containing all promises, with support for generating a delta of 
> all resolved promises. We need this because you issue a search that takes a 
> while, and you want to display results as soon as they are resolved, not 
> after all have been resolved.
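
The author is describing a Pharo Promise/Future library; purely as an illustration of the same display-as-they-resolve behaviour, here is a sketch with Python's `concurrent.futures` (provider names and delays are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def query(provider, delay):
    """Stand-in for an async call to one mobility provider via the queue."""
    time.sleep(delay)
    return f"{provider}: results"

providers = [("bike", 0.01), ("car-sharing", 0.05), ("train", 0.02)]

with ThreadPoolExecutor() as pool:
    # One promise per provider; blocking on .result() is the analogue
    # of blocking on #value in Pharo.
    promises = [pool.submit(query, name, delay) for name, delay in providers]
    # as_completed yields each promise as it resolves, so results can be
    # displayed incrementally instead of waiting for the slowest provider.
    for promise in as_completed(promises):
        print(promise.result())
```

The `as_completed` loop is the "delta of resolved promises": each iteration hands over exactly the results that have arrived since the last one.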

Which Promise/Future framework/library are you using in Pharo?

You did not go for single-threaded worker images?

> And a lot more. This is a coarse-grained overview of the architecture. I’m 
> happy to answer further questions about this.
> 
> Norbert
> 
>>> On 25 Sep 2018, at 12:20, Norbert Hartl <norb...@hartl.name> wrote:
>>> 
>>> As presented on ESUG here is the brief description of one of our current 
>>> projects. 
>>> 
>>> Mobility Map
>>> ——————
>>> 
>>> Mobility Map is a broker for mobility services. It offers multi-modal 
>>> routing search, enabling users to find the best travel options between 
>>> locations. Travel options include car sharing, bikes, trains, busses, etc. 
>>> Rented cars can be offered for ride sharing at booking time, letting other 
>>> people find them and participate in the ride. Single travel options are 
>>> combined into travel plans that can be booked and managed in a very easy way. 
>>> 
>>> For this project the main requirements were scalability, to serve a large 
>>> user base, and flexibility, to add additional providers to the broker. The 
>>> application has been realized using web technologies for the frontend and 
>>> Pharo for the backend. Using a microservice architecture combined with a 
>>> broker, it is easy to extend the platform with additional providers. 
>>> Deployment is done using docker swarm, distributing dozens of Pharo 
>>> images among multiple server machines connected by a message queue for 
>>> communication. Pharo supported that scenario very well, enabling us to 
>>> meet the requirements with little effort. 
>>> 
>>> Pharo turned out to be a perfect fit for developing the application in an 
>>> agile way. Small development cycles with continuous integration and 
>>> continuous delivery enable fast turnarounds for the customers to validate 
>>> progress.
>>> 
>>> This is a screenshot of the search page for multi-modal results:
>>> 
>>> 
>>> <Screen Shot 2018-09-21 at 16.54.30.png>
>> 
>> 
> 
> 

