Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-10-01 Thread Nicolas Cellier
Hi Norbert,


Yes, thanks for these details, that's exactly the kind of testimony I'm
after.
I wish I were able to demonstrate the advantages myself to some teams not
working with Pharo, but I belong to the dinosaur era of desktop apps: I
can preach, but not easily practice.
The team has chosen to work with heterogeneous languages/environments, so the
idea of having the services integrated in a single image, while very
interesting, won't directly translate.
I don't know if Pharo will shine in this context for developing a single
service as a POC, and I guess that remote debugging will indeed be a must
in such a more adverse (less integrated) environment.
I'll take time to reread your detailed answer and also pass the
pointers on to the team.
Thanks again.


Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-10-01 Thread Norbert Hartl



> On 30.09.2018 at 13:01, Pierce Ng wrote:
> 
> On Wed, Sep 26, 2018 at 07:49:10PM +0200, Norbert Hartl wrote:
>>> And a lot more. This is a coarse-grained overview of the
>>> architecture. I’m happy to answer further questions about this.
>>> [very nice writeup]
> 
> Hi Norbert,
> 
> Very nice write-up, thanks. 
> 
thanks

> What persistence mechanism are you using - Gemstone/S, Glorp, Voyage, …?

We use Voyage with a MongoDB replica set. 
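
For reference, a minimal Voyage/Mongo setup looks roughly like this (a
sketch: the class name MMTravelPlan and the database name are invented, and
the replica-set connection options are omitted):

  | repository plan |
  "connect Voyage to MongoDB and make it the default repository"
  repository := VOMongoRepository host: 'localhost' database: 'mobility'.
  repository enableSingleton.
  "any class answering true to the class-side #isVoyageRoot can be saved"
  plan := MMTravelPlan new.
  plan save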

Norbert






Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-10-01 Thread Norbert Hartl


> On 26.09.2018 at 19:22, Nicolas Cellier wrote:
> 
> Very nice proof that we can leverage efficient and up-to-date technologies!

That was part of the plan ;)

> Thank you so much for sharing; that's the right way to keep Pharo (and 
> Smalltalk) alive and kicking. How did the Pharo IDE help in such a context? 
> (Did you use the debugging facilities extensively?)
> What tool is missing?

First of all, Pharo is always a bliss to work with. In a project like this I 
try to make the whole application manageable on different levels. The most 
important one is to have the whole application in one image. We have tests for 
the broker and for each microservice, but we also have tests that operate on 
the whole stack. There the queue is short-circuited and handling is more 
synchronous than in the queued case. This makes it much easier to get the 
whole round trip as one stack in the debugger. 
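
As an illustration only (every name below is hypothetical; the thread does
not show the test code), such a whole-stack test might bypass the queue like
this:

  "Hypothetical SUnit sketch: deliver requests synchronously instead of
  publishing them to RabbitMQ, so the whole round trip is on one stack."
  DemoStackTest >> testSearchRoundTrip
      | broker response |
      broker := DemoBroker new.
      broker transport: DemoDirectTransport new.  "short-circuits the queue"
      response := broker search: 'Bonn -> Berlin'.
      self assert: response isSuccess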

Using docker we can start the whole application on a local laptop. This 
way all components are much easier to investigate. We can replace the frontend 
server with a local development instance. I started to do the same for the 
Pharo images but did not finish yet. The idea is to start the same stack as in 
the swarm but replace one image with an actual development image where you can 
use the debugger.

On the swarm itself we configure each microservice to upload a Fuel context 
dump to a central server on exception. From my development image I have a 
simple client to look up exceptions for a given project and version number. 
Clicking on one downloads the dump and opens a debugger locally. I can fix the 
bug and commit with Iceberg. This goes well with our continuous deployment: 
when a commit is done, Jenkins builds the whole product and deploys it 
automatically on the alpha swarm. So, from seeing an error, the only thing to 
do is click on it, solve the problem in the debugger, and commit. Well, in 
most of the cases ;)
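
The upload side could look roughly like this (a sketch: the dump-server URL
is a placeholder and #runService is an invented entry point; the project's
actual handler is not shown in the thread):

  "On any unhandled error, serialize the signaling context with Fuel
  and POST the dump to a central server with Zinc."
  [ self runService ]
      on: Error
      do: [ :ex |
          | file |
          file := 'context-', DateAndTime now asUnixTime asString, '.fuel'.
          FLSerializer serialize: ex signalerContext toFileNamed: file.
          ZnClient new
              url: 'http://dump-server.example/dumps';
              entity: (ZnEntity
                  bytes: file asFileReference binaryReadStream contents);
              post.
          ex pass ]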

What I’m working on, and gave a quick preview of at ESUG, is a client for 
docker swarm. It is another way to close the cycle: using Pharo to manage 
things in a swarm that is built from Pharo images. I did a first prototype of 
how to connect to a particular image in the swarm and start TelePharo on it, 
so you can set breakpoints for certain things and get a debugger in your local 
image from the live swarm. 
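
The TelePharo side of that boils down to something like this (a sketch based
on the TelePharo examples; the address and port are placeholders, and the
swarm-specific plumbing from the prototype is omitted):

  "inside the target image running in the swarm, e.g. at startup:"
  TlpRemoteUIManager registerOnPort: 40423.

  "from the local development image:"
  | remotePharo |
  remotePharo := TlpRemoteIDE connectTo:
      (TCPAddress ip: #[10 0 0 42] port: 40423)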

The last thing we did is to add proper monitoring of metrics within the 
service, so you can spot problems that come from any kind of resource 
shortage. In this case it would be especially useful to connect to such an 
image to do live investigations. Yes, and object-centric debugging/logging 
will help here, Steven/guys ;)

Or to say it in fewer words :) The two problems we had to solve were to remove 
complexity where possible and to have all the mentioned approaches available, 
so that our team can tackle an occurring problem from different angles. If no 
one is blocked in their work, the project does not stagnate. That is not a 
guarantee of success, but it is a requirement.

Hope this is the information you were asking for.

Norbert


Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-30 Thread Pierce Ng
On Wed, Sep 26, 2018 at 07:49:10PM +0200, Norbert Hartl wrote:
>>> And a lot more. This is a coarse-grained overview of the
>>> architecture. I’m happy to answer further questions about this.
>>> [very nice writeup]

Hi Norbert,

Very nice write-up, thanks. 

What persistence mechanism are you using - Gemstone/S, Glorp, Voyage, ...?

Pierce




Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-26 Thread Norbert Hartl



> On 25.09.2018 at 17:44, Sven Van Caekenberghe wrote:
> 
> 
> 
>> On 25 Sep 2018, at 14:39, Norbert Hartl  wrote:
>> 
>> 
>> 
>>> On 25.09.2018 at 12:52, Sven Van Caekenberghe wrote:
>>> 
>>> Wow. Very nice, well done.
>>> 
>>> Any chance on some more technical details, as in what 'connected by a 
>>> message queue for the communication' exactly means? How did you approach 
>>> microservices exactly?
>>> 
>> Sure :)
> 
> Thanks, this is very interesting !
> 
>> The installation spans multiple physical machines. All the machines are 
>> joined to a docker swarm. The installation is reified as either tasks or 
>> services from the point of view of the docker swarm, meaning you instantiate 
>> an arbitrary number of services and docker swarm distributes them among the 
>> physical machines. Usually you don't control which is running where, but you 
>> can. At this point you have spread dozens of Pharo images among multiple 
>> machines, and each of them has an IP address. Furthermore, in docker swarm 
>> you have a reification of a network, meaning that every instance in a 
>> network can see all other instances on this network. Each service can be 
>> reached by its service name in that network. Docker swarm does all the 
>> iptables/firewall and DNS setup for you.
> 
> Are you happy with docker swarm's availability/fail-over behaviour? In other 
> words: when one image/instance goes bad, does it detect that and restore the 
> missing functionality?
> 
Yes, I'm very satisfied. Instances are automatically restarted if they crash, 
and not necessarily on the same machine, which is exactly what I expect. 
Docker has something called a healthcheck: you can have a command executed 
every 20 seconds. I hooked this up to a curl command and to your SUnit REST 
handler; the rest is writing unit tests for server health. If tests fail in 
sequence, the instance is taken out of operation and replaced with a new one. 
The same is done for updating: one instance of the new image is started, and 
if it survives a couple of health checks it is taken operational and an old 
one is taken out. Then the next new one is started, and so on. For simple 
software updates you get zero-downtime deployment. 
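
On the Pharo side the wiring might look roughly like this (a sketch: the test
class is invented, Sven's actual SUnit REST handler may differ, and
ZnResponse class>>serverError: is assumed):

  "Run an SUnit suite per request and answer 200 or 500, so a Docker
  HEALTHCHECK (e.g. curl -f http://localhost:8080/) can probe it."
  | server |
  server := ZnServer startDefaultOn: 8080.
  server onRequestRespond: [ :request |
      | result |
      result := DemoServerHealthTest suite run.
      (result failures isEmpty and: [ result errors isEmpty ])
          ifTrue: [ ZnResponse ok: (ZnEntity text: 'healthy') ]
          ifFalse: [ ZnResponse serverError: 'unhealthy' ] ]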

>> In order to have communication between those runtimes we use RabbitMQ, 
>> because you were so nice to write a driver for it ;) RabbitMQ has support 
>> for a cluster setup, meaning each of the physical machines has a rabbitmq 
>> installation and they know each other. So it does not matter to which 
>> instance you send messages and on which you register for receiving 
>> messages. So every Pharo image connects to the rabbitmq service and opens a 
>> queue for interaction.
> 
> Same question: does RabbitMQ's clustering work well under stress/problems? 
> Syncing all queues between all machines sounds quite heavy (I never tried it, 
> but maybe it just works).

I did not yet have time to really stress test the queue. And you are right, 
copying between nodes might be a lot, but I still have the feeling it is 
better than a single instance, though I have no hard data for that. We also 
use huge payloads, which might have to change if we encounter problems.
> 
>> Each service, like the car sharing one, opens a queue, e.g. 
>> /queue/carSharing, and listens on it. The broker images are stateful, so 
>> they open queues like /queue/mobility-map-afdeg32, where afdeg32 is the 
>> container id of the instance (its hostname in docker). In each request the 
>> queue name to reply to is sent as a header, so we can make sure that the 
>> right image gets the message back. This way we can have sticky sessions, 
>> keeping volatile data in memory for the lifecycle of a session. There is one 
>> worker image which opens a queue /queue/mobility-map where 
>> session-independent requests can be processed. 
> 
> I think I understand ;-)
> 
>> In order to ease development we share code between the broker and the 
>> microservices. Each microservice has a -Common package containing the 
>> classes that build the interface. The classes in it are a kind of data 
>> entity facade. They use NeoJSON to map to and from a stream. The class name 
>> is sent with the message as a header so the remote side knows what to 
>> materialize. The handling is unified for four cases: 
>> 
>> - Request is an inquiry to another microservice
>> - Response returns values for a Request
>> - Error is transferred like a Response but is then signalled on the 
>> receiving side
>> - Notification connects the announcers on the broker and the microservice 
>> side.
> 
> Yes, makes total sense.
> 
>> Asynchronous calls we solved using promises and futures. Each async call to 
>> the queue becomes a promise (that blocks on #value) and is combined into a 
>> future value containing all promises, with support for generating a delta 
>> of all resolved promises. We need this because you issue a search that 
>> takes a while and you want to display results as soon as they are resolved, 
>> not after all have been resolved.

Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-26 Thread Nicolas Cellier
Very nice proof that we can leverage efficient and up-to-date technologies!
Thank you so much for sharing; that's the right way to keep Pharo (and
Smalltalk) alive and kicking. How did the Pharo IDE help in such a context?
(Did you use the debugging facilities extensively?)
What tool is missing?


Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-25 Thread Sven Van Caekenberghe



> On 25 Sep 2018, at 14:39, Norbert Hartl  wrote:
> 
> 
> 
>> On 25.09.2018 at 12:52, Sven Van Caekenberghe wrote:
>> 
>> Wow. Very nice, well done.
>> 
>> Any chance on some more technical details, as in what 'connected by a 
>> message queue for the communication' exactly means? How did you approach 
>> microservices exactly?
>> 
> Sure :)

Thanks, this is very interesting!

> The installation spans multiple physical machines. All the machines are 
> joined to a docker swarm. The installation is reified as either tasks or 
> services from the point of view of the docker swarm, meaning you instantiate 
> an arbitrary number of services and docker swarm distributes them among the 
> physical machines. Usually you don't control which is running where, but you 
> can. At this point you have spread dozens of Pharo images among multiple 
> machines, and each of them has an IP address. Furthermore, in docker swarm 
> you have a reification of a network, meaning that every instance in a 
> network can see all other instances on this network. Each service can be 
> reached by its service name in that network. Docker swarm does all the 
> iptables/firewall and DNS setup for you.

Are you happy with docker swarm's availability/fail-over behaviour? In other 
words: when one image/instance goes bad, does it detect that and restore the 
missing functionality?

> In order to have communication between those runtimes we use RabbitMQ, 
> because you were so nice to write a driver for it ;) RabbitMQ has support 
> for a cluster setup, meaning each of the physical machines has a rabbitmq 
> installation and they know each other. So it does not matter to which 
> instance you send messages and on which you register for receiving 
> messages. So every Pharo image connects to the rabbitmq service and opens a 
> queue for interaction.

Same question: does RabbitMQ's clustering work well under stress/problems? 
Syncing all queues between all machines sounds quite heavy (I never tried it, 
but maybe it just works).

> Each service, like the car sharing one, opens a queue, e.g. 
> /queue/carSharing, and listens on it. The broker images are stateful, so 
> they open queues like /queue/mobility-map-afdeg32, where afdeg32 is the 
> container id of the instance (its hostname in docker). In each request the 
> queue name to reply to is sent as a header, so we can make sure that the 
> right image gets the message back. This way we can have sticky sessions, 
> keeping volatile data in memory for the lifecycle of a session. There is one 
> worker image which opens a queue /queue/mobility-map where 
> session-independent requests can be processed. 

I think I understand ;-)
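
As a driver-agnostic sketch of that convention (the names below are invented;
the actual send goes through the RabbitMQ driver, whose API is not shown in
the thread):

  | request replyQueue headers |
  request := DemoCarSearchRequest new.
  "stateful broker images answer on a per-container queue"
  replyQueue := '/queue/mobility-map-', 'afdeg32'.
  headers := Dictionary new.
  headers at: 'reply-to' put: replyQueue.      "where the answer must go"
  headers at: 'type' put: request class name.  "what to materialize remotely"
  "hand the destination queue, headers and NeoJSON body to the driver here"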

> In order to ease development we share code between the broker and the 
> microservices. Each microservice has a -Common package containing the 
> classes that build the interface. The classes in it are a kind of data 
> entity facade. They use NeoJSON to map to and from a stream. The class name 
> is sent with the message as a header so the remote side knows what to 
> materialize. The handling is unified for four cases: 
> 
> - Request is an inquiry to another microservice
> - Response returns values for a Request
> - Error is transferred like a Response but is then signalled on the 
> receiving side
> - Notification connects the announcers on the broker and the microservice 
> side.

Yes, makes total sense.
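
For illustration, a NeoJSON mapping along these lines (the facade class and
its instance variables are invented; the project's real mappings are not
shown):

  | request json reader |
  request := DemoCarSearchRequest new.
  "serialize the facade to JSON on the sending side"
  json := String streamContents: [ :stream |
      | writer |
      writer := NeoJSONWriter on: stream.
      writer for: DemoCarSearchRequest do: [ :mapping |
          mapping mapInstVars: #(origin destination) ].
      writer nextPut: request ].
  "materialize on the receiving side; the class is named in the header"
  reader := NeoJSONReader on: json readStream.
  reader for: DemoCarSearchRequest do: [ :mapping |
      mapping mapInstVars: #(origin destination) ].
  reader nextAs: DemoCarSearchRequest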

> Asynchronous calls we solved using promises and futures. Each async call to 
> the queue becomes a promise (that blocks on #value) and is combined into a 
> future value containing all promises, with support for generating a delta 
> of all resolved promises. We need this because you issue a search that 
> takes a while and you want to display results as soon as they are resolved, 
> not after all have been resolved.

Which Promise/Future framework/library are you using in Pharo?

You did not go for single-threaded worker images?
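
(The thread does not name the framework.) The blocking-#value behaviour
itself can be hand-rolled in a few lines; a minimal sketch, with
#queryCarSharing standing in for an async call to the queue:

  | semaphore result promise |
  semaphore := Semaphore new.
  [ result := self queryCarSharing.  "runs in its own process"
    semaphore signal ] fork.
  promise := [ semaphore wait. result ].
  "... issue further calls, collect their promises ..."
  promise value  "blocks until this call has resolved"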

> And a lot more. This is a coarse-grained overview of the architecture. I'm 
> happy to answer further questions about this.
> 
> Norbert
> 
>>> On 25 Sep 2018, at 12:20, Norbert Hartl  wrote:
>>> 
>>> As presented at ESUG, here is a brief description of one of our current 
>>> projects. 
>>> 
>>> Mobility Map
>>> ——
>>> 
>>> Mobility Map is a broker for mobility services. It offers multi-modal 
>>> routing search, enabling users to find the best travel options between 
>>> locations. Travel options include car sharing, bikes, trains, busses, etc. 
>>> Rented cars can be offered for ride sharing at booking time, letting other 
>>> people find them and participate in the ride. Single travel options are 
>>> combined into travel plans that can be booked and managed in a very easy way. 
>>> 
>>> For this project the main requirements were scalability, to serve a large 
>>> user base, and flexibility, to add more providers to the broker. The 
>>> application has been realized using web technologies for the frontend and 
>>>