Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-26 Thread Norbert Hartl



> Am 25.09.2018 um 17:44 schrieb Sven Van Caekenberghe :
> 
> 
> 
>> On 25 Sep 2018, at 14:39, Norbert Hartl  wrote:
>> 
>> 
>> 
>>> Am 25.09.2018 um 12:52 schrieb Sven Van Caekenberghe :
>>> 
>>> Wow. Very nice, well done.
>>> 
>>> Any chance on some more technical details, as in what 'connected by a 
>>> message queue for the communication' exactly means ? How did you approach 
>>> micro services exactly ?
>>> 
>> Sure :)
> 
> Thanks, this is very interesting !
> 
>> The installation spans multiple physical machines. All the machines are 
>> joined to a docker swarm. From the docker swarm's point of view the 
>> installation is reified as tasks and services: you instantiate an arbitrary 
>> number of services and docker swarm distributes them among the physical 
>> machines. Usually you don't control which service runs where, but you can. 
>> At this point you have spread dozens of pharo images among multiple 
>> machines, and each of them has an IP address. Furthermore, docker swarm 
>> reifies networks, meaning that every instance in a network can see all 
>> other instances on that network. Each service can be reached by its 
>> service name in that network. Docker swarm does all the iptables/firewall 
>> and DNS setup for you.
> 
> Are you happy with docker swarm's availability/fail-over behaviour ? In other 
> words: does it work when one image/instance goes bad, does it detect and 
> restore the missing functionality ?
> 
Yes, I'm very satisfied. Instances are automatically restarted if they crash, 
and not necessarily on the same machine, which is exactly what I expect. 
Docker has something called Healthcheck: you can have a command executed every 
20 seconds. I hooked this up to a curl command and to your SUnit REST handler. 
The rest is writing unit tests for server health. If health checks fail in 
sequence, the instance is taken out of operation and replaced with a new one. 
The same is done for updating: one instance of the new image is started; if it 
survives a couple of health checks it is taken operational and an old one is 
taken out. Then the next new one is started, and so on. For simple software 
updates this gives you zero-downtime deployment. 
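A minimal sketch of such a health endpoint, assuming Zinc's HTTP server and 
SUnit; the handler and the MobilityMapHealthTest suite are hypothetical names, 
not taken from the project. Docker's HEALTHCHECK then simply curls this URL 
every 20 seconds:

(ZnServer startDefaultOn: 8080)
	onRequestRespond: [ :request |
		| result |
		"run the unit tests that encode server health"
		result := MobilityMapHealthTest suite run.
		(result failures isEmpty and: [ result errors isEmpty ])
			ifTrue: [ ZnResponse ok: (ZnEntity text: 'healthy') ]
			ifFalse: [ ZnResponse serverError: 'health tests failed' ] ]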

>> In order to have communication between those runtimes we use rabbitmq, 
>> because you were so nice to write a driver for it ;) RabbitMQ supports a 
>> cluster setup, meaning each of the physical machines has a rabbitmq 
>> installation and they know about each other. So it does not matter to which 
>> instance you send messages or on which you register for receiving messages. 
>> Every pharo image connects to the rabbitmq service and opens a queue for 
>> interaction.
> 
> Same question: does RabbitMQ's clustering work well under stress/problems ? 
> Syncing all queues between all machines sounds quite heavy (I never tried it, 
> but maybe it just works).

I did not yet have time to really stress test the queue. And you are right, 
copying between nodes might be a lot, but I still have the feeling it is 
better than a single instance, though I have no hard data on that. We also use 
huge payloads, which might have to change if we encounter problems.
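For reference, the connection side of such an image could look roughly like 
this, assuming Sven's Stamp STOMP client against RabbitMQ's STOMP adapter; the 
method names follow Stamp's examples from memory and may be approximate, and 
'rabbitmq' stands for the swarm service name:

| client |
client := StampClient new.
client host: 'rabbitmq'.	"resolved by docker swarm's DNS"
client open.
client sendText: 'route search payload' to: '/queue/carSharing'.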
> 
>> Each service, like car sharing, opens a queue, e.g. /queue/carSharing, and 
>> listens on it. The broker images are stateful, so they open queues like 
>> /queue/mobility-map-afdeg32 where afdeg32 is the container id of the 
>> instance (its hostname in docker). In each request the name of the queue to 
>> reply to is sent as a header, so we can make sure that the right image gets 
>> the message back. This way we can have sticky sessions, keeping volatile 
>> data in memory for the lifecycle of a session. There is one worker image 
>> which opens a queue /queue/mobility-map where session-independent requests 
>> can be processed. 
> 
> I think I understand ;-)
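A small sketch of that reply-queue convention (the header name and wiring are 
hypothetical; under docker the hostname doubles as the container id):

replyQueue := '/queue/mobility-map-' , NetNameResolver localHostName.
headers := Dictionary new.
headers at: 'replyTo' put: replyQueue.
"the car sharing service answers on the queue named in the replyTo
header, so the response reaches the image that holds the session state"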
> 
>> In order to ease development we share code between the broker and the 
>> micro services. Each micro service has a -Common package containing the 
>> classes that build its interface. The classes in there are a kind of data 
>> entity facade. They use NeoJSON to map to and from a stream. The class name 
>> is sent with the message as a header so the remote side knows what to 
>> materialize. The handling is unified for the four cases: 
>> 
>> - Request is an inquiry to another micro service
>> - Response returns values for a Request
>> - Error is transferred like a Response but is then signalled on the 
>> receiving side
>> - Notification connects the announcers on the broker and the micro service 
>> side.
> 
> Yes, makes total sense.
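A minimal sketch of such a -Common entity and the class-name header, assuming 
NeoJSON's mapping API; the class, variable, and header names are made up for 
illustration:

Object subclass: #MmRouteRequest
	instanceVariableNames: 'origin destination'
	classVariableNames: ''
	package: 'MobilityMap-Common'.

MmRouteRequest class >> neoJsonMapping: mapper
	mapper for: self do: [ :mapping | mapping mapInstVars ]

"sending side: serialize the entity, ship its class name as a header"
payload := String streamContents: [ :out |
	(NeoJSONWriter on: out) nextPut: request ].
headers at: 'type' put: request class name.

"receiving side: materialize what the header announces"
targetClass := Smalltalk globals at: (headers at: 'type') asSymbol.
entity := (NeoJSONReader on: payload readStream) nextAs: targetClass.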
> 
>> Asynchronous calls we solved using Promises and Futures. Each async call 
>> to the queue becomes a promise (that blocks on #value) and is combined 
>> into a future value containing all promises, with support for generating a 
>> delta of the already resolved promises. This we need because you issue a 
>> search that takes longer and you want to display results as soon as they 
>> are resolved, not after all have been resolved.
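The thread does not name the promise framework used, but the described 
blocking behaviour of #value can be sketched with a Semaphore (the class is 
hypothetical):

Object subclass: #MmPromise
	instanceVariableNames: 'value semaphore'
	classVariableNames: ''
	package: 'MobilityMap-Common'.

MmPromise >> initialize
	semaphore := Semaphore new

MmPromise >> resolve: anObject
	"called by the queue listener when the reply comes in"
	value := anObject.
	semaphore signal

MmPromise >> value
	"block the calling process until resolved, then let later callers through"
	semaphore wait.
	semaphore signal.
	^ value

A future over a collection of such promises can then answer the subset that is 
already resolved, which is what drives displaying search results incrementally.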

Re: [Pharo-dev] [Pharo-users] [ANN] Success story Mobility Map

2018-09-26 Thread Nicolas Cellier
Very nice proof that we can leverage efficient and up-to-date technologies!
Thank you so much for sharing, that's the right way to keep Pharo (and
Smalltalk) alive and kicking. How did the Pharo IDE help in such a context?
(Did you use the debugging facilities extensively?)
What tool is missing?

Le mar. 25 sept. 2018 à 17:45, Sven Van Caekenberghe  a
écrit :

> > Asynchronous calls we solved using Promises and Futures. Each async call
> to the Q becomes a promise (that blocks on #value) and is combined to a
> future value containing all promises with support to generate a delta of
> all resolved promises. This we need because you issue a search that takes
> longer and you want to display results as soon as they are resolved not
> after all have been resolved.
>
> Which Promise/Future framework/library are you using in Pharo ?
>
> You did not go for single threaded worker images ?
>
> > And a lot more. This is a coarse-grained overview of the architecture.
> I’m happy to answer further questions about this.
> >
> > Norbert
> >
> >>> On 25 Sep 2018, at 12:20, Norbert Hartl  wrote:
> >>>
> >>> As presented on ESUG here is the brief description of one of our
> current projects.
> >>>
> >>> Mobility Map
> >>> ——
> >>>
> >>> Mobility Map is a broker for mobility services. It offers multi-modal
> routing search enabling users to find the best travel options between
> locations. Travel options include car sharing, bikes, trains, buses etc.
> Rented cars can be offered for ride sharing at booking time, letting other
> people 

Re: [Pharo-dev] [Pharo 7.0-dev] Build #1271: 22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary

2018-09-26 Thread Esteban Lorenzano


> On 26 Sep 2018, at 15:24, Petr Fischer via Pharo-dev 
>  wrote:
> 
> Hello, why is test status of this pull request: FAILURE?
> 
> Thanks for clarification, Petr Fischer

It is a misleading message. 
It means the merge was done, but for some reason (usually network problems), 
some tests failed to run.

We are working on fixing that (but you know… time :P).

Esteban




Re: [Pharo-dev] [Pharo 7.0-dev] Build #1271: 22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary

2018-09-26 Thread Marcus Denker
Hi,

It means that the build for this one merge failed… The build has gotten much 
more stable over the last few weeks, but we still sometimes have crashes 
(here it seems to be the text rendering bug).

But this is independent of the merge: the next build will take this into 
account.

> On 26 Sep 2018, at 15:24, Petr Fischer via Pharo-dev 
>  wrote:
> 
> Hello, why is test status of this pull request: FAILURE?
> 
> Thanks for clarification, Petr Fischer



Re: [Pharo-dev] [Pharo 7.0-dev] Build #1271: 22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary

2018-09-26 Thread Petr Fischer via Pharo-dev
Hello, why is test status of this pull request: FAILURE?

Thanks for clarification, Petr Fischer


> There is a new Pharo build available!
>   
> The status of the build #1271 was: FAILURE.
> 
> The Pull Request #1823 was integrated: 
> "22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary"
> Pull request url: https://github.com/pharo-project/pharo/pull/1823
> 
> Issue Url: https://pharo.fogbugz.com/f/cases/22483
> Build Url: 
> https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1271/




[Pharo-dev] [Pharo 7.0-dev] Build #1271: 22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1271 was: FAILURE.

The Pull Request #1823 was integrated: 
"22483-CollectiongroupedBy---preserve-key-order-in-final-grouped-dictionary"
Pull request url: https://github.com/pharo-project/pharo/pull/1823

Issue Url: https://pharo.fogbugz.com/f/cases/22483
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1271/


[Pharo-dev] [Pharo 7.0-dev] Build #1270: 22221-Jump-to-next-keyword

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1270 was: SUCCESS.

The Pull Request #1722 was integrated: "22221-Jump-to-next-keyword"
Pull request url: https://github.com/pharo-project/pharo/pull/1722

Issue Url: https://pharo.fogbugz.com/f/cases/22221
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1270/


[Pharo-dev] [Pharo 7.0-dev] Build #1269: 22481-NetNameResolver-classlocalHostAddress---error-check-with-default

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1269 was: FAILURE.

The Pull Request #1822 was integrated: 
"22481-NetNameResolver-classlocalHostAddress---error-check-with-default"
Pull request url: https://github.com/pharo-project/pharo/pull/1822

Issue Url: https://pharo.fogbugz.com/f/cases/22481
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1269/


[Pharo-dev] IoT hackathon @zweidenker

2018-09-26 Thread Norbert Hartl
Hi,

we (ZWEIDENKER) and RMoD have organized an IoT Hackathon in Cologne on 19th of 
October 2018. The plan is:

- Allex Oliviera will present the tutorial he made on PharoThings [1]. The 
audience will try it in a hands-on session and give feedback to improve it. 
ZWEIDENKER is sponsoring the hardware, so nobody has to bring anything to the 
session
- We are currently collecting ideas for what to do with the learnings from 
Allex, meaning what hardware to build 
- If you have ideas about what to build with real-life things that should be 
connected from Pharo, please tell us. We will compile a list of ideas, and for 
the first two or three we will buy the necessary hardware so we can build them 
on that day

The Hackathon will take place at

ZWEIDENKER GmbH
Luxemburger Str. 72
50674 Köln
Germany

As we have a limited number of seats, you need to register if you are 
interested. Go to http://zweidenker.de/en/iot-hackathon-2018 to register your 
seat for the session. If there are more participants than seats, we will have 
to select by rolling dice. The registration period ends on 5th of October, so 
you will know in advance whether you can come.

See you there,

Norbert on behalf of ZWEIDENKER

[1] https://github.com/pharo-iot/PharoThings 


[Pharo-dev] [Pharo 7.0-dev] Build #1268: 22488-Bootstrapping-from-outside-of-the-repository-broken

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1268 was: SUCCESS.

The Pull Request #1826 was integrated: 
"22488-Bootstrapping-from-outside-of-the-repository-broken"
Pull request url: https://github.com/pharo-project/pharo/pull/1826

Issue Url: https://pharo.fogbugz.com/f/cases/22488
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1268/


[Pharo-dev] [Pharo 7.0-dev] Build #1267: 22478 button presenter does not take into account its color

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1267 was: SUCCESS.

The Pull Request #1820 was integrated: "22478 button presenter does not take 
into account its color"
Pull request url: https://github.com/pharo-project/pharo/pull/1820

Issue Url: https://pharo.fogbugz.com/f/cases/22478
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1267/


[Pharo-dev] [Pharo 7.0-dev] Build #1266: 22440-Compiler-parseExpression-does-not-set-up-compilation-context-for-method-node

2018-09-26 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!
  
The status of the build #1266 was: SUCCESS.

The Pull Request #1825 was integrated: 
"22440-Compiler-parseExpression-does-not-set-up-compilation-context-for-method-node"
Pull request url: https://github.com/pharo-project/pharo/pull/1825

Issue Url: https://pharo.fogbugz.com/f/cases/22440
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/1266/


Re: [Pharo-dev] iceberg PrimitiveFailed allocateExecutablePage - PharoDebug.log

2018-09-26 Thread Petr Fischer via Pharo-dev
> Hi Petr,
> 
> along with Esteban’s request for the error code from 
> allocateExecutablePage, can you also see whether use of Iceberg is 
> successful the second time you launch Pharo? So start up your VirtualBox, 
> try to interact with Iceberg, quit if it fails, relaunch and try again?

Yes, after the tests were green (the second time, after the magic happens), 
Iceberg was also OK - and I mean the whole Iceberg cycle, up to creating the 
Pharo pull request right from the Iceberg UI/image.
I recreated the whole VirtualBox image from scratch and tuned some vbox 
settings - the described problems are gone and I am not able to recreate the 
same situation - so let's forget it, it's not a Pharo thing.

And finally - I see that a lot of work has been done on Iceberg - it's not 
even necessary to leave Pharo for a minute, not even for the final GitHub pull 
request that used to happen in the web browser. Nice.

pf

> Also, in your steps, what do you do to prepare? E.g. do you boot in 
> VirtualBox, return from sleep, or...?
> 
> FYI, allocateExecutablePage uses valloc (IIRC) to get a page from the OS and 
> then uses mprotect to add executable permission to the page before answering 
> the page’s address as the result of the primitive. The callback machinery 
> then uses the page to provide the executable glue code used in implementing 
> callbacks. The address of a code sequence in the page is what is actually 
> handed out to C code as a function pointer. When external code calls this 
> function pointer the code in the sequence invokes a callback into the VM 
> before returning back to C. Consequently it is key that 
> allocateExecutablePage works correctly. If it doesn’t then no callbacks.
> 
> _,,,^..^,,,_ (phone)
> 
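On the image side, a callback is what ultimately rides on that executable 
page. A minimal sketch assuming Pharo's UFFI; the signature is illustrative:

callback := FFICallback
	signature: #(int (int a, int b))
	block: [ :a :b | a + b ].
"the callback's address is what gets handed out to C as a function pointer;
if allocateExecutablePage fails, constructing such a callback is what breaks"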