On Mon, Aug 14, 2017 at 4:42 PM, Tim Mackinnon <tim@testit.works> wrote:

> Hi Guille - just running SpaceTally on my dev image to get a feel for it.
> It turns out that in the minimal images you’ve been creating, it’s not
> loaded (makes sense).
>

Yup, it's loaded afterwards.

All packages are loaded through Metacello baselines. We should start
refactoring and making standalone projects, each one with its own baseline
and its own dependencies described.

I was checking your GitLab project and I probably don't have access: how are
you finally loading packages into the bootstrap image? Can you share that
with us in text? I'd like to improve that situation.


> I’m wondering if there is an easy way to import it in (I guess that
> package should be in the Pharo git tree I cloned to get Fuel loaded right?
> Or is there a separate standalone source?).
>

Yes it is. You can get the package programmatically by evaluating

SpaceTally package name

And furthermore, you can find the baseline that currently loads it by evaluating

package := SpaceTally package name.
BaselineOf subclasses select: [ :e |
e project version packages anySatisfy: [ :p | p name = package ]].
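For instance, the resulting baseline could then drive a Metacello load in a
minimal image - a sketch, where both the baseline name and the repository
URL are illustrative assumptions, not actual locations:

```smalltalk
"Sketch: load a project via its defining baseline into a minimal image.
'MyProject' and the github:// URL below are assumptions for illustration."
Metacello new
    baseline: 'MyProject';
    repository: 'github://example/MyProject/src';
    load
```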


>
> Thanks for all the support, and your email about why the contexts stack up
> is very well received (I will comment over there).
>
> By the way - it looks like Martin Fowler picked up on this announcement -
> so maybe we might get some interest from his mass of followers.
>
> Tim
>
> On 14 Aug 2017, at 10:49, Guillermo Polito <guillermopol...@gmail.com>
> wrote:
>
> Hi Tim,
>
> On Mon, Aug 14, 2017 at 11:41 AM, Tim Mackinnon <tim@testit.works> wrote:
>
>> Hey guys, thanks for your enthusiasm around this - and I cannot stress
>> enough how this was only possible because of the work that has gone into
>> making Pharo (in particular the 64bit image, as well as having a minimal
>> image, and some great blog posts on serialising contexts) as well as the
>> patience from everyone in answering questions and helping me get it all
>> working.
>>
>> I’m still quite keen to get my execution time back down under 800ms and
>> I’d like to actually get back to writing a few skills to automate a few
>> things around my house.
>>
>> To answer Denis’ question -
>>
>> My final footprint is 30.4MB - that’s composed of a 22MB image (with a
>> simple example that pulls in Fuel, ZTimestamp and the S3 library, which
>> depends on XMLParser) and then the VM (from which I removed the obvious DLLs).
>>
>> In my original experiments with a 6.0 minimal image, I did manage to get
>> to a 13.4MB image (which started out at 12MB, and then loaded
>> in STON and had only a simple clock example). I think the sweet spot is
>> around 20MB total footprint, as that seems to get me into the 450ms-900ms
>> range.
>>
>> The 7.0 minimal image now starts out at 15MB, and then I’m not sure why
>> loading Fuel, S3 and XMLParser takes 7MB (it seems big to me - but I’ve not
>> dug into that).
>>
>
> You can do further space analysis using the following expression
>
> SpaceTally new printSpaceAnalysis
>
> You can run that in an eval and check what's taking space. With
> measurements we can iterate and improve :).
>
>
>> I’ve also found (on the back of deserialising the context in my
>> example) that the way we build images leaves 15+ saved stack sessions
>> piled on top of each other. I don’t yet know the size/speed implications
>> of these - but we need a better way of folding executions when we snapshot
>> headless images. I’m also not clear whether there are any other startup
>> tasks that take precious time (this also has implications for our fat
>> development images, as they take much longer to appear than they really
>> should).
>>
>
> I'm working on this as I'm writing this mail ;)
>
> https://pharo.fogbugz.com/f/cases/20309
> https://github.com/pharo-project/pharo/pull/196
>
> I'll write down the implications further in a different thread.
>
>
>> I’ll be exploring some of these size/speed trade-offs in follow-on
>> messages.
>>
>> But once again, a big thanks - I’ve not enjoyed programming like this for
>> ages.
>>
>> Tim
>>
>> On 12 Aug 2017, at 16:26, Ben Coman <b...@openinworld.com> wrote:
>>
>> hi Tim,
>>
>> That is.....      AWESOME!
>>
>> Very nice delivery - it flowed well with great narration.
>>
>> I loved @2:17 "this is the interesting piece, because PharoLambda has
>> serialized the execution context of its application and saved it into [my
>> S3 bucket] ... [then on the local machine] rematerializes a debugger [on
>> that context]."
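>> The local side presumably reduces to something like this - a sketch;
>> the Fuel convenience API is real, the file name is an assumption:

```smalltalk
"Sketch: rematerialize the saved execution context for local inspection.
'error.fuel' is an assumed name; in practice the archive would be
downloaded from the S3 bucket and unzipped first."
| context |
context := FLMaterializer materializeFromFileNamed: 'error.fuel'.
context inspect
```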
>>
>> There is a clarity in your video presentation that really may intrigue
>> outsiders. As a community we should push this on the usual hacker forums -
>> ycombinator could be a good starting point (but I'm locked out of my
>> account there).
>> An enticing title could be...
>> "Debugging Lambdas by re-materializing saved execution contexts on your
>> local machine."
>>
>> cheers -ben
>>
>> On Fri, Aug 11, 2017 at 3:37 PM, Denis Kudriashov <dionisi...@gmail.com>
>> wrote:
>>
>>> This is cool Tim.
>>>
>>> So what image size did you deploy in the end?
>>>
>>> 2017-08-10 15:47 GMT+02:00 Tim Mackinnon <tim@testit.works>:
>>>
>>>> I just wanted to thank everyone for their help in getting my pet
>>>> project further along, so that now I can announce that PharoLambda is now
>>>> working with the V7 minimal image and also supports post mortem debugging
>>>> by saving a zipped fuel context onto S3.
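>>>> Conceptually the capture side is small - a sketch, with the handler
>>>> placement, selector names and file name all as assumptions:

```smalltalk
"Sketch: on failure, serialize the signaling context's stack with Fuel
so it can be debugged post mortem. FLSerializer is Fuel's convenience
entry point; #runHandler and the file name are illustrative."
[ self runHandler ]
    on: Error
    do: [ :ex |
        FLSerializer
            serialize: ex signalerContext copyStack
            toFileNamed: 'error.fuel'.
        ex return: nil ]
```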
>>>>
>>>> This latter item is particularly satisfying, as at a recent serverless
>>>> conference (JeffConf) there was a panel where poor development tooling on
>>>> serverless platforms was highlighted as a real problem.
>>>>
>>>> In our community we’ve had these kinds of tools at our fingertips for
>>>> ages - but I don’t think the wider development community has really
>>>> noticed. Debugging something short lived like a Lambda execution is quite
>>>> startling, as the current answer is “add more logging”, and we all know
>>>> that sucks. To this end, I’ve created a little screencast showing this in
>>>> action - and it was pretty cool because it was a real example I encountered
>>>> when I got everything working and was trying my test application out.
>>>>
>>>> I’ve also put a bit of work into tuning the excellent GitLab CI tools,
>>>> so that I can cache many of the artefacts used between different build runs
>>>> (this might also be of interest to others using CI systems).
>>>>
>>>> The Gitlab project is on: https://gitlab.com/macta/PharoLambda
>>>> And the screencast: https://www.youtube.com/watch?v=bNNCT1hLA3E
>>>>
>>>> Tim
>>>>
>>>>
>>>> On 15 Jul 2017, at 00:39, Tim Mackinnon <tim@testit.works> wrote:
>>>>
>>>> Hi - I’ve been playing around with getting Pharo to run well on AWS
>>>> Lambda. It’s early days, but I thought it might be interesting to share what
>>>> I’ve learned so far.
>>>>
>>>> Usage examples and code at https://gitlab.com/macta/PharoLambda
>>>>
>>>> With help from many of the folks here, I’ve been able to get a simple
>>>> example to run in 500ms-1200ms with a minimal Pharo 6 image. You can easily
>>>> try it out yourself. This seems slightly better than what the GoLang folks
>>>> have been able to do.
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>
>>
>>
>
>
> --
>
> Guille Polito
>
> Research Engineer
> French National Center for Scientific Research - http://www.cnrs.fr
>
>
> *Web:* http://guillep.github.io
> *Phone:* +33 06 52 70 66 13
>
>
>

