Peter,

The point is that “cleaning the image”:

 - is not reproducible. A cleanup script may work with today’s image but may
not work with tomorrow’s (see the sketch after this list).
 - is not only about cleaning. Imagine you observe that the tables holding the
Unicode mappings are wrong, and there is no code to reproduce them.
 - does not scale once it becomes circular. For example, you want to refactor
a part of Morphic, but you are using Morphic while the refactoring is
performed. We do that all the time, but at the cost of splitting and staging
changes that should be atomic, which adds more complexity to the process.
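
To make the first point concrete, a cleanup script is typically a sequence of
steps hard-coded against the contents of today’s image, something like the
following sketch (the class name is made up for illustration):

	"Remove a tool that happens to be unused in the current image."
	(Smalltalk globals at: #ObsoleteToolClass ifAbsent: [ nil ])
		ifNotNil: [ :class | class removeFromSystem ].
	Smalltalk garbageCollect.

The moment tomorrow’s image renames, splits, or already removes that class,
the script silently stops doing what it was written for.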


Craig,

Yes, Spoon was indeed an inspiration. However, some notes on top of your 
comments:

- Oz/Espell’s existing implementation works by introducing two object memories
in the same VM, yes. However, the goal was to abstract Espell’s users away from
that detail and to allow other implementations, such as one backed by a remote
image. For example, I wrote a prototype of Espell that manipulated an image
loaded in the VM simulator. The result was a high-level API, provided by
mirrors, to manipulate an image without requiring any VM change (a sketch of
such a mirror API follows these points). However, it was also much slower, and
it required adapting the VM simulator so it could be initialized with an empty
object memory and so primitives could be called from outside the simulator
while keeping the stack coherent. I dropped this prototype because it required
a lot of engineering effort, and at the same time Spur was advancing really
quickly, so I did not want to spend time on code that would soon be deprecated.
I could, however, check if I can recover the code.

- Also, I would not say that distributed object memories "are simpler" than
co-existing object memories, at least not without also exploring their flaws
:). A distributed object memory forces you to include a distribution layer in
every peer that may receive code (at least sockets, a class builder, and a
compiler or serializer plus their dependencies). And we wanted to go further:
we wanted to be able to easily manipulate an image that does not have those
features.

- The main point was to use the same infrastructure to build an image in an
explicit manner (the bootstrap) as well as in an implicit, on-demand manner
(Tornado). And I believe that the bootstrap is a more transparent process,
where we can control what happens.
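
To give an idea of what the mirror-based manipulation looks like, here is a
minimal sketch. The class and message names are illustrative assumptions, not
the actual Espell API:

	"Open a mirror-based view on a target object memory and modify it,
	 without the target image running a compiler or any tools itself."
	objectSpace := EspellObjectSpace onImageNamed: 'target.image'.
	pointClass := objectSpace classNamed: #Point.
	pointClass compile: 'double ^ self * 2'.
	point := pointClass basicNew.
	point instVarNamed: #x put: (objectSpace fromLocalObject: 3).

Because everything goes through the object space and its mirrors, the same
front end can sit on top of two co-existing object memories, an image loaded
in the simulator, or a remote image.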

Guille

> On 21 Jan 2016, at 2:34 p.m., Craig Latta <[email protected]> wrote:
> 
> 
> Hi Christophe--
> 
>>> Another approach is to modify the virtual machine so that it marks
>>> methods as they are run, and modify the garbage collector to reclaim
>>> methods that haven't been run. Then you can create systems that
>>> consist of only what is necessary to run unit tests, effectively
>>> imprinting the unit tests. You can interact with the target system
>>> from a completely independent one over a remote messaging network
>>> connection, so your unit tests need not include graphics support,
>>> etc.[1] This seems much simpler to me than making a virtual machine
>>> that can run multiple object memories, and distributed object
>>> memories have several other important uses too.
>> 
>> Guillermo also implemented this kind of approach with Tornado.
> 
>     Right, I read that in his PhD thesis. That's why I mentioned my
> earlier work, when others claimed there was a precedent being set.
> 
>> But it is not so easy...
> 
>     Sure, I learned that first-hand when I did it. One important thing
> I learned is that it's easier and more accurate to install code as a
> side-effect of actually running it live, and not through analysis. It's
> also useful to have the option of faulting code in when it's missing
> from a running target, or pushing it in from a running source.
> 
> 
> -C
> 
> [1] http://netjam.org/context
> 
> --
> Craig Latta
> netjam.org
> +31   6 2757 7177 (SMS ok)
> + 1 415  287 3547 (no SMS)
> 
> 

