Hi Guille--

> The point is that “cleaning the image” is not reproducible. A cleanup
> script may work with today’s image but may not work with tomorrow’s.

     I consider this sort of cleaning something that gets done once per
preexisting Smalltalk implementation (Pharo, VisualWorks, etc.). After
it's done, I would build every release artifact up from the identified
essentials (minimal object memory). You need a good module system for
that, and I think one based on live remote messaging instead of source
code stored in files would be best (easiest to build and use, and most
accurate). With the knowledge of those essentials, I would create new
Smalltalk implementations from a system tracer informed by them.

     Cleaning isn't something I would expect to do routinely as an app
developer. It's primarily a research activity.

> [Cleaning] does not scale when it starts to become circular. For
> example, you want to refactor a part of morphic => but you are using
> morphic while the refactor is performed. We do that all the time, but
> at the cost of splitting and staging changes that need to be
> atomic, which adds more complexity to the process.

     Of course, this is another argument for using remote tools.

> ...I wrote a prototype of Espell that manipulated an image loaded in
> the simulator. The result is that you had a high-level API provided
> by mirrors to manipulate an image without requiring a VM change.

     I think that's definitely something you want to be able to do,
since there are some changes that can only be made when time is stopped
for the target image. And the simulator is simply the most pleasant tool
for it. My implementation of this was especially useful for making
complementary documentation, like the directed-graph movies at
[1]. True, the simulator is slow, and making movie frames with it is
glacial. :)

     We should never be afraid to change the virtual machine, though, in
the absence of any other constraint. :)

> A distributed object memory forces you to include in every peer that
> may receive code a distribution layer (at least sockets, class
> builder, compiler or serializer + dependencies). And we wanted to go
> further. We wanted to be able to easily manipulate an image that does
> not have those features.

     Right, I use the simulator stuff when I care about leaving those
things out (e.g. [2]). But all of the systems I actually want to deploy
do have those things, to the point that I do consider them to be as
fundamental as anything else. And they don't include the class builder
or compiler; the remote messaging protocol I wrote gets by with just
sockets, a few collection classes, and the reflection primitives that
the virtual machine needs anyway. There is never a need to compile
source code.
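
     Just to illustrate how far sockets plus reflection can go, here's a
hypothetical Python sketch (not my actual protocol): the receiver
dispatches an incoming selector reflectively, so neither a compiler nor
a class builder ever has to travel with the code.

```python
# Hypothetical sketch of compiler-free remote messaging: only a
# selector and serialized arguments cross the wire; the receiver
# dispatches them with reflection (getattr), never compiling source.
import json

class Receiver:
    """Any ordinary object can answer remote messages."""
    def add(self, a, b):
        return a + b

def handle(obj, wire_bytes):
    # In a real system, wire_bytes would arrive over a socket.
    msg = json.loads(wire_bytes)            # {"selector": ..., "args": [...]}
    method = getattr(obj, msg["selector"])  # reflective lookup, no compiler
    return method(*msg["args"])

result = handle(Receiver(), b'{"selector": "add", "args": [2, 3]}')
print(result)  # -> 5
```

The point of the sketch is that dispatch needs only the reflective
lookup the virtual machine provides anyway, plus a serializer for the
arguments.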

> The main point was to use the same infrastructure to build an image
> in an explicit manner (the bootstrap) as well as in an implicit
> on-demand manner (tornado). And I believe that the bootstrap is a
> more transparent process where we can control what happens.

     It seems to me that push imprinting, where methods are transferred
from one system to another as a side-effect of running them, is just as
transparent and controllable, and gives you tools that you want anyway
for other purposes.
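
     For the sake of illustration, push imprinting might be sketched
like this (a hypothetical Python analogue, not the real Smalltalk
mechanism): the target system starts empty and acquires exactly the
methods that actually run.

```python
# Hypothetical sketch of push imprinting: running a method through the
# imprinter copies it from the source system into the target system as
# a side-effect, so the target ends up with only what it actually used.

class Imprinter:
    def __init__(self, source, target):
        self.source = source  # full system: name -> callable
        self.target = target  # grows only with methods that run

    def invoke(self, name, *args):
        if name not in self.target:
            # Side-effect of running: imprint the method on the target.
            self.target[name] = self.source[name]
        return self.target[name](*args)

source = {"double": lambda x: x * 2, "unused": lambda: None}
imp = Imprinter(source, {})
imp.invoke("double", 21)   # imprints and runs "double"
# "unused" was never invoked, so it never reaches the target system.
```

The transferred set is thus defined operationally, by what the program
does, rather than by a declared manifest.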


     thanks,

-C

[1] http://netjam.org/context/viz
[2] http://netjam.org/context/smallest

--
Craig Latta
netjam.org
+31 6 2757 7177 (SMS ok)
+1 415 287 3547 (no SMS)


