On 21-Apr-09, at 12:07 PM, Stéphane Ducasse wrote:

>
>
> Begin forwarded message:
>
>> From: stepken <[email protected]>
>> Date: April 21, 2009 12:59:15 PM CEDT
>> To: [email protected]
>> Subject: Garbage Avoidance Techniques  ... in Squeak?
>>
>> Hi!
>>
>> Just reviewing some old ideas
>>
>> Frank Lesser (programming the Dolphin NG VM) in our last session
>> mentioned "garbage avoidance":
>>
>> "Garbage not produced doesn't have to be garbage collected!"

Ah, well that would be slide 80 at 
http://www.smalltalkconsulting.com/papers/GCPaper/GCTalk%202001.htm

That is hard. I recall last century talking in the lobby of the NY  
Marriott with the late David N Smith when someone
came up with a GC issue. Eliot got involved, and the issue of the 48MB  
spike came down to something like

1 to: 14000000 do: [:i | fi := foo + fum].

It turned out an intermediate Float object was being created for each  
cycle through the loop; those would get tenured early,
bloating OldSpace by 48MB, until finally a full GC collected all  
those dead floats.
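The fix is the classic garbage-avoidance move: hoist the
loop-invariant computation out of the block, so one Float is
allocated instead of fourteen million. A sketch (foo, fum and fi are
stand-ins for whatever the real code held):

    | foo fum fi |
    foo := 3.0. fum := 4.0.
    "Before: allocates a fresh intermediate Float every iteration."
    1 to: 14000000 do: [:i | fi := foo + fum].
    "After: foo + fum doesn't depend on i, so compute it once."
    fi := foo + fum.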


>>
>>
>> Therefore the Debugger has to be adapted to be able to debug
>> garbage. Sounds strange, but it could be very informative even for a
>> programmer ;-)
>>

Well, you could build a VM that tracks what objects get *removed* when  
the compacting cycle runs in the GC logic.
After the mark trace in the compacting cycle we skip over the
non-referenced objects, since we know they are dead; there is no
reason you couldn't collect data on what they were. You can't refer to  
them later, since they are about to be destroyed.

Or you could subclass your entire domain model (etc.) off a weak  
object and then collect info at finalization time; very expensive...
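A rough sketch of that approach in Squeak, assuming your domain
objects go through a class like this (the class name and the DeathLog
variable are made up for illustration; WeakRegistry and #finalize are
the real Squeak hooks):

    Object subclass: #TrackedDomainObject
        instanceVariableNames: ''
        classVariableNames: 'DeathLog'
        category: 'GC-Debugging'.

    TrackedDomainObject class >> new
        "Register every instance so it gets #finalize when it dies."
        DeathLog ifNil: [DeathLog := OrderedCollection new].
        ^ WeakRegistry default add: super new

    TrackedDomainObject >> finalize
        "Runs after the collector decides the object is dead; record
        whatever you want to know about the garbage here."
        DeathLog add: self class name

Every finalization pass has to walk the registry, which is where the
expense comes from.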


>> I think that this - in the long term - would result in tremendous
>> speedups over polymorphic inline caching and other "tricks", and the
>> object model has to be changed to be able to just throw away
>> garbage without explicitly collecting it.

That technically would be reference counting; I refer you to the  
history books to understand how hard it is: the issues with cycles, the cost.
Mind you, the cost today could be nearly free, because desktop CPUs would  
hide the cost of the object reference increment/decrement
via instruction rescheduling and multicore processing. The hard part,  
cycles, is still an issue.

--
========================================================================
John M. McIntosh <[email protected]>
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
========================================================================




_______________________________________________
Pharo-project mailing list
[email protected]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
