On 8/17/2010 11:20 AM, Standish P wrote:
On Aug 17, 1:17 am, torb...@diku.dk (Torben Ægidius Mogensen) wrote:
Standish P<stnd...@gmail.com>  writes:
[Q] How far can a stack [LIFO] go toward automatic garbage collection
and preventing memory leaks ?

Because a stack has push and pop, it can allocate and release memory
in LIFO order. We envisage an exogenous stack in which malloc() is
associated with each push and free() with each pop.
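A minimal sketch of that idea in Python (the class and method names here are hypothetical, and bytearray allocation stands in for malloc/free, since Python manages raw memory itself): each push allocates a block and each pop releases the most recently allocated one, so deallocation order exactly mirrors allocation order.

```python
class ExoStack:
    """Toy LIFO allocator: push allocates, pop frees (sketch only)."""

    def __init__(self):
        self._blocks = []              # LIFO record of live allocations

    def push(self, nbytes):
        block = bytearray(nbytes)      # stands in for malloc(nbytes)
        self._blocks.append(block)
        return block

    def pop(self):
        # Dropping the last reference stands in for free().
        self._blocks.pop()

s = ExoStack()
a = s.push(64)                         # allocate
b = s.push(128)                        # allocate
s.pop()                                # frees b's block
s.pop()                                # frees a's block
```

The LIFO discipline is what makes leaks impossible here: every allocation is paired with exactly one later pop, enforced by the stack order.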

See

How many programmers have applied the ideas of these papers in their
programming practice ? I paste the abstract for convenience:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.5498

Abstract:
This paper describes a memory management discipline for programs that
perform dynamic memory allocation and de-allocation. At runtime, all
values are put into regions. The store consists of a stack of regions.
All points of region allocation and deallocation are inferred
automatically, using a type and effect based program analysis. The
scheme does not assume the presence of a garbage collector.
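To make the abstract concrete, here is a toy sketch of the region discipline in Python (class and method names are hypothetical): the store is a stack of regions, every value is placed into some region, and popping a region frees all of its values at once. In the paper the placement and the push/pop points are inferred by the compiler; here they are written by hand.

```python
class RegionStack:
    """Toy store-as-stack-of-regions, per the abstract (sketch only)."""

    def __init__(self):
        self._regions = []

    def push_region(self):
        self._regions.append([])       # new, empty region on top

    def put(self, value, depth=-1):
        # The type-and-effect analysis would pick the region; we
        # default to the topmost one.
        self._regions[depth].append(value)
        return value

    def pop_region(self):
        # Deallocate the whole region: all its values die together,
        # with no per-object garbage collection.
        self._regions.pop()

store = RegionStack()
store.push_region()                    # outer region
store.put("long-lived")
store.push_region()                    # inner region
store.put([1, 2, 3])                   # dies when inner region is popped
store.pop_region()                     # bulk-frees the inner region
```

The win is that deallocation is a single pop, regardless of how many values the region holds.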

    That's actually an interesting idea.  If you can figure out object
lifetimes at compile time, allocation can be made far more efficient.

    One of the basic questions is whether a function will ever "keep"
an object.  That is, will the function ever keep a reference to an
object that outlives the return from the function?  In many cases,
one can easily determine at compile time that a function will never
keep a passed object.  In such a case, you don't have to do reference
count updates on the object.  Most math functions have this property,
even vector and matrix math functions - they have no persistent state.

    Python could use a bit of this.  If a function argument can
be identified as "non-kept", then the function doesn't need to
do reference count updates on it.  If a local variable in
a function is used only by "non-keep" functions and operations,
it can be created on the stack and released cheaply at block exit.
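The "non-keep" property above can be observed directly in CPython today, even though the compiler doesn't yet exploit it. In this hedged illustration (function names are my own), dot() never stores a reference to its arguments, so their refcount is the same after the call; keeper() stashes a reference that outlives the call, so its argument's refcount goes up:

```python
import sys

def dot(xs, ys):
    # Pure math: no persistent state, no kept references.
    return sum(x * y for x, y in zip(xs, ys))

_kept = []

def keeper(xs):
    _kept.append(xs)       # keeps a reference that outlives the call
    return sum(xs)

v = [1.0, 2.0, 3.0]
before = sys.getrefcount(v)

dot(v, v)                  # "non-keep": refcount unchanged afterwards
assert sys.getrefcount(v) == before

keeper(v)                  # "keep": one extra reference survives
assert sys.getrefcount(v) == before + 1
```

A compiler that proved the dot() case at compile time could skip the per-call INCREF/DECREF traffic entirely, which is exactly the optimization being proposed.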

    One can go much further in lifetime inference than this, as
the papers demonstrate.  There's a big win in the simple optimization
of identifying "non-keep" parameters, especially in mathematical work
where they're very common.  It's not clear that getting fancier than
that is a win.

    Does Shed Skin have this optimization?  It should.

                                        John Nagle
--
http://mail.python.org/mailman/listinfo/python-list
