> The cyclic GC kicks in when memory is running low.
When what memory is running low? Its default pool? System memory?
Justin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Hello,
I've been doing some tests on removing the GIL, and it's becoming clear that
some basic changes to the garbage collector may be needed in order for this
to happen efficiently. Reference counting as it stands today is not very
scalable.
I've been looking into a few options, and I'm leaning
On 9/18/07, Krishna Sankar <[EMAIL PROTECTED]> wrote:
>
> Folks,
> As a follow-up to the py3k discussions started by Bruce and Guido, I
> pinged Brett and he suggested I submit an exploratory proposal. Would
> appreciate insights, wisdom, the good, the bad and the ugly.
I am currently working
I'm not sure I understand entirely what you're saying, but it sounds like
you want multiple reference counts. A reference count per thread might not
be a bad idea, but I can't think of how it would work without locks. If
every object has an array of reference counts, then the GC would need to
lock
> Your idea can be combined with the maxint/2 initial refcount for
> non-disposable objects, which should about eliminate thread-count updates
> for them.
> --
>
I don't really like the maxint/2 idea because it requires us to
differentiate between globals and everything else. Plus, it's a hack. I'd
On 9/14/07, Adam Olsen <[EMAIL PROTECTED]> wrote:
> > Could be worth a try. A first step might be to just implement
> > the atomic refcounting, and run that single-threaded to see
> > if it has terribly bad effects on performance.
>
> I've done this experiment. It was about a 12% slowdown on my box. Later,
On 9/13/07, Greg Ewing <[EMAIL PROTECTED]> wrote:
>
> Jason Orendorff wrote:
> > The clever bit is that SpiderMonkey's per-object
> > locking does *not* require a context switch or even an atomic
> > instruction, in the usual case where an object is *not* shared among
> > threads.
>
> How does it t
On 9/13/07, Adam Olsen <[EMAIL PROTECTED]> wrote:
>
>
> Basically though, atomic incref/decref won't work. Once you've got
> two threads modifying the same location the costs skyrocket. Even
> without being properly atomic you'll get the same slowdown on x86
> (whose cache coherency is fairly strict
On 9/13/07, Jason Orendorff <[EMAIL PROTECTED]> wrote:
>
> On 9/13/07, Justin Tulloss <[EMAIL PROTECTED]> wrote:
> > 1. Use message passing and transactions. [...]
> > 2. Do it perl style. [...]
> > 3. Come up with an elegant way of handling multiple python
> What do you think?
>
I'm going to have to agree with Martin here, although I'm not sure I
understand what you're saying entirely. Perhaps if you explained where the
benefits of this approach come from, it would clear up what you're thinking.
After a few days of thought, I'm starting to realize
On 9/11/07, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>
> > 1. Some global interpreter state/modules are protected (where are these
> > globals at?)
>
> It's the interpreter and thread state itself (pystate.h), for the thread
> state, also _PyThreadState_Current. Then there is the GC state, in
>
Hi,
I had a whole long email about exactly what I was doing, but I think I'll
get to the point instead. I'm trying to implement a python concurrency API
and would like to use CPython to do it. To do that, I would like to remove
the GIL.
So, since I'm new to interpreter hacking, some help would be