[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Sophist


Sophist  added the comment:

> https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfzZsDFosB5e6BfnXLlejd9l0/edit

1. The steering committee hasn't given the go-ahead for this yet, and we have 
no idea when such a decision will be made, nor whether the decision will be 
yes or no.

2. Even after the decision is made "Removing the GIL will be a large, 
multi-year undertaking with increased risk for introducing bugs and 
regressions."

3. The promised performance gains are small compared with what the existing 
project that Guido is running hopes to achieve. It is unclear whether the 
no-gil implementation would affect those gains, and if so, whether even a 
small reduction in the large planned performance gains would more than wipe 
out the modest gains promised by the no-gil project.

4. Given that "Removing the GIL will require making trade-offs and taking on 
substantial risk", it is easy to see that this project could run into an 
unacceptable risk or an insurmountable technical problem at any point and thus 
fall by the wayside.

Taking all of the above points together, I think that there is still merit in 
considering the pros and cons of a GIL scheduler.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Guido van Rossum


Guido van Rossum  added the comment:

Start here: 
https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfzZsDFosB5e6BfnXLlejd9l0/edit

AFAICT the SC hasn't made up their minds about this.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Sophist


Sophist  added the comment:

> I think that we should focus our efforts on removing the GIL, now that we 
> have a feasible solution for doing so without breaking anything

Is this really a thing? Something that is definitely happening in a reasonable 
timescale?

Or are there some big compatibility issues likely to rear up and, at best, 
create delays, or at worst derail it completely?

Can someone give me some links about this please?

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Omer Katz


Omer Katz  added the comment:

I think that we should focus our efforts on removing the GIL, now that we
have a feasible solution for doing so without breaking anything (hopefully).
However, the removal of the GIL is still far from complete and will need to
be rebased onto the latest Python version before it can be merged.

This issue would probably hurt Celery, since some users run it with a thread
pool and it uses a few threads itself, but fixing it here seems like too much
effort; if we were to invest a lot of effort, I'd focus on removing the
problem entirely.

Instead, I suggest we document this with a warning in the relevant place so
that people know to avoid or work around the problem.

On Mon, Mar 21, 2022, 20:32 Guido van Rossum  wrote:

>
> Change by Guido van Rossum :
>
>
> --
> nosy: +gvanrossum
>
> ___
> Python tracker 
> 
> ___
>

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Guido van Rossum


Change by Guido van Rossum :


--
nosy: +gvanrossum

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2022-03-21 Thread Sophist


Sophist  added the comment:

Please see also https://github.com/faster-cpython/ideas/discussions/328 for a 
proposal for a simple GIL scheduler (much simpler than BFS) that only decides 
which of the runnable OS threads waiting for GIL ownership gets it next, and 
leaves the scheduling of the threads themselves to the OS scheduler.
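
A minimal, hypothetical Python sketch of the general idea (not taken from the
linked proposal; the FifoGIL class and all names here are invented for
illustration): the lock is handed to waiting threads in strict arrival order,
while deciding when each thread actually runs on a core is left entirely to
the OS scheduler.

import threading
from collections import deque

class FifoGIL:
    """Toy GIL-like lock with FIFO hand-off (illustrative only)."""

    def __init__(self):
        self._mutex = threading.Lock()   # protects _holder and _waiters
        self._holder = None              # thread id, "handoff", or None
        self._waiters = deque()          # one Event per waiting thread

    def acquire(self):
        ticket = threading.Event()
        with self._mutex:
            if self._holder is None:         # free: take it immediately
                self._holder = threading.get_ident()
                return
            self._waiters.append(ticket)     # otherwise queue in arrival order
        ticket.wait()                        # woken exactly once by release()
        with self._mutex:
            self._holder = threading.get_ident()

    def release(self):
        with self._mutex:
            if self._waiters:
                self._holder = "handoff"         # reserved for the woken thread
                self._waiters.popleft().set()    # FIFO hand-off
            else:
                self._holder = None

if __name__ == "__main__":
    gil = FifoGIL()

    def worker(n):
        for _ in range(3):
            gil.acquire()
            print(f"thread {n} holds the toy GIL")
            gil.release()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()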

--
nosy: +Sophist

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-12-05 Thread Mark Dickinson


Change by Mark Dickinson :


--
nosy: +mark.dickinson

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-18 Thread STINNER Victor


Change by STINNER Victor :


--
nosy:  -vstinner

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-16 Thread Charles-François Natali

Change by Charles-François Natali :


--
nosy:  -neologix

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-15 Thread Maarten Breddels


Maarten Breddels  added the comment:

In case someone finds it useful, I've written a blog post on how to visualize 
the GIL:
https://www.maartenbreddels.com/perf/jupyter/python/tracing/gil/2021/01/14/Tracing-the-Python-GIL.html

In the comments (or at 
https://github.com/maartenbreddels/fastblog/issues/3#issuecomment-760891430 )
I looked specifically at the iotest.py example (no solutions though).

--
nosy: +maartenbreddels

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-14 Thread Stuart Axon


Stuart Axon  added the comment:

Catching up on the comments on this, it seems like nobody has enough certainty 
to say it will work well enough.

In Linux, the scheduler is pluggable, which lets other, non-default schedulers 
be shipped and tried in the real world.

- See schedutil: introduced in Linux 4.7 in 2016, made the default for some 
architectures in 2020, and in 2021 still not the default everywhere.

Similarly, I think this needs more testing than it will get living here as a 
bug. If, as in Linux, the scheduler were pluggable, it could be shipped and 
enabled by real users who were brave enough, and data could be collected.

--
nosy: +stuaxo

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-02 Thread Gregory P. Smith


Change by Gregory P. Smith :


--
resolution:  -> wont fix

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2021-01-01 Thread David Beazley


Change by David Beazley :


--
stage:  -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-08 Thread STINNER Victor


STINNER Victor  added the comment:

If someone wants to close this issue, I suggest writing a short section in the 
Python documentation giving some highlights on the available options and 
strategies to maximize performance, listing the drawbacks of each method. 
Examples:

* Multiple threads (threading): limited by the GIL
* Multiple processes (concurrent.futures, multiprocessing, distributed 
application): limited by shared data
* Concurrent programming (asyncio): limited to 1 thread

These architectures are not exclusive. asyncio can use multiple threads and be 
distributed across multiple processes.

It would be bad to go too deep into the technical details, but I think that we 
can describe some advantages and drawbacks which are common to all platforms.
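
As a rough illustration of the three options above (the function names and
the toy workload are invented for this sketch, not taken from any existing
documentation), the same CPU-bound job can be expressed with each
architecture:

import asyncio
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_task(n: int) -> int:
    # Pure-Python CPU-bound work: with threads this stays limited by the GIL
    return sum(i * i for i in range(n))

def run_with_threads(jobs):
    # Multiple threads: good for I/O-bound work, no speed-up for this task
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(cpu_task, jobs))

def run_with_processes(jobs):
    # Multiple processes: sidesteps the GIL, but arguments/results are copied
    with ProcessPoolExecutor(max_workers=4) as pool:
        return list(pool.map(cpu_task, jobs))

async def run_with_asyncio(jobs):
    # asyncio runs in one thread; CPU work must be pushed to an executor
    loop = asyncio.get_running_loop()
    return await asyncio.gather(
        *(loop.run_in_executor(None, cpu_task, n) for n in jobs))

if __name__ == "__main__":
    jobs = [200_000] * 8
    print(run_with_threads(jobs)[0])
    print(run_with_processes(jobs)[0])
    print(asyncio.run(run_with_asyncio(jobs))[0])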

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-07 Thread Dima Tisnek

Dima Tisnek  added the comment:

My 2c as Python user:

Back in 2010, I used multithreading extensively, both for concurrency and 
performance. Others used multiprocessing or just shelled out. People talked 
about using **the other** core, or sometimes the other socket on a server.

Now in 2020, I'm using asyncio exclusively. Some colleagues occasionally still 
shell out. Nobody talks about using all the cores on a single machine; rather, 
we'd spin up dozens of identical containers, randomly distributed across N 
machines, with synchronisation offloaded to some database (e.g. atomic ops in 
redis; transactions in sql).

In my imagination, I see future Python as single-threaded (from user's point of 
view, that is without multithreading api), that features speculative 
out-of-order async task execution (using hardware threads, maybe pinned) that's 
invisible to the user.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-07 Thread Gregory P. Smith


Gregory P. Smith  added the comment:

It's a known issue that has been outlined very well, and it still comes up from 
time to time in real-world applications, which tend to find this issue and 
Dave's presentation, work around it in whatever way is possible for their 
system, and move on with life.

Keeping it open even if nobody is actively working on it makes sense to me as 
it is still a known issue that could be resolved should someone have the 
motivation to complete the work.

--
versions: +Python 3.10 -Python 3.9

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-03 Thread David Beazley


David Beazley  added the comment:

About nine years ago, I stood in front of a room of Python developers, 
including many core developers, and gave a talk about the problem described in 
this issue.  It included some live demos and discussion of a possible fix. 

https://www.youtube.com/watch?v=fwzPF2JLoeU

Based on subsequent interest, I think it's safe to say that this issue will 
never be fixed.  Probably best to close this issue.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-02 Thread Larry Hastings


Larry Hastings  added the comment:

FWIW: I think David's cited behavior proves that the GIL is de facto a 
scheduler.  And, in case you missed it, scheduling is a hard problem, and not a 
solved problem.  There are increasingly complicated schedulers with new 
approaches and heuristics.  They're getting better and better... as well as 
more and more complex.  BFS is an example of that trend from ten years ago.  
But the Linux kernel has been shy about merging it, not sure why (technical 
deficiency? licensing problem? personality conflict? the name?).

I think Python's current thread scheduling approach is almost certainly too 
simple.  My suspicion is that we should have a much more elaborate 
scheduler--which hopefully would fix most (but not all!) of these sorts of 
pathological performance regressions.  But that's going to be a big patch, and 
it's going to need a champion, and that champion would have to be more educated 
about it than I am, so I don't think it's gonna be me.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-02 Thread Dong-hee Na


Change by Dong-hee Na :


--
nosy: +corona10

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2020-10-02 Thread Joshua Bronson


Change by Joshua Bronson :


--
nosy: +jab

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-09-03 Thread Dirkjan Ochtman


Change by Dirkjan Ochtman :


--
nosy:  -djc

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-07-10 Thread Gregory P. Smith


Gregory P. Smith  added the comment:

I suggest:
 (1) turning one of the patches (probably the last BFS one?) into a PR against 
the github master branch (3.9) and,
 (2) if none of the existing pyperformance workloads already demonstrates the 
problems with the existing GIL implementation, adopt one of the benchmarks here 
into an additional pyperformance workload.
 (3) demonstrate pyperformance results (overall and targeted tests) before and 
after the change.

Simultaneously, it'd also be interesting to see someone create an alternate PR 
using a PyPy inspired GIL implementation as that could prove to be a lot easier 
to maintain.

Let's make a data-driven decision here.

People lost interest in actually landing a fix to this issue in the past 
because it wasn't impacting their daily lives or applications (or if it was, 
they already adopted a workaround).  Someone being interested enough to do the 
work to justify it going in is all it should take to move forward.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-07-10 Thread Gregory P. Smith


Gregory P. Smith  added the comment:

(unassigning as it doesn't make sense to assign to anyone unless they're 
actually working on it)

--
assignee: pitrou -> 

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-07-10 Thread Gregory P. Smith


Change by Gregory P. Smith :


--
priority: low -> normal
versions: +Python 3.9 -Python 3.3

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-07-10 Thread Omer Katz


Omer Katz  added the comment:

FYI I can verify that the original benchmark is still valid on Python 3.7.3.
I'm running the client on an 8 core CPU.
The result is 30.702 seconds (341534.322 bytes/sec).

I'll need somebody to decide how we're going to fix this problem.
I can do the legwork.
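
The original client/server benchmark is not attached here, but a minimal
stand-in along these lines (an invented sketch, not the issue's iotest.py)
shows the effect being discussed: many small blocking socket operations slow
down sharply once a pure-Python CPU-bound thread competes for the GIL.

import socket
import threading
import time

MSG = b"x" * 16
ROUNDS = 500

def cpu_hog(stop_event):
    # Pure-Python CPU-bound loop competing for the GIL
    x = 0
    while not stop_event.is_set():
        x += 1

def echo_server(sock, total_bytes):
    # Echo everything back until the expected number of bytes has been seen
    while total_bytes > 0:
        data = sock.recv(65536)
        sock.sendall(data)
        total_bytes -= len(data)

def timed_ping_pong():
    a, b = socket.socketpair()
    server = threading.Thread(target=echo_server, args=(b, ROUNDS * len(MSG)))
    server.start()
    start = time.perf_counter()
    for _ in range(ROUNDS):
        a.sendall(MSG)
        got = 0
        while got < len(MSG):               # read the full echo back
            got += len(a.recv(len(MSG) - got))
    elapsed = time.perf_counter() - start
    server.join()
    a.close()
    b.close()
    return elapsed

if __name__ == "__main__":
    print(f"I/O only  : {timed_ping_pong():.3f}s")
    stop = threading.Event()
    hog = threading.Thread(target=cpu_hog, args=(stop,))
    hog.start()
    print(f"I/O + CPU : {timed_ping_pong():.3f}s")   # typically far slower
    stop.set()
    hog.join()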

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-06-25 Thread Armin Rigo


Armin Rigo  added the comment:

Note that PyPy has implemented a GIL which does not suffer from this problem, 
possibly using a simpler approach than the patches here do.  The idea is 
described and implemented here:

https://bitbucket.org/pypy/pypy/src/default/rpython/translator/c/src/thread_gil.c

--
nosy: +arigo

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2019-06-24 Thread Omer Katz


Omer Katz  added the comment:

Celery 5 is going async, and in order to isolate the main event loop from task 
execution, the tasks are going to be executed in a different thread with its 
own event loop.

This thread may or may not be CPU bound.
The main thread is I/O bound.

This patch should help a lot.

I like Nir's approach a lot (although I haven't looked into the patch itself 
yet). It's pretty novel.
David's patch is also very interesting.

I'm willing to help.

--
nosy: +Omer.Katz

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2014-09-03 Thread Stefan Behnel

Changes by Stefan Behnel sco...@users.sourceforge.net:


--
nosy: +scoder

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2014-07-15 Thread Dima Tisnek

Dima Tisnek added the comment:

What happened to this bug and patch?

--
nosy: +Dima.Tisnek

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2014-07-15 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Not much :) The patch is complex and the issue hasn't proved to be 
significant in production code.
Do you have a (real-world) workload where this shows up?

Le 15/07/2014 09:52, Dima Tisnek a écrit :

 Dima Tisnek added the comment:

 What happened to this bug and patch?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2012-03-26 Thread STINNER Victor

STINNER Victor victor.stin...@gmail.com added the comment:

 gettimeofday returns you wall clock time: if a process
 that modifies time is running, e.g. ntpd, you'll likely
 to run into trouble. the value returned is _not_ monotonic,
 ...

The issue #12822 asks to use monotonic clocks when available.
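
A small illustration of why that matters (a sketch, not code from that issue):
time.time() follows the wall clock and can jump when ntpd or an administrator
adjusts it, while time.monotonic(), which Python later gained in 3.3, cannot,
so timeouts and deadlines computed from it stay meaningful.

import time

def wait_until_deadline(seconds):
    # A deadline based on the monotonic clock is immune to wall-clock jumps
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        time.sleep(0.001)

start = time.monotonic()
wait_until_deadline(0.05)
print(f"waited ~{time.monotonic() - start:.3f}s")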

--
nosy: +haypo

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2011-06-08 Thread Julian Mehnle

Changes by Julian Mehnle jul...@mehnle.net:


--
nosy: +jmehnle

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2011-01-03 Thread Antoine Pitrou

Changes by Antoine Pitrou pit...@free.fr:


--
priority: high -> low
versions:  -Python 3.2

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-07-13 Thread P. Henrique Silva

Changes by P. Henrique Silva ph.si...@gmail.com:


--
nosy: +phsilva

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-07-10 Thread Hans Lellelid

Changes by Hans Lellelid h...@velum.net:


--
nosy: +hozn

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-30 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file17356/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-30 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Updated bfs.patch with BSD license and copyright notice.

! Current version patches cleanly and builds with Python revision svn r81201.

Issue 7946 and proposed patches were put on hold indefinitely following this 
python-dev discussion: 
http://mail.python.org/pipermail/python-dev/2010-May/100115.html

I would like to thank the Python developer community and in particular David 
and Antoine for a most interesting ride.

Any party interested in sponsoring further development or porting patch to 
Python 2.x is welcome to contact me directly at n...@winpdb.org

Nir

--
Added file: http://bugs.python.org/file17504/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-30 Thread Gregory P. Smith

Gregory P. Smith g...@krypto.org added the comment:

Thanks for all your work Nir!  I personally think the BFS approach is the best 
we've seen yet for this problem!

Having read the thread you linked to in full (ignoring the tangents, 
bikeshedding and mudslinging that went on there), it sounds like the general 
consensus is that we should take thread scheduling changes slowly and let the 
existing new implementation bake in the 3.2 release.  That puts this issue as a 
possibility for 3.3 if users demonstrate real-world application problems in 3.2.

(Personally I'd say it is already obvious that there are problems and we should 
go ahead with your BFS-based approach, but realistically we're still better 
off in 3.2 than we were in 3.1 and 2.x as is.)

--
versions: +Python 3.3

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-18 Thread Peter Portante

Changes by Peter Portante peter.a.porta...@gmail.com:


--
nosy: +portante

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-18 Thread Michele

Michele vo.s...@gmail.com added the comment:

Attached ccbench-osx.log made today on OSX on latest svn checkout. Hope it helps

--
nosy: +Michele
Added file: http://bugs.python.org/file17393/ccbench-osx.log

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-17 Thread Victor Godoy Poluceno

Changes by Victor Godoy Poluceno victorpoluc...@gmail.com:


--
nosy: +victorpoluceno

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-16 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Added file: http://bugs.python.org/file17370/nir-ccbench-linux.log

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-16 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

A link to ccbench results comparing old GIL, old GIL with long check interval, 
new GIL and BFS:
http://bugs.python.org/file17370/nir-ccbench-linux.log

Summary:

Results for ccbench latency and bandwidth test run on Ubuntu Karmic 64bit, 
q9400 2.6GHz, all Python versions built with computed gotos optimization.

Old GIL:
High level of context switching and reduced performance.
~90ms IO latency with pure Python CPU bound background threads and low IO 
bandwidth results.

Old GIL with sys.setcheckinterval(2500) as done by Zope (see the snippet after 
this summary):
Context switching level back to normal.
IO latency shoots through the roof. ~950ms (avg) is the maximum recordable 
value in this test since CPU load duration is 2sec.

New GIL:
The expected 5ms wait related IO latency and low IO bandwidth.

BFS patch:
Behaves.
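
For reference, the knobs mentioned in this summary look like this from Python
code (a sketch; the values shown are simply the defaults or settings discussed
above, not a recommendation):

import sys

if hasattr(sys, "setswitchinterval"):
    # New GIL (3.2+): the switch interval is a duration in seconds
    print("switch interval:", sys.getswitchinterval())   # 0.005 by default
    sys.setswitchinterval(0.005)
else:
    # Old GIL (2.x / 3.1): the check interval counts bytecode instructions
    sys.setcheckinterval(2500)   # the Zope-style setting referred to above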

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-16 Thread Nick Coghlan

Changes by Nick Coghlan ncogh...@gmail.com:


--
nosy: +ncoghlan

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-15 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file17330/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-15 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Updated bfs.patch to apply cleanly to the updated py3k branch. Use:
$ patch -p1 < bfs.patch

--
Added file: http://bugs.python.org/file17356/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-14 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file17195/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-14 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Duck, here comes another update to bfs.patch.

This one with some cleanups which simplify the code and improve behavior (on 
Windows XP), shutdown code, comments, and experimental use of TSC for 
timestamps, which eliminates timestamp reading overhead.

TSC (http://en.wikipedia.org/wiki/Time_Stamp_Counter) is a fast way to get high 
precision timing read. On some systems this is what gettimeofday() uses under 
the hood while on other systems it will use HPET or another source which is 
slower, typically ~1usec, but can be higher (e.g. my core 2 duo laptop 
occasionally goes into a few hours of charging 3usec per HPET gettimeofday() 
call - god knows why)

This overhead is incurred twice for every GIL release/acquire pair and can be 
eliminated with:
1) Hack the scheduler not to call gettimeofday() when no other threads are 
waiting to run, or
2) Use TSC on platforms it is available (the Linux BFS scheduler uses TSC).

I took cycle.h, pointed to by the Wikipedia article on TSC, for a spin and it 
works well on my boxes. It is BSD-licensed, (un)maintained?, and includes 
implementations for a gazillion platforms (I did not yet modify configure.in 
as it recommends).

If it breaks on your system please ping with details.
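
As a very rough way to get a feel for clock-read overhead from Python (an
illustrative sketch using modern time functions; it measures the cost of the
Python-level call, which is much larger than the raw gettimeofday()/TSC reads
discussed above, so treat the numbers only as an upper bound):

import time

def per_call_cost(clock, n=1_000_000):
    start = time.perf_counter()
    for _ in range(n):
        clock()
    return (time.perf_counter() - start) / n

for name in ("time", "monotonic", "perf_counter"):
    cost = per_call_cost(getattr(time, name))
    print(f"time.{name}: ~{cost * 1e9:.0f} ns per call")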

Some benchmarks running (Florent's) writes.py on Core 2 Quad q9400 Ubuntu 64bit:

bfs.patch - 35K context switches per second, threads balanced, runtime is 3 
times that of running IO thread alone:

~/dev/python$ ~/build/python/bfs/python writes.py
t1 1.60293507576 1
t2 1.78533816338 1
t1 2.88939499855 2
t2 3.19518113136 2
t1 4.38062310219 3
t2 4.70725703239 3
t1 6.26874804497 4
t2 6.4078810215 4
t1 7.83273100853 5
t2 7.92976212502 5
t1 9.4341750145 6
t2 9.57891893387 6
t1 11.077393055 7
t2 11.164755106 7
t2 12.8495900631 8
t1 12.8979620934 8
t1 14.577999115 9
t2 14.5791089535 9
t1 15.9246580601 10
t2 16.1618289948 10
t1 17.365830183 11
t2 17.7345991135 11
t1 18.9782481194 12
t2 19.2790091038 12
t1 20.4994370937 13
t2 20.5710251331 13
21.0179870129


dabeaz_gil.patch - sometimes runs well but sometimes goes into high level of 
context switches (250K/s) and produces output such as this:

~/dev/python$ ~/build/python/dabeaz/python writes.py 
t1 0.742760896683 1
t1 7.50052189827 2
t2 8.63794493675 1
t1 10.1924870014 3
17.9419858456

gilinter2.patch - 300K context switches per second, bg threads starved:

~/dev/python$ ~/build/python/gilinter/python writes.py 
t2 6.1153190136 1
t2 11.7834780216 2
14.5995650291

--
Added file: http://bugs.python.org/file17330/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-03 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Added file: http://bugs.python.org/file17194/nir-ccbench-xp32.log

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-03 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file16967/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-05-03 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

I updated bfs.patch with improvements on Windows XP. 

The update disables priority boosts associated with the scheduler condition on 
Windows for CPU bound threads.

Here is a link to ccbench results:
http://bugs.python.org/file17194/nir-ccbench-xp32.log

Summary:

Windows XP 32bit q9400 2.6GHz Release build (no PG optimizations).
Test runs in background, ccbench modified to run both bz2 and sha1.

bfs.patch - seems to behave.

gilinter2.patch
single core: high latency, low IO bandwidth.

dabeaz_gil.patch 
single core: low IO bandwidth.
4 cores: throughput threads starvation (balance), some latency, low IO 
bandwidth.

--
Added file: http://bugs.python.org/file17195/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-30 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Dave, 

The behavior of your patch on Windows XP/2003 (and earlier) might be related to 
the way Windows boosts thread priority when it is signaled. 

Try to increase the priority of the monitor thread and the slice size. Another 
thing to look at is how to prevent Python CPU-bound threads from starving (or 
messing up the scheduling of) threads of other processes. Maybe increasing the 
slice significantly can help with this too (50ms++?).

XP/NT/CE scheduling and thread boosting affect all patches and the current GIL 
undesirably (in different ways). Maybe it is possible to make your patch work 
nicely on these systems:
http://www.sriramkrishnan.com/blog/2006/08/tale-of-two-schedulers-win_115489794858863433.html

Vista and Windows 7 involve CPU cycle counting which results in more sensible 
scheduling:
http://technet.microsoft.com/en-us/magazine/2007.02.vistakernel.aspx

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-29 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

On Thu, Apr 29, 2010 at 2:03 AM, David Beazley wrote:

 Wow, that is a *really* intriguing performance result with radically 
 different behavior than Unix.  Do you have any ideas of what might be causing 
 it?

Instrument the code and I'll send you a trace.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-28 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

On Wed, Apr 28, 2010 at 12:41 AM, Larry Hastings wrote:

 The simple solution: give up QPC and use timeGetTime() with 
 timeBeginPeriod(1), which is totally 
 reliable but only has millisecond accuracy at best.

It is preferable to use a high precision clock and I think the code addresses 
the multi-core time skew problem (pending testing).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-28 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Dave, there seems to be some problem with your patch on Windows:

F:\dev> z:\dabeaz-wcg\PCbuild\python.exe y:\ccbench.py -b
== CPython 3.2a0.0 (py3k) ==
== x86 Windows on 'x86 Family 6 Model 23 Stepping 10, GenuineIntel' ==

--- I/O bandwidth ---

Background CPU task: Pi calculation (Python)

CPU threads=0: 8551.2 packets/s.
CPU threads=1: 26.1 ( 0 %)
CPU threads=2: 26.0 ( 0 %)
CPU threads=3: 37.2 ( 0 %)
CPU threads=4: 33.2 ( 0 %)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

@dabeaz
I'm getting random segfaults with your patch (even with the last one), pretty 
much everywhere malloc or free is called.
After skimming through the code, I think the problem is due to gil_last_holder:
in drop_gil and take_gil, you dereference gil_last_holder->cpu_bound, but it 
might very well happen that gil_last_holder points to a thread that has been 
deleted (through tstate_delete_common). Dereferencing is not risky, because 
there's a high chance that the address is still valid, but in drop_gil, you do 
this:

/* Make the thread as CPU-bound or not depending on whether it was forced off */
gil_last_holder->cpu_bound = gil_drop_request;

Here, if the thread has been deleted in the meantime, you end up writing to a 
random location on the heap, and probably corrupting malloc administration 
data, which would explain why I get segfaults sometimes later on unrelated 
malloc() or free() calls.
I looked at it really quickly though, so please forgive me if I missed 
something obvious ;-)

@nirai
I have some more remarks on your patch:
- /* Diff timestamp capping results to protect against clock differences
   * between cores. */
  _LOCAL(long double) _bfs_diff_ts(long double ts1, long double ts0) {

I'm not sure I understand. You can have problems with multiple cores when 
reading the TSC register directly, but that doesn't affect gettimeofday. 
gettimeofday should be reliable and accurate (unless the OS is broken of 
course); the only issue is that since it's wall clock time, if a process like 
ntpd is running, then you'll run into problems.
- pretty much all your variables are declared as volatile, but volatile was 
never meant as a thread-synchronization primitive. Since your variables are 
protected by mutexes, you already have all the necessary memory barriers and 
synchronization, so volatile just prevents optimization.
- you use some functions just to perform a comparison or subtraction; maybe it 
would be better to just remove those functions and perform the 
subtractions/comparisons inline (you declared the functions inline but there's 
no guarantee that the compiler will honor it).
- did you experiment with the time slice? I tried some higher values and got 
better results, without penalizing the latency. Maybe it could be interesting 
to look at it in more detail (and on various platforms).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

Changes by David Beazley d...@dabeaz.com:


Removed file: http://bugs.python.org/file17102/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

Added extra pointer check to avoid possible segfault.

--
Added file: http://bugs.python.org/file17104/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com




[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I don't see segfaults anymore, but there's still an unsafe dereference of 
gil_last_holder inside take_gil:

/* Wait on the appropriate GIL depending on thread's classification */
if (!tstate->cpu_bound) {
  /* We are I/O bound.  If the current thread is CPU-bound, force it off now! */
  if (gil_last_holder->cpu_bound) {
    SET_GIL_DROP_REQUEST();
  }

You're still accessing a location that may have been free()'d previously: while 
it will work most of the time (that's why I said it's not as risky), if the 
page gets unmapped between the time the current thread is deleted and the next 
thread takes over, you'll get a segfault. And that's undefined behaviour anyway 
;-)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com




[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Antoine Pitrou

Changes by Antoine Pitrou pit...@free.fr:


--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Antoine Pitrou

Changes by Antoine Pitrou pit...@free.fr:


--
components: +Interpreter Core -None

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Guilherme Salgado

Changes by Guilherme Salgado gsalg...@gmail.com:


--
nosy:  -salgado

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

That second access of gil_last_holder->cpu_bound is safe because that block of 
code is never entered unless some other thread currently holds the GIL.  If a 
thread holds the GIL, then gil_last_holder is guaranteed to have a valid value.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Didn't have much sleep last night, so please forgive me if I say something 
stupid, but:

Python/pystate.c:

void
PyThreadState_DeleteCurrent()
{
    PyThreadState *tstate = _PyThreadState_Current;
    if (tstate == NULL)
        Py_FatalError(
            "PyThreadState_DeleteCurrent: no current tstate");
    _PyThreadState_Current = NULL;
    tstate_delete_common(tstate);
    if (autoTLSkey && PyThread_get_key_value(autoTLSkey) == tstate)
        PyThread_delete_key_value(autoTLSkey);
    PyEval_ReleaseLock();
}

The current tstate is deleted and freed before releasing the GIL, so if another 
thread calls take_gil after the current thread has called tstate_delete_common 
but before it calls PyEval_ReleaseLock (which calls drop_gil and sets gil_locked 
to 0), then it will enter this section and dereference gil_last_holder.
I just checked with valgrind, and it also reports an illegal dereference at 
this precise line.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

I stand corrected.   However, I'm going to have to think of a completely 
different approach for carrying out that functionality as I don't know how the 
take_gil() function is able to determine whether gil_last_holder has been 
deleted or not.   Will think about it and post an updated patch later. 

Do you have any examples or insight you can provide about how these segfaults 
have shown up in Python code?   I'm not able to observe any such behavior on 
OS-X or Linux.  Is this happening while running the ccbench program?  Some 
other program?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

Changes by David Beazley d...@dabeaz.com:


Removed file: http://bugs.python.org/file17104/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Do you have any examples or insight you can provide about how these segfaults 
 have shown up in Python code?   I'm not able to observe any such behavior on 
 OS-X or Linux.  Is this happening while running the ccbench program?  Some 
 other program?

If you're talking about the first issue (segfaults due to writing to 
gil_last_holder->cpu_bound), it was occurring quite often during ccbench (pretty 
much anywhere malloc/free was called). I'm running a regular dual-core Linux 
box, nothing special.

For the second one, I didn't observe any segfault; I just figured this out 
reading the code and confirmed it with valgrind, but it's much less likely 
because the race window is very short and it also requires that the page is 
unmapped in between.

If someone really wanted to get segfaults, I guess a good start would be:
- get a fast machine, multi-core is a bonus
- use a kernel with full preemption
- use a lot of threads (-n option with ccbench)
- use purify or valgrind's --free-fill option so that you're sure to jump into 
no-man's land if you dereference a previously freed pointer
--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

One more attempt at fixing tricky segfaults.   Glad someone had some eagle eyes 
on this :-).

--
Added file: http://bugs.python.org/file17106/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

On Tue, Apr 27, 2010 at 12:23 PM, Charles-Francois Natali wrote:

 @nirai
 I have some more remarks on your patch:
 - /* Diff timestamp capping results to protect against clock differences
  * between cores. */
 _LOCAL(long double) _bfs_diff_ts(long double ts1, long double ts0) {

 I'm not sure I understand. You can have problem with multiple cores when 
 reading directly the 
 TSC register, but that doesn't affect gettimeofday. gettimeofday should be 
 reliable and accurate 
 (unless the OS is broken of course), the only issue is that since it's wall 
 clock time, if a process 
 like ntpd is running, then you'll run into problem

I think gettimeofday() might return different results on different cores as a 
result of kernel/hardware problems or clock drift issues in VM environments:
http://kbase.redhat.com/faq/docs/DOC-7864
https://bugzilla.redhat.com/show_bug.cgi?id=461640

In Windows the high-precision counter might return different results on 
different cores in some hardware configurations (older multi-core processors). 
I attempted to alleviate these problems by using capping and by using a python 
time counter constructed from accumulated slices, with the assumption that IO 
bound threads are unlikely to get migrated often between cores while running. I 
will add references to the patch docs.

 - did you experiment with the time slice ? I tried some higher values and got 
 better results, 
 without penalizing the latency. Maybe it could be interesting to look at it 
 in more detail (and 
 on various platforms).

Can you post more details on your findings? It is possible that by using a 
bigger slice, you helped the OS classify CPU bound threads as such and improved 
synchronization between BFS and the OS scheduler.

Notes on optimization of code taken, thanks.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Antoine Pitrou

Antoine Pitrou pit...@free.fr added the comment:

 I stand corrected.   However, I'm going to have to think of a
 completely different approach for carrying out that functionality as I
 don't know how the take_gil() function is able to determine whether
 gil_last_holder has been deleted or not.

Please note take_gil() currently doesn't depend on the validity of the
pointer. gil_last_holder is just used as an opaque value, equivalent to
a thread id.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-27 Thread Larry Hastings

Larry Hastings la...@hastings.org added the comment:

 In Windows the high-precision counter might return different results
 on different cores in some hardware configurations (older multi-core
 processors).

More specifically: some older multi-core processors where the HAL implements 
QueryPerformanceCounter using the TSC from the CPU, and the HAL doesn't keep 
the cores in sync and QPC doesn't otherwise account for it.  This is rare; 
frequently QPC is implemented using another source of time.

But it's true: QPC is not 100% reliable.  QPC can unfortunately jump backwards 
(when using TSC and you switch cores), jump forwards (when using TSC and you 
switch cores, or when using the PCI bus timer on P3-era machines with a 
specific buggy PCI south bridge controller), speed up or slow down (when using 
TSC and not accounting for changing CPU speed via SpeedStep etc.).  The simple 
solution: give up QPC and use timeGetTime() with timeBeginPeriod(1), which is 
totally reliable but only has millisecond accuracy at best.

http://www.virtualdub.org/blog/pivot/entry.php?id=106
http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323;

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread Antoine Pitrou

Antoine Pitrou pit...@free.fr added the comment:

Dave,

 In the current implementation, threads perform a timed-wait on a
 condition variable.  If time expires and no thread switches have
 occurred, the currently running thread is forced to drop the GIL.

A problem, as far as I can see, is that these timeout sleeps run
periodically, regardless of the actual times at which thread switching
takes place. I'm not sure it's really an issue but it's a bit of a
departure from the ideal behaviour of the switching interval.

 A new attribute 'cpu_bound' is added to the PyThreadState structure.
 If a thread is ever forced to drop the GIL, this attribute is simply
 set True (1).  If a thread gives up the GIL voluntarily, it is set
 back to False (0).  This attribute is used to set up simple scheduling
 (described next).

Ok, so it's not very different, at least in principle, from what
gilinter.patch does, right?
(and actually, the benchmark results look very similar)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

Greg,

I like the idea of the monitor suspending if no thread owns the GIL.  Let me 
work on that.   Good point on embedded systems.

Antoine, 

Yes, the gil monitor is completely independent and simply ticks along every 5 
ms.   A worst case scenario is that an I/O bound thread is scheduled shortly 
after the 5ms tick and then becomes CPU-bound afterwards.  In that case, the 
monitor might let it run up to about 10ms before switching it.  Hard to say if 
it's a real problem though---the normal timeslice on many systems is 10 ms so 
it doesn't seem out of line.  

As for the priority part, this patch should have similar behavior to the 
gilinter patch except for very subtle differences in thread scheduling due to 
the use of the GIL monitor.  For instance, since threads never time out on the 
condition variable anymore, they tend to cycle execution in a purely 
round-robin fashion.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

Changes by David Beazley d...@dabeaz.com:


Removed file: http://bugs.python.org/file17084/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

I've updated the GIL patch to reflect concerns about the monitor thread running 
forever.  This version has a suspension mechanism where the monitor goes to 
sleep if nothing is going on for a while.  It gets resumed if threads try to 
acquire the GIL but time out for some reason.
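
To make the suspension idea concrete, here is a rough Python-level sketch of a 
monitor that parks itself while the interpreter is idle instead of polling 
forever.  It is not the actual C patch; monitor_cond, gil_holders and check() 
are invented names for illustration only.

import threading

monitor_cond = threading.Condition()
gil_holders = 0        # > 0 while some thread owns the (simulated) GIL

def monitor_loop(check):
    # Park while nobody owns the GIL; the acquire path is assumed to bump
    # gil_holders and notify monitor_cond when the GIL becomes busy again.
    while True:
        with monitor_cond:
            while gil_holders == 0:
                monitor_cond.wait()
        check()        # the periodic switch-count check would go here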

--
Added file: http://bugs.python.org/file17094/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

I've also attached a new file schedtest.py that illustrates a subtle difference 
between having the GIL monitor thread and not having the monitor.

Without the monitor, every thread is responsible for its own scheduling.  If 
you have a lot of threads running, you may have a lot of threads all performing 
a timed wait and then waking up only to find that the GIL is locked and that 
they have to go back to waiting.  One side effect is that certain threads have 
a tendency to starve.

For example, if you run the schedtest.py with the original GIL, you get a trace 
where three CPU-bound threads run like this:

Thread-3 16632
Thread-2 16517
Thread-1 31669
Thread-2 16610
Thread-1 16256
Thread-2 16445
Thread-1 16643
Thread-2 16331
Thread-1 16494
Thread-3 16399
Thread-1 17090
Thread-1 20860
Thread-3 16306
Thread-1 19684
Thread-3 16258
Thread-1 16669
Thread-3 16515
Thread-1 16381
Thread-3 16600
Thread-1 16477
Thread-3 16507
Thread-1 16740
Thread-3 16626
Thread-1 16564
Thread-3 15954
Thread-2 16727
...

You will observe that Threads 1 and 2 alternate, but Thread 3 starves.  Then at 
some point, Threads 1 and 3 alternate, but Thread 2 starves. 

By having a separate GIL monitor, threads are no longer responsible for making 
scheduling decisions concerning timeouts.  Instead, the monitor is what times 
out and yanks threads off the GIL.  If you run the same test with the GIL 
monitor, you get scheduling like this:

Thread-1 33278
Thread-2 32278
Thread-3 31981
Thread-1 33760
Thread-2 32385
Thread-3 32019
Thread-1 32700
Thread-2 32085
Thread-3 32248
Thread-1 31630
Thread-2 32200
Thread-3 32054
Thread-1 32721
Thread-2 32659
Thread-3 34150

Threads nicely cycle round-robin.  There also appears to be about half as much 
thread switching (for reasons I don't quite understand).

--
Added file: http://bugs.python.org/file17095/schedtest.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Dave, there seems to be a bug in your patch on Windows XP. It crashes in 
ccbench.py with the following output:

python_d.exe y:\ccbench.py
== CPython 3.2a0.0 (py3k) ==
== x86 Windows on 'x86 Family 6 Model 23 Stepping 10, GenuineIntel' ==

--- Throughput ---

Pi calculation (Python)

threads= 1:   840 iterations/s. balance
Fatal Python error: ReleaseMutex(mon_mutex) failed
threads= 2:   704 ( 83%)0.8167
threads= 3:   840 (100%)1.6706
threads= 4:   840 (100%)2.

and the following stack trace:

ntdll.dll!7c90120e()
[Frames below may be incorrect and/or missing, no symbols loaded for 
ntdll.dll] 
python32_d.dll!Py_FatalError(const char * msg)  Line 2033   C
python32_d.dll!gil_monitor(void * arg)  Line 314 + 0x24 bytes   C
python32_d.dll!bootstrap(void * call)  Line 122 + 0x7 bytes C
msvcr100d.dll!_callthreadstartex()  Line 314 + 0xf bytesC
msvcr100d.dll!_threadstartex(void * ptd)  Line 297  C
kernel32.dll!7c80b729()

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

Changes by David Beazley d...@dabeaz.com:


Removed file: http://bugs.python.org/file17094/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-26 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

New version of patch that will probably fix Windows-XP problems. Was doing 
something stupid in the monitor (not sure how it worked on Unix).

--
Added file: http://bugs.python.org/file17102/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-25 Thread Ray.Allen

Changes by Ray.Allen ysj@gmail.com:


--
nosy: +ysj.ray

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-25 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

The attached patch makes two simple refinements to the new GIL implemented in 
Python 3.2.   Each is briefly described below.

1. Changed mechanism for thread time expiration

In the current implementation, threads perform a timed-wait on a condition 
variable.  If time expires and no thread switches have occurred, the currently 
running thread is forced to drop the GIL.

In the patch, timeouts are now performed by a special GIL monitor thread.  
This thread runs independently of Python and simply handles time expiration.  
Basically, it records the number of thread switches, sleeps for a specified 
interval (5ms), and then looks at the number of thread switches again.  If no 
switches occurred, it forces the currently running thread to drop the GIL.

With this monitor thread, it is no longer necessary to perform any timed 
condition variable waits.  This approach has a few subtle benefits.  First, 
threads no longer sit in a wait/timeout cycle when trying to get the GIL (so, 
there is less overhead).   Second, you get FIFO scheduling of threads.  When 
time expires, the thread that has been waiting the longest on the condition 
variable runs next.  Generally, you want this.
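
To illustrate the mechanism, here is a toy Python simulation of a GIL monitor 
and a CPU-bound worker.  It is only a sketch of the idea described above, not 
the C implementation in the patch: the names (INTERVAL, drop_request, switches, 
cpu_worker) are invented, a threading.Lock stands in for the GIL, and Python 
locks make no FIFO guarantee the way the patch's condition-variable queue does.

import threading
import time

INTERVAL = 0.005                 # 5 ms monitor period, as described above

gil = threading.Lock()           # stand-in for the GIL
switches = 0                     # bumped whenever the holder yields
drop_request = False             # monitor -> running thread signal

def gil_monitor():
    # Time expiration lives in one place instead of in every waiting thread.
    global drop_request
    last = -1
    while True:
        time.sleep(INTERVAL)
        if switches == last:     # no switch happened during the interval
            drop_request = True  # ask the current holder to yield
        last = switches

def cpu_worker(n):
    global switches, drop_request
    gil.acquire()
    for _ in range(n):
        # "interpret one instruction", then poll the drop request
        if drop_request:
            drop_request = False
            switches += 1
            gil.release()        # give other threads a chance to grab the lock
            gil.acquire()
    gil.release()

threading.Thread(target=gil_monitor, daemon=True).start()
workers = [threading.Thread(target=cpu_worker, args=(100000,)) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()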

2. A very simple two-level priority mechanism

A new attribute 'cpu_bound' is added to the PyThreadState structure.  If a 
thread is ever forced to drop the GIL, this attribute is simply set True (1).  
If a thread gives up the GIL voluntarily, it is set back to False (0).  This 
attribute is used to set up simple scheduling (described next).

There are now two separate condition variables (gil_cpu_cond) and (gil_io_cond) 
that separate waiting threads according to their cpu_bound attribute setting.  
CPU-bound threads wait on gil_cpu_cond whereas I/O-bound threads wait on 
gil_io_cond. 

Using the two condition variables, the following scheduling rules are enforced:

   - If there are any waiting I/O bound threads, they are always signaled 
first, before any CPU-bound threads.
   - If an I/O bound thread wants the GIL, but a CPU-bound thread is running, 
the CPU-bound thread is immediately forced to drop the GIL.
   - If a CPU-bound thread wants the GIL, but another CPU-bound thread is 
running, the running thread is immediately forced to drop the GIL if its time 
period has already expired.
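
To make these rules concrete, here is a toy Python model of the scheme with two 
condition variables.  It is a sketch only, not the patch's C code: TState and 
TwoLevelGIL are invented names, the running thread is assumed to poll 
drop_request in its eval loop (as in the patch), and the third rule (forcing a 
drop once the time period has expired) is left to the monitor thread and is 
therefore omitted here.

import threading

class TState:
    # Minimal stand-in for PyThreadState with the new attribute.
    def __init__(self):
        self.cpu_bound = False

class TwoLevelGIL:
    def __init__(self):
        self._mutex = threading.Lock()
        self._cpu_cond = threading.Condition(self._mutex)  # gil_cpu_cond
        self._io_cond = threading.Condition(self._mutex)   # gil_io_cond
        self._locked = False
        self._holder = None
        self._io_waiters = 0
        self.drop_request = False   # polled by the running thread

    def acquire(self, tstate):
        with self._mutex:
            if (self._locked and not tstate.cpu_bound
                    and self._holder is not None and self._holder.cpu_bound):
                # An I/O-bound thread asks a CPU-bound owner to drop the GIL.
                self.drop_request = True
            cond = self._cpu_cond if tstate.cpu_bound else self._io_cond
            if not tstate.cpu_bound:
                self._io_waiters += 1
            while self._locked:
                cond.wait()
            if not tstate.cpu_bound:
                self._io_waiters -= 1
            self._locked = True
            self._holder = tstate
            self.drop_request = False

    def release(self, tstate, forced=False):
        with self._mutex:
            # Forced off the GIL -> remember the thread as CPU-bound;
            # a voluntary release (e.g. before blocking I/O) clears the mark.
            tstate.cpu_bound = forced
            self._locked = False
            self._holder = None
            # Waiting I/O-bound threads are always signaled first.
            if self._io_waiters:
                self._io_cond.notify()
            else:
                self._cpu_cond.notify()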

Results
---
This patch gives excellent results for both the ccbench test and all of my 
previous I/O bound tests.  Here is the output:

== CPython 3.2a0.0 (py3k:80470:80497M) ==
== i386 Darwin on 'i386' ==

--- Throughput ---

Pi calculation (Python)

threads=1: 871 iterations/s.
threads=2: 844 ( 96 %)
threads=3: 838 ( 96 %)
threads=4: 826 ( 94 %)

regular expression (C)

threads=1: 367 iterations/s.
threads=2: 345 ( 94 %)
threads=3: 339 ( 92 %)
threads=4: 327 ( 89 %)

bz2 compression (C)

threads=1: 384 iterations/s.
threads=2: 728 ( 189 %)
threads=3: 695 ( 180 %)
threads=4: 707 ( 184 %)

--- Latency ---

Background CPU task: Pi calculation (Python)

CPU threads=0: 0 ms. (std dev: 0 ms.)
CPU threads=1: 0 ms. (std dev: 0 ms.)
CPU threads=2: 1 ms. (std dev: 2 ms.)
CPU threads=3: 0 ms. (std dev: 1 ms.)
CPU threads=4: 0 ms. (std dev: 1 ms.)

Background CPU task: regular expression (C)

CPU threads=0: 0 ms. (std dev: 0 ms.)
CPU threads=1: 2 ms. (std dev: 1 ms.)
CPU threads=2: 1 ms. (std dev: 1 ms.)
CPU threads=3: 1 ms. (std dev: 1 ms.)
CPU threads=4: 2 ms. (std dev: 1 ms.)

Background CPU task: bz2 compression (C)

CPU threads=0: 0 ms. (std dev: 0 ms.)
CPU threads=1: 0 ms. (std dev: 2 ms.)
CPU threads=2: 2 ms. (std dev: 3 ms.)
CPU threads=3: 0 ms. (std dev: 1 ms.)
CPU threads=4: 0 ms. (std dev: 1 ms.)

--- I/O bandwidth ---

Background CPU task: Pi calculation (Python)

CPU threads=0: 5850.9 packets/s.
CPU threads=1: 5246.8 ( 89 %)
CPU threads=2: 4228.9 ( 72 %)
CPU threads=3: 4222.8 ( 72 %)
CPU threads=4: 2959.5 ( 50 %)

Particular attention should be given to tests involving I/O performance.  In 
particular, here are the results of the I/O bandwidth test using the unmodified 
GIL:

--- I/O bandwidth ---

Background CPU task: Pi calculation (Python)

CPU threads=0: 6007.1 packets/s.
CPU threads=1: 189.0 ( 3 %)
CPU threads=2: 19.7 ( 0 %)
CPU threads=3: 19.7 ( 0 %)
CPU threads=4: 5.1 ( 0 %)

Other Benefits
--
This patch does not involve any complicated libraries, platform specific 
functionality, low-level lock twiddling, or mathematically complex priority 
scheduling algorithms.  Emphasize: The code is simple.

Negative Aspects

This modification might introduce a starvation effect where CPU-bound threads 
never get to run if there is an extremely heavy load of I/O-bound threads 
competing for the GIL.

Comparison to BFS
-
Still need to test. Would be curious.

--
Added file: http://bugs.python.org/file17084/dabeaz_gil.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946

[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-25 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

One comment on that patch I just submitted. Basically, it's an attempt to make 
an extremely simple tweak to the GIL that fixes most of the problems discussed 
here in an extremely simple manner.  I don't have any special religious 
attachment to it though.  Would love to see a BFS comparison.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-25 Thread David Beazley

David Beazley d...@dabeaz.com added the comment:

Here is the result of running the writes.py test with the patch I submitted.   
This is on OS-X.

bash-3.2$ ./python.exe writes.py
t1 2.83990693092 0
t2 3.27937912941 0
t1 5.54346394539 1
t2 6.68237304688 1
t1 8.9648039341 2
t2 9.60041999817 2
t1 12.1856160164 3
t2 12.5866689682 3
t1 15.3869640827 4
t2 15.7042851448 4
t1 18.4115200043 5
t2 18.5771169662 5
t2 21.4922711849 6
t1 21.6835460663 6
t2 24.6117911339 7
t1 24.9126679897 7
t1 27.1683580875 8
t2 28.2728791237 8
t1 29.4513950348 9
t1 32.2438161373 10
t2 32.5283250809 9
t1 34.8905010223 11
t2 36.0952250957 10
t1 38.109760046 12
t2 39.3465380669 11
t1 41.5758800507 13
t2 42.587772131 12
t1 45.1536290646 14
t2 45.8339021206 13
t1 48.6495029926 15
t2 49.1581180096 14
t1 51.5414950848 16
t2 52.6768190861 15
t1 54.818582058 17
t2 56.1163961887 16
t1 58.1549630165 18
t2 59.6944830418 17
t1 61.4515309334 19
t2 62.7685520649 18
t1 64.3223180771 20
t2 65.8158640862 19
65.8578810692

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-25 Thread Gregory P. Smith

Gregory P. Smith g...@krypto.org added the comment:

Nice dabeaz.

One potential concern with dabeaz_gil.patch 2010-04-25 21:13 is that it 
appears to always leave the gil_monitor thread running.  This is bad on 
mobile/embedded platforms where waking up at regular intervals prevents 
advanced sleep states and wastes power/battery.  (practical example: the OLPC 
project has run into this issue in other code in the past)

Could this be modified so that gil_monitor stops looping (blocks) so long as 
there are only IO bound Python threads running or while no python thread owns 
the GIL?

In that situation a multithreaded python process that has either reverted to 
one thread or has all threads blocked in IO would be truly idle rather than 
leaving the gil_monitor polling.

--
nosy: +gregory.p.smith

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-20 Thread Andres Moreira

Changes by Andres Moreira elkpich...@gmail.com:


--
nosy: +andrix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-17 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

> the scheduling function bfs_find_task returns the first task that 
> has an expired deadline. since an expired deadline probably means 
> that the scheduler hasn't run for a while, it might be worth it to 
> look for the thread with the oldest deadline and serve it first, 
> instead of stopping at the first one

This is by design of BFS as I understand it. Next thread to run is either first 
expired or oldest deadline:

http://ck.kolivas.org/patches/bfs/sched-BFS.txt
Once a task is descheduled, it is put back on the queue, and an
O(n) lookup of all queued-but-not-running tasks is done to determine which has
the earliest deadline and that task is chosen to receive CPU next. The one
caveat to this is that if a deadline has already passed (jiffies is greater
than the deadline), the tasks are chosen in FIFO (first in first out) order as
the deadlines are old and their absolute value becomes decreasingly relevant
apart from being a flag that they have been asleep and deserve CPU time ahead
of all later deadlines.
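
As a toy illustration of that selection rule (not the patch's actual C code), 
the choice between FIFO-among-expired and earliest-deadline can be written like 
this; the (enqueue_order, deadline) tuple layout is invented for the example.

import time

def pick_next(queued, now=None):
    # queued: list of (enqueue_order, deadline) for queued-but-not-running
    # pseudo-threads; returns the one to run next, or None if the list is empty.
    if not queued:
        return None
    now = time.monotonic() if now is None else now
    expired = [t for t in queued if t[1] <= now]
    if expired:
        # Deadlines already passed: serve them FIFO (oldest enqueue first).
        return min(expired, key=lambda t: t[0])
    # Otherwise an O(n) scan picks the earliest deadline.
    return min(queued, key=lambda t: t[1])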

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-17 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file16947/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-17 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Yet another update to bfs.patch.

I upload a variation on Florent's write test which prints progress of 
background CPU bound threads as: thread-name timestamp progress

Here are some numbers from Windows XP 32bit with Intel q9400 (4 cores). Builds 
produced with VS express (alas - no optimizations, so official builds may 
behave differently).

BFS - 
33% CPU, 37,000 context switches per second

z:\bfs\PCbuild\python.exe y:\writes.py
t1 2.34400010109 0
t2 2.4213134 0
t1 4.6713134 1
t2 4.7963134 1
t1 7.0163242 2
t2 7.2036866 2
t1 9.375 3
t2 9.625 3
t1 11.703962 4
t2 12.0309998989 4
t1 14.046313 5
t2 14.421313 5
t1 16.407648 6
t2 16.7809998989 6
t1 18.782648 7
t2 19.125 7
t1 21.157648 8
t2 21.483676 8
t1 23.5 9
t2 23.858676 9
t1 25.858951 10
t2 26.233676 10
t1 28.2349998951 11
28.2189998627

gilinter - starves both bg threads, with a high rate of context switches.
45% CPU, 203,000 context switches per second

z:\gilinter\PCbuild\python.exe y:\writes.py
t1 13.0939998627 0
t1 26.421313 1
t1 39.812638 2
t1 53.1559998989 3
57.5470001698

PyCON - starves one bg thread and slows down the IO thread 
Py32 - starves IO thread as expected.

Note, PyCON, gilinter and py32 starve the bg thread with Dave's original 
buffered write test as well - http://bugs.python.org/issue7946#msg101116

--
Added file: http://bugs.python.org/file16968/writes.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-16 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file16830/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-16 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

I uploaded an update to bfs.patch which improves behavior in particular on 
non-Linux multi-core (4+) machines. 

Hi Charles-Francois, Thanks for taking the time to review this patch!

> - nothing guarantees that you'll get a msec resolution

Right, the code should behave well with low precision clocks as long as short 
(sub-tick) tasks are not synchronized with the slice interval. There is a 
related discussion of this problem in schedulers in the section on sub-tick 
accounting in: http://ck.kolivas.org/patches/bfs/sched-BFS.txt

On which target systems can we expect not to have a high-precision clock?

> - gettimeofday returns you wall clock time: if a process that modifies time 
> is running, e.g. ntpd, you're likely to run into trouble. The value returned 
> is _not_ monotonic, but clock_gettime(CLOCK_MONOTONIC) is
> - inline functions are used, but it's not ANSI
> - static inline long double get_timestamp(void) {
>       struct timeval tv;
>       GETTIMEOFDAY(&tv);
>       return (long double) tv.tv_sec + tv.tv_usec * 0.000001;
>   }

I added timestamp capping to the code. timestamp is used for waiting and 
therefore I think the source should be either CLOCK_REALTIME or gettimeofday().

>> `tstate->tick_counter % 1000` is replicating the behaviour of the old GIL, 
>> which based its speculative operation on the number of elapsed opcodes (and 
>> which also gave bad latency numbers on the regex workload).

> I find this suspicious too. I haven't looked at the patch in detail, but what 
> does the number of elapsed opcodes offer you over the timeslice expiration 
> approach?

More accurate yielding. It is possible a better mechanism can be thought of 
and/or maybe it is indeed redundant.

> It is thus recommended that a condition wait be enclosed in the equivalent of 
> a while loop that checks the predicate.

Done.

--
Added file: http://bugs.python.org/file16947/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-15 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Please disregard my remark on COND_TIMED_WAIT not updating timeout_result, it's 
wrong (it's really a macro, not a function...)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-14 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Some more remarks:
- COND_TIMED_WAIT macro modifies timeout_result when pthread_cond_timedwait 
expires. But timeout_result is not an int pointer, just an int. So it is never 
updated, and as a result, bfs_check_depleted is never set after a thread has 
waited for the current running thread to schedule it in vain (in 
_bfs_timed_wait).
- the scheduling function bfs_find_task returns the first task that has an 
expired deadline. since an expired deadline probably means that the scheduler 
hasn't run for a while, it might be worth it to look for the thread with the 
oldest deadline and serve it first, instead of stopping at the first one
- calls to COND_WAIT/COND_TIMED_WAIT should be run in loops checking for the 
predicate, since it might be false even after these calls return (spurious 
wakeups, etc):
In general, whenever a condition wait returns, the thread has to re-evaluate 
the predicate associated with the condition wait to determine whether it can 
safely proceed, should wait again, or should declare a timeout. A return from 
the wait does not imply that the associated predicate is either true or false. 

It is thus recommended that a condition wait be enclosed in the equivalent of a 
while loop that checks the predicate.
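
A minimal sketch of that while-loop discipline, written with Python's 
threading.Condition rather than the pthread API used in the patch; ready, 
wait_until_ready and set_ready are invented names for the example.

import threading
import time

cond = threading.Condition()
ready = False                      # the predicate protected by cond

def wait_until_ready(timeout):
    # Re-check the predicate every time the wait returns: a return does not
    # by itself mean the predicate is true (spurious or unrelated wakeup).
    deadline = time.monotonic() + timeout
    with cond:
        while not ready:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False       # timed out with the predicate still false
            cond.wait(remaining)
        return True

def set_ready():
    global ready
    with cond:
        ready = True
        cond.notify_all()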

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-11 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

A couple remarks on BFS-based patch:
- nothing guarantees that you'll get a msec resolution
- gettimeofday returns you wall clock time: if a process that modifies time is 
running, e.g. ntpd, you're likely to run into trouble. The value returned is 
_not_ monotonic, but clock_gettime(CLOCK_MONOTONIC) is
- inline functions are used, but it's not ANSI
- static inline long double get_timestamp(void) {
      struct timeval tv;
      GETTIMEOFDAY(&tv);
      return (long double) tv.tv_sec + tv.tv_usec * 0.000001;
  }
the product is computed as double, and then promoted to (long double).
- the code uses a lot of floating point calculation, which is slower than 
integer
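
The monotonic-clock point can be illustrated at the Python level (the patch 
itself would use clock_gettime(CLOCK_MONOTONIC) in C); SLICE and the deadline 
bookkeeping below are invented for the example.

import time

SLICE = 0.005                      # assumed scheduling slice, in seconds

# Wall-clock time (time.time / gettimeofday) can jump when ntpd or an
# administrator adjusts the clock, so a deadline derived from it may expire
# far too early or too late.  A monotonic clock only moves forward.
deadline = time.monotonic() + SLICE

def slice_depleted():
    return time.monotonic() >= deadline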

Otherwise:
- You know, I almost wonder whether this whole issue could be fixed by just 
adding a user-callable function to optionally set a thread priority number.  
For example:

sys.setpriority(n)

Modify the new GIL code so that it checks the priority of the currently running 
thread against the priority of the thread that wants the GIL.  If the running 
thread has lower priority, it immediately drops the GIL.

The problem with this type of fixed-priority is starvation. And it shouldn't be 
up to the user to set the priorities. And some threads can mix I/O and CPU 
intensive tasks.

> It's a dual-core Linux x86-64 system. But, looking at the patch again, the 
> reason is obvious:
>
> #define CHECK_SLICE_DEPLETION(tstate) (bfs_check_depleted || (tstate->tick_counter % 1000 == 0))
>
> `tstate->tick_counter % 1000` is replicating the behaviour of the old GIL, 
> which based its speculative operation on the number of elapsed opcodes (and 
> which also gave bad latency numbers on the regex workload).

I find this suspicious too. I haven't looked at the patch in detail, but what 
does the number of elapsed opcodes offer you over the timeslice expiration 
approach?

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-08 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file16710/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-08 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

Uploaded an update.

--
Added file: http://bugs.python.org/file16830/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-04-05 Thread Thouis (Ray) Jones

Changes by Thouis (Ray) Jones tho...@gmail.com:


--
nosy: +thouis

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-03-31 Thread Nir Aides

Changes by Nir Aides n...@winpdb.org:


Removed file: http://bugs.python.org/file16680/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-03-31 Thread Nir Aides

Nir Aides n...@winpdb.org added the comment:

I upload a new update to bfs.patch which improves scheduling and reduces 
overhead.

--
Added file: http://bugs.python.org/file16710/bfs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7946] Convoy effect with I/O bound threads and New GIL

2010-03-28 Thread Ram Rachum

Changes by Ram Rachum cool...@cool-rr.com:


--
nosy: +cool-RR

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7946
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


