Aaron Denney wrote:
On 2008-09-17, Jonathan Cast [EMAIL PROTECTED] wrote:
In my mind pooling vs new-creation is only relevant to process vs
thread in the performance aspects.
Say what? This discussion is entirely about performance --- does
CPython actually have the ability to scale concurrent programs to
multiple processors?
Brandon S. Allbery KF8NH wrote:
On Sep 18, 2008, at 15:10 , Manlio Perillo wrote:
Allocation areas are per-CPU, not per-thread. A Concurrent Haskell
thread consists of a TSO (thread state object, currently 11 machine
words), and a stack, which we currently start with 1KB and grow on
demand.
On 2008 Sep 19, at 17:14, Manlio Perillo wrote:
Brandon S. Allbery KF8NH wrote:
There are two ways to handle a growable stack; both start with
allocating each stack in a separate part of the address space with
room to grow it downward. The simpler way uses stack probes on function
Jonathan Cast wrote:
On Wed, 2008-09-17 at 13:44 -0700, Evan Laforge wrote:
systems that don't use an existing user-space thread library (such as
Concurrent Haskell or libthread [1]) emulate user-space threads by
keeping a pool of processes and re-using them (e.g., IIUC Apache does
this).
On Thu, 2008-09-18 at 10:33 +0100, Simon Marlow wrote:
Jonathan Cast wrote:
An OS thread (Linux/Plan 9) stores:
* Stack (definitely a stack pointer and stored registers, ~40 bytes on
i686; includes a special set of page tables on Plan 9)
* FD set (even if it's the same as the
Simon Marlow wrote:
Jonathan Cast wrote:
On Wed, 2008-09-17 at 13:44 -0700, Evan Laforge wrote:
systems that don't use an existing user-space thread library (such as
Concurrent Haskell or libthread [1]) emulate user-space threads by
keeping a pool of processes and re-using them (e.g.,
On Sep 18, 2008, at 15:10 , Manlio Perillo wrote:
Allocation areas are per-CPU, not per-thread. A Concurrent Haskell
thread consists of a TSO (thread state object, currently 11 machine
words), and a stack, which we currently start with 1KB and grow on
demand.
How is this implemented?
On 2008-09-17, Arnar Birgisson [EMAIL PROTECTED] wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio Perillo [EMAIL PROTECTED] wrote:
http://www.heise-online.co.uk/open/Shuttleworth-Python-needs-to-focus-on-future--/news/111534
cloud computing, transactional memory and future
On Wed, 2008-09-17 at 18:40 +0000, Aaron Denney wrote:
On 2008-09-17, Arnar Birgisson [EMAIL PROTECTED] wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio Perillo [EMAIL PROTECTED] wrote:
On 2008-09-17, Jonathan Cast [EMAIL PROTECTED] wrote:
On Wed, 2008-09-17 at 18:40 +0000, Aaron Denney wrote:
On 2008-09-17, Arnar Birgisson [EMAIL PROTECTED] wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio Perillo [EMAIL PROTECTED] wrote:
systems that don't use an existing user-space thread library (such as
Concurrent Haskell or libthread [1]) emulate user-space threads by
keeping a pool of processes and re-using them (e.g., IIUC Apache does
this).
Your response seems to be yet another argument that processes are too
On Wed, 2008-09-17 at 20:29 +0000, Aaron Denney wrote:
On 2008-09-17, Jonathan Cast [EMAIL PROTECTED] wrote:
On Wed, 2008-09-17 at 18:40 +0000, Aaron Denney wrote:
On 2008-09-17, Arnar Birgisson [EMAIL PROTECTED] wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio
On 2008 Sep 17, at 16:44, Evan Laforge wrote:
The fast context switching part seems orthogonal to me. Why is it
that getting the OS involved for context switches kills the
performance? Is it that the ghc RTS can switch faster because it
knows more about the code it's running (i.e. the OS
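(The question above is why a context switch through the OS is so much costlier than one inside the GHC RTS. As a hedged illustration, not code from the thread: a minimal ping-pong between two lightweight threads using MVars, where every handoff is a context switch handled entirely in user space by the GHC scheduler, with no syscall on the hot path.)

```haskell
-- Sketch: two lightweight threads handing a counter back and forth
-- through MVars. Each handoff is a user-space context switch done by
-- the GHC scheduler, not by the OS.
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forever)

main :: IO ()
main = do
  ping <- newEmptyMVar
  pong <- newEmptyMVar
  -- echo thread: increment whatever arrives and send it back
  _ <- forkIO $ forever $ takeMVar ping >>= \x -> putMVar pong (x + 1)
  let go acc 0 = return acc
      go acc k = do
        putMVar ping acc
        r <- takeMVar pong
        go r (k - 1)
  r <- go (0 :: Int) (100000 :: Int)
  print r  -- prints 100000: one increment per round trip
```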
On Wed, 2008-09-17 at 13:44 -0700, Evan Laforge wrote:
systems that don't use an existing user-space thread library (such as
Concurrent Haskell or libthread [1]) emulate user-space threads by
keeping a pool of processes and re-using them (e.g., IIUC Apache does
this).
Your response
On 2008-09-17, Jonathan Cast [EMAIL PROTECTED] wrote:
In my mind pooling vs new-creation is only relevant to process vs
thread in the performance aspects.
Say what? This discussion is entirely about performance --- does
CPython actually have the ability to scale concurrent programs to
multiple processors?
Hi Aaron,
On Wed, Sep 17, 2008 at 23:20, Aaron Denney [EMAIL PROTECTED] wrote:
I entered the discussion as which model is a workaround for the other --
someone said processes were a workaround for the lack of good threading
in e.g. standard CPython. I replied that most languages thread
jonathanccast:
The fact that people use thread-pools
I don't think people use thread-pools with Concurrent Haskell, or with
libthread.
Sure. A Chan with N worker forkIO threads taking jobs from a queue is a
useful idiom I've employed on occasion.
-- Don
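(The idiom Don mentions can be sketched as follows. This is an illustration, not code from the thread; the job type and the squaring "work" are placeholders: a shared Chan of jobs, N forkIO workers, and a second Chan for results.)

```haskell
-- Sketch of the worker-pool idiom: N forkIO threads taking jobs from
-- a shared Chan and pushing answers onto a results Chan.
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forM_, forever, replicateM, replicateM_)

main :: IO ()
main = do
  jobs    <- newChan
  results <- newChan
  let nWorkers = 4
      nJobs    = 20 :: Int
  -- spawn the pool; each worker loops forever pulling jobs
  replicateM_ nWorkers $ forkIO $ forever $ do
    n <- readChan jobs
    writeChan results (n * n)   -- placeholder "work"
  forM_ [1 .. nJobs] (writeChan jobs)
  xs <- replicateM nJobs (readChan results)
  print (sum xs)  -- sum of squares 1..20: 2870
```

Because Chan blocks readers when empty, idle workers simply sleep inside the RTS; no busy-waiting or OS-level synchronization is written by hand.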
On Wed, 2008-09-17 at 23:42 +0200, Arnar Birgisson wrote:
On Wed, Sep 17, 2008 at 23:20, Aaron Denney [EMAIL PROTECTED] wrote:
The central aspect in my mind is a default share-everything, or
default share-nothing.
[..snip...]
These are, in fact, process models. They are implemented on
On Wed, 2008-09-17 at 21:20 +0000, Aaron Denney wrote:
On 2008-09-17, Jonathan Cast [EMAIL PROTECTED] wrote:
In my mind pooling vs new-creation is only relevant to process vs
thread in the performance aspects.
Say what? This discussion is entirely about performance --- does
CPython actually have the ability to scale concurrent programs to
multiple processors?
On 2008-09-17, Arnar Birgisson [EMAIL PROTECTED] wrote:
Hi Aaron,
On Wed, Sep 17, 2008 at 23:20, Aaron Denney [EMAIL PROTECTED] wrote:
I entered the discussion as which model is a workaround for the other
-- someone said processes were a workaround for the lack of good
threading in e.g.
Jonathan Cast wrote:
[...]
Huh. I see multi-threading as a workaround for expensive processes,
which can explicitly use shared memory when that makes sense.
That breaks down when you want 1000s of threads. I'm not aware of any
program, on any system, that spawns a new process on each
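(The "1000s of threads" point is concrete in GHC: with each forkIO thread costing only a small TSO plus a 1KB initial stack, as quoted earlier in the thread, spawning ten thousand of them is routine. A hedged sketch, not from the original messages:)

```haskell
-- Sketch: spawn 10,000 lightweight threads, each of which deposits its
-- index into a shared MVar. Spawning this many OS processes or threads
-- would be far more expensive.
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  box <- newEmptyMVar
  let n = 10000 :: Int
  forM_ [1 .. n] $ \i -> forkIO (putMVar box i)
  -- an MVar holds one value at a time, so the threads queue up on it
  xs <- replicateM n (takeMVar box)
  print (length xs)  -- prints 10000
```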
Quoth Jonathan Cast [EMAIL PROTECTED]:
...
| Say what? This discussion is entirely about performance --- does
| CPython actually have the ability to scale concurrent programs to
| multiple processors?
Well, ostensibly the discussion also has something to do with Haskell.
On that premise, may I
It may be of interest that although Erlang has been doing
lightweight concurrency for 20 years,
- you can choose whether you want to use an SMP version that
has as many schedulers as there are cores (plus internal
locking as needed) or a non-SMP version with one scheduler
(and no
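(For comparison with the Erlang setup described above, one scheduler per core: GHC's analogue is its capabilities, chosen at startup with the +RTS -N flag. A small hedged sketch using `numCapabilities` from GHC.Conc to inspect the setting; the printed value depends on how the program is run:)

```haskell
-- Sketch: report how many capabilities (Haskell execution contexts,
-- roughly one per core when run with +RTS -N) this program has.
import GHC.Conc (numCapabilities)

main :: IO ()
main = putStrLn ("capabilities: " ++ show numCapabilities)
```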