Daniel Ruoso wrote:
Hi,

The threading model topic still needs lots of thinking, so I decided to
try out some ideas.

Every concurrency model has its advantages and drawbacks. I've been
wondering about these ideas for a while now, and I think I finally have
a sketch. My primary concerns were:

 1 - It can't require locking: Locking is just not scalable;
 2 - It should perform better with lots of cores even if it suffers
     when you have only a few;
 3 - It shouldn't require complicated memory management techniques that
     will make it difficult to bind native libraries (yes, STM is damn
     hard);
 4 - It should support implicit threading and implicit event-based
     programming (i.e. the feed operator; see the sketch after this
     list);
 5 - It must be easier to use than Perl 5 shared variables;
 6 - It can't use a Global Interpreter Lock (that's already implied by
     1, but, as this is a widely accepted idea in some other
     environments, I thought it would be better to make it explicit).
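
To make item 4 concrete: in present-day Rakudo syntax, a feed pipeline
looks like the sketch below; each stage is at least conceptually a
pipeline stage that an implementation could run concurrently. (The
exact semantics were still being designed when this was written, so
treat this as illustrative, not definitive.)

    (1..10) ==> grep(* %% 2)    # keep the even numbers
            ==> map(* ** 2)     # square each of them
            ==> my @squares;    # gather the pipeline's results
    say @squares;               # [4 16 36 64 100]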

The idea I started with was that every object has an "owner" thread,
and only that thread should talk to it. I ended up with the following;
comments are appreciated:


comments? ideas?
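
As a rough illustration of the "owner thread" shape -- one thread
drains the object's mailbox and is the only one that ever touches its
state, while every other thread just sends messages -- here is a
minimal sketch in present-day Rakudo terms (Channel, Promise, start).
The names are invented for the example; this is not the proposal
itself.

    class OwnedCounter {
        has Int     $!count = 0;
        has Channel $!inbox = Channel.new;

        submethod TWEAK {
            start {                        # the owner thread
                react {
                    whenever $!inbox -> $msg {
                        if $msg ~~ Promise { $msg.keep($!count) }  # reply to a 'get'
                        else               { $!count++ }           # an 'inc' message
                    }
                }
            }
        }

        # Any thread may call these; neither touches $!count directly.
        method inc() { $!inbox.send('inc') }
        method get() {
            my $reply = Promise.new;
            $!inbox.send($reply);
            await $reply;
        }
    }

    my $c = OwnedCounter.new;
    await (^100).map({ start $c.inc });
    say $c.get;                            # 100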



Before discussing the implementation, I think it's worth stating what it is that you are attempting to abstract. For example, is the abstraction intended for a mapping down to a GPU (e.g. OpenCL) with a hierarchical address space, or is it intended for a multicore CPU with linear address space, or is it intended to abstract a LAN, with communication via sockets (reliable TCP? unreliable UDP?), or is it intended to abstract the internet/cloud? Are you thinking in terms of streaming computation where throughput is dominant, or interacting agents where latency is the critical metric?

I'm not sure that it makes sense to talk of a single abstraction that supports all of those environments. However, there may be a bunch of abstractions that can be combined in different ways.

"object belongs to thread" can have two interpretations: one is that the object-thread binding lasts for the life of the object; the other is that a client that wishes to use an object must request ownership, and wait to be granted (in some scenarios, the granting of ownership would require the thread to migrate to the physical processor that owns the state). In many cases, we might find that specific object-state must live in specific "places", but not all of the state that is encapsulated by an object lives in the same place.

Often, an object will encapsulate state that is, itself, accessed via objects. If a model requires delegated access to owned state to be passed through an intermediate object then this may imply significant overhead. A better way to think about such scenarios may be that a client would request access to a subset of methods -- and thus we have "role belongs to thread", not "object belongs to thread". One could imagine that a FIFO object might have a "put" role and a "get" role that producer/consumer clients would (temporarily) own while using (note that granting of ownership may imply arbitration, and later forced-revocation if the resource-ownership is not released/extended before some timeout expires). It may be wrong to conflate "role" as a unit of reuse with "role" as an owned window onto a subset of an object's methods.
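
A sketch of that reading, again with invented names and without the
arbitration/revocation part: the FIFO hands a producer an object that
does only a Putter role and a consumer one that does only a Getter
role, so each client sees just the window it was granted.

    role Putter { method put($item) { ... } }   # producer-facing facet
    role Getter { method get()      { ... } }   # consumer-facing facet

    # One concrete class per facet; both share the same underlying
    # queue but expose only their own method.
    class PutEnd does Putter {
        has Channel $.queue is required;
        method put($item) { $.queue.send($item) }
    }
    class GetEnd does Getter {
        has Channel $.queue is required;
        method get() { $.queue.receive }
    }

    class Fifo {
        has Channel $!queue = Channel.new;
        method put-facet() returns Putter { PutEnd.new(queue => $!queue) }
        method get-facet() returns Getter { GetEnd.new(queue => $!queue) }
    }

    my $fifo     = Fifo.new;
    my $producer = start { my $p = $fifo.put-facet; $p.put($_) for 1..5 };
    my $consumer = start { my $g = $fifo.get-facet; say $g.get for 1..5 };
    await $producer, $consumer;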

Perl6 has a set of language primitives to support various aspects of concurrency. It is indeed interesting to consider how these map to vastly different computation platforms: OpenCL vs OpenMP vs the cloud. It seems a little premature to be defining roles (e.g. RemoteInvocation) without defining the mapping of the core operators to these various models of computation.
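
For instance (using present-day Rakudo syntax as a stand-in), a hyper
pipeline only asserts that the work may be parallelised; whether that
maps to a CPU thread pool, an OpenCL kernel, or a fleet of cloud
workers is exactly the part that still needs defining.

    my @input  = 1..1_000_000;
    # .hyper says "batches of this may run in parallel, somewhere";
    # current Rakudo maps it to a CPU thread pool, nothing more exotic.
    my @output = @input.hyper(batch => 4096).map(* ** 2);
    say @output.elems;    # 1000000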


Dave.
