[perl6/specs] 8b6d0b: Elaborate a bit on exit, END blocks and threads

2014-08-13 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/perl6/specs
  Commit: 8b6d0bb3fef29ad61540730c8ff99b5c69c99709
  
https://github.com/perl6/specs/commit/8b6d0bb3fef29ad61540730c8ff99b5c69c99709
  Author: Elizabeth Mattijsen l...@dijkmat.nl
  Date:   2014-08-11 (Mon, 11 Aug 2014)

  Changed paths:
M S29-functions.pod

  Log Message:
  ---
  Elaborate a bit on exit, END blocks and threads




Re: Lessons to learn from ithreads (was: threads?)

2010-12-05 Thread Joshua ben Jore
On Tue, Oct 12, 2010 at 3:46 PM, Tim Bunce tim.bu...@pobox.com wrote:
 On Tue, Oct 12, 2010 at 03:42:00PM +0200, Leon Timmermans wrote:
 On Mon, Oct 11, 2010 at 12:32 AM, Ben Goldberg ben-goldb...@hotmail.com 
 wrote:
  If thread-unsafe subroutines are called, then something like ithreads
  might be used.

 For the love of $DEITY, let's please not repeat ithreads!

 It's worth remembering that ithreads are far superior to the older
 5005threads model, where multiple threads ran within a single interpreter.
 [Shudder]

 It's also worth remembering that real O/S level threads are needed to
 work asynchronously with third-party libraries that would block.
 Database client libraries that don't offer async support are an
 obvious example.

Hi,
I'm showing up only because I happened to check my Perl 6 inbox. For
various work related reasons, I'd peeked my head into a couple other
language VMs threading implementations. Seems relevant to mention more
possibilities:

ruby-1.8:
- green threads
- single actual process
- scheduling handled by switching to something else after N
opcodes are dispatched
- system() and other blocking system calls are implemented as
non-blocking alternatives
- C extensions must also use non-blocking code and be written
to call back to the scheduling core
- able to share data easily without onerous user-level
synchronization because there's really no such thing as being
concurrent

ruby-1.9 + python:
- real threads
- global interpreter lock over the core so they're not CPU concurrent
- don't have the story for C extensions
- able to share data easily without onerous user-level
synchronization because there's really no such thing as being
concurrent

jruby:
- Java
- real threads
- fully concurrent
- mostly can't use C extensions
- able to share data easily without onerous user-level
synchronization because ... ? Java magic?

Those implementations all do very well for tasks where actual CPU
concurrency isn't needed. A common sweet spot is web services and
other things that divide their time over IO waits. They also do well by
not having to instantiate separable VMs per thread. They also don't
require the user (as in Perl 5) to carefully mark their data as shared
because by default everything is shared (but then they don't have
actual concurrency either).

They do poorly when expected to take advantage of multiple cores. When
using this kind of concurrent software for web services, I've
compensated by just running enough processes to keep my CPUs busy. I
got the advantage of having something that behaved like threads but
was extremely easy to work with. This is very unlike my experience
with Perl 5 threads which I still fear to work with (mostly because I
worry of dangling pointers from difficult to spot miscellaneous magic
attachments).

Our own threading story could use tricks from the above and include
more than just what you've mentioned.

Josh


Re: Lessons to learn from ithreads (was: threads?)

2010-12-02 Thread Tim Bunce
On Fri, Oct 15, 2010 at 11:04:18AM +0100, Tim Bunce wrote:
 On Thu, Oct 14, 2010 at 11:52:00PM -0400, Benjamin Goldberg wrote:
  From: tim.bu...@pobox.com
 
  So I'd like to use this sub-thread to try to identify what lessons we
  can learn from ithreads. My initial thoughts are:
 
  - Don't clone a live interpreter.
  Start a new thread with a fresh interpreter.
 
  - Don't try to share mutable data or data structures.
  Use message passing and serialization.
 
 If the starting subroutine for a thread is reentrant, then no message 
  passing is needed,
 and the only serialization that might be needed is for the initial 
  arguments and for the
 return values (which will be gotten by the main thread via join).
 As for starting a new thread in a fresh interpreter, I think that it 
  might be necessary to
 populate that fresh interpreter with (copies of) data which is reachable 
  from the
 subroutine that the thread calls... reachability can probably be 
  identified by using
 the same technique the garbage collector uses.  This would provide an 
  effect similar to
 ithreads, but only copying what's really needed.
 To minimize copying, we would only treat things as reachable when we 
  have to -- for
 example, if there's no eval-string used in a given sub, then the sub only 
  reaches those
 scopes (lexical and global) which it actually uses, not every scope that 
  it could use.
 
 Starting an empty interpreter, connected to the parent by some
 'channels', is simple to understand, implement and test.

I was recently reminded of the ongoing formal specification of Web Workers,
which fits that description quite well:

http://www.whatwg.org/specs/web-workers/current-work/

Since no one seems to have mentioned it in the thread I thought I would.

From the intro:

This specification defines an API for running scripts in the background
independently of any user interface scripts.

This allows for long-running scripts that are not interrupted by scripts
that respond to clicks or other user interactions, and allows long tasks
to be executed without yielding to keep the page responsive.

Workers (as these background scripts are called herein) are relatively
heavy-weight, and are not intended to be used in large numbers. For
example, it would be inappropriate to launch one worker for each pixel
of a four megapixel image. The examples below show some appropriate uses
of workers.

Generally, workers are expected to be long-lived, have a high start-up
performance cost, and a high per-instance memory cost.

Tim.


 In contrast, I suspect the kind of partial-cloning you describe above
 would be complex, hard to implement, hard to test, and fragile to use.
 It is, for example, more complex than ithreads, so the long history of
 ithreads bugs should serve as a warning.
 
 I'd rather err on the side of simplicity.
 
 Tim.
 


Re: Ruby Fibers (was: threads?)

2010-11-07 Thread Mark J. Reed
On Fri, Oct 15, 2010 at 10:22 AM, B. Estrade estr...@gmail.com wrote:
 Pardon my ignorance, but are continuations the same thing as
 co-routines, or is it more primitive than that?

Continuations are not the same thing as coroutines, although they can
be used to implement coroutines - in fact, continuations can be used
to implement any sort of flow control whatsoever, because they are a
way of generalizing flow control.  Goto's, function calls, coroutines,
setjmp/longjmp, loops, exception throwing and catching - these and
more can all be regarded as special cases of continuation
manipulation.

A continuation is just a snapshot of a point in a program's run, which
can then be 'called' later to return control to that point.  The
entire execution context is preserved, so you can call the same
continuation multiple times, re-enter a function that has already
returned, etc.   But state changes are not undone, so the program can
still behave differently after the continuation is called.
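
Perl 6 does not expose first-class continuations directly, but gather/take
gives a taste of the suspend-and-resume control flow described above, in
coroutine/generator form rather than as a true re-enterable continuation;
a minimal sketch in present-day Raku:

my @fib = lazy gather {
    # the block suspends at each `take` and resumes where it left off
    # the next time a value is demanded
    my ($a, $b) = 0, 1;
    loop { take $a; ($a, $b) = $b, $a + $b }
}
say @fib[^10];    # (0 1 1 2 3 5 8 13 21 34)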

-- 
Mark J. Reed markjr...@gmail.com


Re: threads?

2010-10-24 Thread Christian Mueller
I would implement threads in the following form:

$thread_counter = 0;
$global = lock;

$thread = new thread( \&thread_sub );
$thread->start();

sub thread_sub {
    lock( $global ) {
        print "i'm thread ", ++$thread_counter, "\n";
    }
}

It's a mixture of ithreads and the C# threading model. The thread works in
the same interpreter. You have to do the locking yourself. That would make
it lightweight and give you the power to do everything you want. I
don't think that normal threads are very difficult to understand. But it
gives the highest flexibility.
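
For comparison, here is roughly the same pattern written against present-day
Raku's Lock and Thread classes; these post-date the discussion above, so read
it as an illustrative sketch of the locking style, not the API being proposed:

my $lock    = Lock.new;
my $counter = 0;

my @threads = do for 1..4 {
    Thread.start(sub {
        # protect() runs the block while holding the lock
        $lock.protect({ say "i'm thread {++$counter}" });
    });
}
.finish for @threads;    # wait for every thread to end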




Re: threads?

2010-10-22 Thread Aaron Sherman
On Thu, Oct 21, 2010 at 6:04 PM, Darren Duncan dar...@darrenduncan.netwrote:

 Aaron Sherman wrote:



 Things that typically precipitate threading in an application:

   - Blocking IO
   - Event management (often as a crutch to avoid asynchronous code)
   - Legitimately parallelizable, intense computing

 Interestingly, the first two tend to be where most of the need comes from
 and the last one tends to be what drives most discussion of threading.



 The last one in particular would legitimately get attention when one
 considers that it is for this that the concern about using multi-core
 machines efficiently comes into play.


That sounds great, but what's the benefit to a common use case? Sorting
lists with higher processor overhead and waste heat in applications that
traditionally weren't processor-bound in the first place?

Over the past 20+ years, I've seen some very large, processor-bound
applications that could (and in some cases, did) benefit from threading over
multiple cores. However, they were so far in the minority as to be nearly
invisible, and in many cases such applications can simply be run multiple
times per host in order to VERY efficiently consume every available
processor.

The vast majority of my computing experience has been in places where I'm
actually willing to use Perl, a grossly inefficient language (I say this,
coming as I do from C, not in comparison to other HLLs), because my
performance concerns are either non-existent or related almost entirely to
non-trivial IO (i.e. anything sendfile can do).


  The first 2 are more about lowering latency and appearing responsive to a
 user on a single core machine.


Write me a Web server, and we'll talk. Worse, write a BitTorrent client that
tries to store its results into a high performance, local datastore without
reducing theoretical, back-of-the-napkin throughput by a staggering amount.
Shockingly enough, neither of these frequently used examples are
processor-bound.

The vast majority of today's applications are written with network
communications in mind to one degree or another. The user isn't so much
the interesting part as servicing network and disk IO responsively enough that
hardware and network protocol stacks wait on you to empty or fill a buffer
as infrequently as possible. This is essential in such rare circumstances
as:

   - Database intensive applications
   - Moving large data files across wide area networks
   - Parsing and interpreting highly complex languages inline from
   data received over multiple, simultaneous network connections (sounds like
   this should be rare, but your browser does it every time you click on a
   link)

Just in working with Rakudo, I have to use git, make and Perl itself, all of
which can improve CPU performance all they like, but will ultimately run
slow if they don't handle reading dozens of files, possibly from multiple IO
devices (disks, network filesystems, remote repositories, etc) as
responsively as possible.

Now, to back up and think this through, there is one place where multi-core
processor usage is going to become critical over the next few years: phones.
Android-based phones are going multi-core within the next six months. My
money is on a multi-core iPhone within a year. These platforms are going to
need to take advantage of multiple cores for primarily single-application
performance in a low-power environment.

So, I don't want you to think that I'm blind to the need you describe. I
just don't want you to be unrealistic about the application balance out
there.


 I think that Perl 6's implicit multi-threading approach such as for
 hyperops or junctions is a good best first choice to handle many common
 needs, the last list item above, without users having to think about it.
  Likewise any pure functional code. -- Darren Duncan


It's very common for people working on the design or implementation of a
programming language to become myopic with respect to the importance of
executing code as quickly as possible, and I'm not faulting anyone for that.
It's probably a good thing in most circumstances, but in this case, assuming
that the largest need is going to be the execution of code turns out to be a
misleading instinct. Computers execute code far, far less than you would
expect, and the cost of failing to service events is often orders of
magnitude greater than the cost of spending twice the number of cycles doing
so.

PS: Want an example of how important IO is? Google has their own multi-core
friendly network protocol modifications to Linux that have been pushed out
in the past 6 months:

http://www.h-online.com/open/features/Kernel-Log-Coming-in-2-6-35-Part-3-Network-support-1040736.html

They had to do this because single cores can no longer keep up with the
network.


Re: threads?

2010-10-21 Thread Aaron Sherman
I've done quite a lot of concurrent programming over the past 23ish years,
from the implementation of a parallelized version of CLIPS back in the late
80s to many C, Perl, and Python projects involving everything from shared
memory to process pooling to every permutation of hard and soft thread
management. To say I'm rusty, however, would be an understatement, and I'm
sure my information is sorely out of date.

What I can contribute to such a conversation, however, is this:

   - Make the concept of process and thread an implementation detail
   rather than separate worlds and your users won't learn to fear one or the
   other.
   - If the programmer has to think about semaphore management, there's
   already a problem.
   - If the programmer's not allowed to think about semaphore management,
   there's already a problem.
   - Don't paint yourself into a corner when it comes to playing nice with
   local interfaces.
   - If your idea of instantiating a thread involves creating a non-OS VM,
   then you're probably lighter weight than Python's threading model, but I'd
    suggest paring it down some more. It's a thread, not a ringworld (I was
   going to say not 'space elevator,' but it seemed insufficient to the
   examples I've seen).


I know that's pretty high-level, but it's what I've got. I think I wrote my
last threaded application in 2007.


Re: threads?

2010-10-21 Thread Aaron Sherman
On Tue, Oct 12, 2010 at 10:22 AM, Damian Conway dam...@conway.org wrote:


 Perhaps we need to think more Perlishly and reframe the entire question.
 Not: What threading model do we need?, but: What kinds of non-sequential
 programming tasks do we want to make easy...and how would we like to be
 able to specify those tasks?


Things that typically precipitate threading in an application:

   - Blocking IO
   - Event management (often as a crutch to avoid asynchronous code)
   - Legitimately parallelizable, intense computing

Interestingly, the first two tend to be where most of the need comes from
and the last one tends to be what drives most discussion of threading.

Perhaps it would make more sense to discuss Perl 6's event model (glib,
IMHO, is an excellent role model, here --
http://en.wikipedia.org/wiki/Event_loop#GLib_event_loop ) and async IO model
before we deal with how to sort a list on 256 cores...
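
For a concrete taste of the event-driven, async-IO style being suggested, here
is a tiny echo server in present-day Raku's react/whenever notation; none of
this existed when the message was written, so it illustrates the style rather
than any 2010 design:

react {
    # each incoming connection fires this block
    whenever IO::Socket::Async.listen('127.0.0.1', 8080) -> $conn {
        # each complete line from the client fires this one
        whenever $conn.Supply.lines -> $line {
            $conn.print("echo: $line\n");
        }
    }
    whenever Promise.in(30) { done }    # stop the event loop after 30 seconds
}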


Re: threads?

2010-10-21 Thread Darren Duncan

Aaron Sherman wrote:

On Tue, Oct 12, 2010 at 10:22 AM, Damian Conway dam...@conway.org wrote:

Perhaps we need to think more Perlishly and reframe the entire question.
Not: What threading model do we need?, but: What kinds of non-sequential
programming tasks do we want to make easy...and how would we like to be
able to specify those tasks?


Things that typically precipitate threading in an application:

   - Blocking IO
   - Event management (often as a crutch to avoid asynchronous code)
   - Legitimately parallelizable, intense computing

Interestingly, the first two tend to be where most of the need comes from
and the last one tends to be what drives most discussion of threading.

Perhaps it would make more sense to discuss Perl 6's event model (glib,
IMHO, is an excellent role model, here --
http://en.wikipedia.org/wiki/Event_loop#GLib_event_loop ) and async IO model
before we deal with how to sort a list on 256 cores...


The last one in particular would legitimately get attention when one considers 
that it is for this that the concern about using multi-core machines efficiently 
comes into play.  The first 2 are more about lowering latency and appearing 
responsive to a user on a single core machine.  I think that Perl 6's implicit 
multi-threading approach such as for hyperops or junctions is a good best first 
choice to handle many common needs, the last list item above, without users 
having to think about it.  Likewise any pure functional code. -- Darren Duncan
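
As a small concrete reminder of what that implicit approach looks like at the
language level (standard Perl 6 features; whether an implementation actually
evaluates them in parallel is entirely up to it):

# hyperops and junctions are order-insensitive, so the compiler is
# permitted, though not required, to spread them across cores
my @squares = (1..10) >>**>> 2;       # elementwise, no visible threads
say @squares;                         # [1 4 9 16 25 36 49 64 81 100]
say so 42 == any(40, 41, 42);         # True; junction members may be tested concurrently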


Re: threads?

2010-10-18 Thread Tim Bunce
On Sun, Oct 17, 2010 at 01:18:09AM +0200, Carl Mäsak wrote:
 Damian (), Matt ():
  Perhaps we need to think more Perlishly and reframe the entire question.
  Not: What threading model do we need?, but: What kinds of non-sequential
  programming tasks do we want to make easy...and how would we like to be
  able to specify those tasks?
 
  I watched a presentation by Guy Steele at the Strange Loop conference on
  Thursday where he talked about non-sequential programming.  One of the
  interesting things that he mentioned was to use the algebraic properties of 
  an
  operation to know when a large grouping of operations can be done 
  non-sequentially.
  For example, we know that the meta reduction operator could take very large 
 lists
  and split them into smaller lists across all available cores when 
  performing certain
  operations, like addition and multiplication.  If we could mark new 
  operators that
  we create with this knowledge we could do this for custom operators too.  
  This isn't
  a new idea, but it seems like it would be a helpful tool in simplifying 
  non-sequential
  programming and I didn't see it mentioned in this thread yet.
 
 This idea seems to be in the air somehow. (Even though all copies of
 the meme might have its roots in that Guy you mention.)
 
  http://irclog.perlgeek.de/perl6/2010-10-15#i_2914961
 
 Perl 6 has all the prerequisites for making this happen. It's mostly a
 question of marking things up with some trait or other.
 
 our multi sub infix:<+>($a, $b) will optimize<associativity> {
 ...
 }
 
 (User-defined ops can be marked in exactly the same way.)
 
 All that's needed after that is a reduce sub that's sensitive to such
 traits. Oh, and threads.

Minimizing the overhead of such a mechanism would be crucial to making
it beneficial for use on non-massive data sets.

For this kind of thing to work well we'd need to have multiple threads
able to work in a single interpreter.

Tim.


RE: Lessons to learn from ithreads (was: threads?)

2010-10-16 Thread Benjamin Goldberg

 Date: Tue, 12 Oct 2010 23:46:48 +0100
 From: tim.bu...@pobox.com
 To: faw...@gmail.com
 CC: ben-goldb...@hotmail.com; perl6-language@perl.org
 Subject: Lessons to learn from ithreads (was: threads?)
 
 On Tue, Oct 12, 2010 at 03:42:00PM +0200, Leon Timmermans wrote:
  On Mon, Oct 11, 2010 at 12:32 AM, Ben Goldberg ben-goldb...@hotmail.com 
  wrote:
   If thread-unsafe subroutines are called, then something like ithreads
   might be used.
  
  For the love of $DEITY, let's please not repeat ithreads!
 
 It's worth remembering that ithreads are far superior to the older
 5005threads model, where multiple threads ran within a single interpreter.
 [Shudder]
 
 It's also worth remembering that real O/S level threads are needed to
 work asynchronously with third-party libraries that would block.
 Database client libraries that don't offer async support are an
 obvious example.
 
 I definitely agree that threads should not be the dominant form of
 concurrency, and I'm certainly no fan of working with O/S threads.
 They do, however, have an important role and can't be ignored.
 
 So I'd like to use this sub-thread to try to identify what lessons we
 can learn from ithreads. My initial thoughts are:
 
 - Don't clone a live interpreter.
 Start a new thread with a fresh interpreter.
 
 - Don't try to share mutable data or data structures.
 Use message passing and serialization.
 
 Tim.

If the starting subroutine for a thread is reentrant, then no message passing
is needed, and the only serialization that might be needed is for the initial
arguments and for the return values (which will be gotten by the main thread via
join).
As for starting a new thread in a fresh interpreter, I think that it might be
necessary to populate that fresh interpreter with (copies of) data which is
reachable from the subroutine that the thread calls... reachability can probably
be identified by using the same technique the garbage collector uses.  This
would provide an effect similar to ithreads, but only copying what's really
needed.
To minimize copying, we would only treat things as reachable when we have to --
for example, if there's no eval-string used in a given sub, then the sub only
reaches those scopes (lexical and global) which it actually uses, not every
scope that it could use.
  

Re: Lessons to learn from ithreads (was: threads?)

2010-10-16 Thread Tim Bunce
On Thu, Oct 14, 2010 at 11:52:00PM -0400, Benjamin Goldberg wrote:
 From: tim.bu...@pobox.com

 So I'd like to use this sub-thread to try to identify what lessons we
 can learn from ithreads. My initial thoughts are:

 - Don't clone a live interpreter.
 Start a new thread with a fresh interpreter.

 - Don't try to share mutable data or data structures.
 Use message passing and serialization.

If the starting subroutine for a thread is reentrant, then no message 
 passing is needed,
and the only serialization that might be needed is for the initial 
 arguments and for the
return values (which will be gotten by the main thread via join).
As for starting a new thread in a fresh interpreter, I think that it might 
 be necessary to
populate that fresh interpreter with (copies of) data which is reachable 
 from the
subroutine that the thread calls... reachability can probably be 
 identified by using
the same technique the garbage collector uses.  This would provide an 
 effect similar to
ithreads, but only copying what's really needed.
To minimize copying, we would only treat things as reachable when we have 
 to -- for
example, if there's no eval-string used in a given sub, then the sub only 
 reaches those
scopes (lexical and global) which it actually uses, not every scope that 
 it could use.

Starting an empty interpreter, connected to the parent by some
'channels', is simple to understand, implement and test.
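
A minimal sketch of that shape in present-day Raku, with start and Channel
standing in for "fresh interpreter plus channels"; the names and semantics
here are today's, not anything that was specified at the time:

my $requests = Channel.new;
my $replies  = Channel.new;

my $worker = start {
    # iterate requests until the parent closes the channel
    for $requests.list -> $msg {
        $replies.send($msg.uc);    # do some work, send a result back
    }
    $replies.close;
}

$requests.send($_) for <alpha beta gamma>;
$requests.close;
say $replies.list;                 # (ALPHA BETA GAMMA)
await $worker;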

In contrast, I suspect the kind of partial-cloning you describe above
would be complex, hard to implement, hard to test, and fragile to use.
It is, for example, more complex than ithreads, so the long history of
ithreads bugs should serve as a warning.

I'd rather err on the side of simplicity.

Tim.


Re: Ruby Fibers (was: threads?)

2010-10-16 Thread B. Estrade
On Fri, Oct 15, 2010 at 09:57:26AM -0400, Mark J. Reed wrote:
 On Fri, Oct 15, 2010 at 7:42 AM, Leon Timmermans faw...@gmail.com wrote:
  Continuations and fibers are incredibly useful and should be easy to
  implement on parrot/rakudo but they aren't really concurrency. They're
  a solution to a different problem.
 
 I would argue that concurrency isn't a problem to solve; it's one form
 of solution to the problem of maximizing efficiency.
 Continuations/fibers and asynchronous event loops are  different
 solutions to the same problem.

Pardon my ignorance, but are continuations the same thing as 
co-routines, or is it more primitive than that? Also, doesn't this
really just allow context switching outside of the knowledge of a
kernel thread, thus allowing one to implement tasks at the user level?

Concurrency can apply to a lot of different things, but the problem is
now not only implementing an algorithm concurrently but also using the
concurrency available in the hardware efficiently.

Brett

 
 
 
 
 
 
 
 -- 
 Mark J. Reed markjr...@gmail.com

-- 
B. Estrade estr...@gmail.com


Re: Lessons to learn from ithreads (was: threads?)

2010-10-16 Thread Tim Bunce
Earlier, Leon Timmermans wrote:
: * Code sharing is actually quite nice. Loading Moose separately in a  
  
: hundred threads is not. This is not trivial though, Perl being so 
  
: dynamic. I suspect this is not possible without running into the same 
  
: issues as ithreads does.  
  

On Fri, Oct 15, 2010 at 01:22:10PM +0200, Leon Timmermans wrote:
 On Wed, Oct 13, 2010 at 1:13 PM, Tim Bunce tim.bu...@pobox.com wrote:
  If you wanted to start a hundred threads in a language that has good
  support for async constructs you're almost certainly using the wrong
  approach. In the world of perl6 I expect threads to be used rarely and
 for specific unavoidably-blocking tasks, like db access, and where true
  concurrency is needed.
 
 I agree starting a large number of threads is usually the wrong
 approach, but at the same time I see more reasons to use threads than
 just avoiding blocking. We live in a multicore world, and it would be
 nice if it was easy to actually use those cores. I know people who are
 deploying to 24 core systems now, and that number will only grow.
 Processes shouldn't be the only way to utilize that.

We certainly need to be able to make good use of multiple cores.

As I mentioned earlier, we should aim to be able to reuse shared pages
of readonly bytecode and jit-compiled subs. So after a module is loaded
into one interpreter it should be much cheaper to load it into others.
That's likely to be a much simpler/safer approach than trying to clone
interpreters.

Another important issue here is portability of concepts across
implementations of perl6. I'd guess that starting a thread with a fresh
interpreter is likely to be supportable across more implementations than
starting a thread with cloned interpreter.

Also, if we do it right, it shouldn't make much difference if the new
interpreter is just a new thread or also a new process (perhaps even on
a different machine). The IPC should be sufficiently abstracted to just work.

  (Adding thread/multiplicity support to NYTProf shouldn't be too hard.
  I don't have the time/inclination to do it at the moment, but I'll fully
  support anyone who has.)
 
 I hate how you once again make my todo list grow :-p

Well volunteered!  ;)

Tim.


Re: Lessons to learn from ithreads (was: threads?)

2010-10-16 Thread Tyler Curtis
On Fri, Oct 15, 2010 at 10:56 AM, Tim Bunce tim.bu...@pobox.com wrote:
...
 Another important issue here is portability of concepts across
 implementations of perl6. I'd guess that starting a thread with a fresh
 interpreter is likely to be supportable across more implementations than
 starting a thread with cloned interpreter.

...

 Well volunteered!  ;)

 Tim.


Hi, I don't have much to offer on the topic of concurrency, but as
someone who is in the process of slowly implementing a native-ish code
compiler for Perl 6 (technically probably a compiler to LLVM assembly
with the intention of then compiling to native code), I'd like to
remind everyone that not every implementation will have an
interpreter. I don't think you actually necessarily mean an
interpreter here, but rather whatever structure is analogous to that
which, in an interpreter, would hold the interpreter's global state.
If this is the case, I think it may be helpful to state more precisely
what state you think would need to be cloned or recreated between
threads or processes and what would not.

Also, it is important to consider how different designs will affect
the complexity and performance of concurrency primitives for Perl 6
implementations (especially for more common implementation
strategies), but neither interpreter nor VM appears in a quick
grepping of S17. I don't think that should change.

--
Tyler Curtis


Re: threads?

2010-10-16 Thread Carl Mäsak
Damian (), Matt ():
 Perhaps we need to think more Perlishly and reframe the entire question.
 Not: What threading model do we need?, but: What kinds of non-sequential
 programming tasks do we want to make easy...and how would we like to be
 able to specify those tasks?

 I watched a presentation by Guy Steele at the Strange Loop conference on
 Thursday where he talked about non-sequential programming.  One of the
 interesting things that he mentioned was to use the algebraic properties of an
 operation to know when a large grouping of operations can be done 
 non-sequentially.
 For example, we know that the meta reduction operator could take very large 
lists
 and split them into smaller lists across all available cores when performing 
 certain
 operations, like addition and multiplication.  If we could mark new operators 
 that
 we create with this knowledge we could do this for custom operators too.  
 This isn't
 a new idea, but it seems like it would be a helpful tool in simplifying 
 non-sequential
 programming and I didn't see it mentioned in this thread yet.

This idea seems to be in the air somehow. (Even though all copies of
the meme might have its roots in that Guy you mention.)

 http://irclog.perlgeek.de/perl6/2010-10-15#i_2914961

Perl 6 has all the prerequisites for making this happen. It's mostly a
question of marking things up with some trait or other.

our multi sub infix:<+>($a, $b) will optimize<associativity> {
...
}

(User-defined ops can be marked in exactly the same way.)

All that's needed after that is a reduce sub that's sensitive to such
traits. Oh, and threads.
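
A hand-rolled sketch of what such a trait-aware reduce could do once it knows
an operator is associative; nothing below actually reads a will optimize<...>
trait (that part remains hypothetical), and the start/await used here are
present-day Raku rather than anything available at the time:

sub parallel-reduce(&op, @values, :$chunks = 4) {
    # split the input into roughly equal chunks
    my $size  = (@values.elems / $chunks).ceiling;
    my @parts = @values.batch($size);
    # reduce each chunk on its own worker
    my @partials = await @parts.map(-> @chunk { start { @chunk.reduce(&op) } });
    # combine the partial results; only valid if &op is associative
    @partials.reduce(&op);
}

say parallel-reduce(&infix:<+>, 1..1000);    # 500500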

// Carl


Re: threads?

2010-10-16 Thread Matt Follett
On Oct 12, 2010, at 9:22 AM, Damian Conway wrote:

 Perhaps we need to think more Perlishly and reframe the entire question.
 Not: What threading model do we need?, but: What kinds of non-sequential
 programming tasks do we want to make easy...and how would we like to be
 able to specify those tasks?

I watched a presentation by Guy Steele at the Strange Loop conference on 
Thursday where he talked about non-sequential programming.  One of the 
interesting things that he mentioned was to use the algebraic properties of an 
operation to know when a large grouping of operations can be done 
non-sequentially.  For example, we know that the meta reduction operator could 
take very large lists and split them into smaller lists across all available 
cores when performing certain operations, like addition and multiplication.  If 
we could mark new operators that we create with this knowledge we could do this 
for custom operators too.  This isn't a new idea, but it seems like it would be 
a helpful tool in simplifying non-sequential programming and I didn't see it 
mentioned in this thread yet.

Here are the slides to the talk to which I'm referring:
http://strangeloop2010.com/talk/presentation_file/14299/GuySteele-parallel.pdf

~Matt

Re: Lessons to learn from ithreads (was: threads?)

2010-10-15 Thread Leon Timmermans
On Wed, Oct 13, 2010 at 1:13 PM, Tim Bunce tim.bu...@pobox.com wrote:
 If you wanted to start a hundred threads in a language that has good
 support for async constructs you're almost certainly using the wrong
 approach. In the world of perl6 I expect threads to be used rarely and
 for specific unavoidably-blocking tasks, like db access, and where true
 concurrency is needed.

I agree starting a large number of threads is usually the wrong
approach, but at the same time I see more reasons to use threads than
just avoiding blocking. We live in a multicore world, and it would be
nice if it was easy to actually use those cores. I know people who are
deploying to 24 core systems now, and that number will only grow.
Processes shouldn't be the only way to utilize that.

 (Adding thread/multiplicity support to NYTProf shouldn't be too hard.
 I don't have the time/inclination to do it at the moment, but I'll fully
 support anyone who has.)

I hate how you once again make my todo list grow :-p


Re: Ruby Fibers (was: threads?)

2010-10-15 Thread Leon Timmermans
On Wed, Oct 13, 2010 at 1:20 AM, Tim Bunce tim.bu...@pobox.com wrote:
 I've not used them, but Ruby 1.9 Fibers (continuations) and the
 EventMachine Reactor pattern seem interesting.

Continuations and fibers are incredibly useful and should be easy to
implement on parrot/rakudo but they aren't really concurrency. They're
a solution to a different problem.


Re: Ruby Fibers (was: threads?)

2010-10-15 Thread Mark J. Reed
On Fri, Oct 15, 2010 at 7:42 AM, Leon Timmermans faw...@gmail.com wrote:
 Continuations and fibers are incredibly useful and should be easy to
 implement on parrot/rakudo but they aren't really concurrency. They're
 a solution to a different problem.

I would argue that concurrency isn't a problem to solve; it's one form
of solution to the problem of maximizing efficiency.
Continuations/fibers and asynchronous event loops are  different
solutions to the same problem.







-- 
Mark J. Reed markjr...@gmail.com


Re: Ruby Fibers (was: threads?)

2010-10-15 Thread Stefan O'Rear
On Fri, Oct 15, 2010 at 01:42:06PM +0200, Leon Timmermans wrote:
 On Wed, Oct 13, 2010 at 1:20 AM, Tim Bunce tim.bu...@pobox.com wrote:
  I've not used them, but Ruby 1.9 Fibers (continuations) and the
  EventMachine Reactor pattern seem interesting.
 
 Continuations and fibers are incredibly useful and should be easy to
 implement on parrot/rakudo

Forget Parrot, fibers can be implemented in pure Perl 6.

module Fibers;

my @runq;

sub spawn(&entry) is export {
    push @runq, $( gather entry() );
}

sub yield() is export {
    take True;
}

sub scheduler() is export {
    while @runq {
        my $task = shift @runq;
        if $task {
            $task.shift;
            push @runq, $task;
        }
    }
}

-sorear




Ruby Fibers (was: threads?)

2010-10-13 Thread Tim Bunce
On Tue, Oct 12, 2010 at 07:22:33AM -0700, Damian Conway wrote:
 
 What we really need is some anecdotal evidence from folks who are actually
 using threading in real-world situations (in *any* languages). What has worked
 in practice? What has worked well? What was painful? What was error-prone?
 And for which kinds of tasks?
 
 And we also need to stand back a little further and ask: is threading
 the right approach at all? Do threads work in *any* language? Are there
 better metaphors?

I've not used them, but Ruby 1.9 Fibers (continuations) and the
EventMachine Reactor pattern seem interesting.

http://www.igvita.com/2009/05/13/fibers-cooperative-scheduling-in-ruby/
http://www.igvita.com/2010/03/22/untangling-evented-code-with-ruby-fibers/

There's also an *excellent* screencast by Ilya Grigorik.
It's from a Ruby/Rails perspective but he gives a good explanation of
the issues. He shows how writing async code using callbacks rapidly gets
complex and how continuations can be used to avoid that.

Well worth a look:

http://blog.envylabs.com/2010/07/no-callbacks-no-threads-ruby-1-9/

Tim.

p.s. If short on time start at 15:00 and watch to at least 28:00.


Lessons to learn from ithreads (was: threads?)

2010-10-13 Thread Tim Bunce
On Tue, Oct 12, 2010 at 03:42:00PM +0200, Leon Timmermans wrote:
 On Mon, Oct 11, 2010 at 12:32 AM, Ben Goldberg ben-goldb...@hotmail.com 
 wrote:
  If thread-unsafe subroutines are called, then something like ithreads
  might be used.
 
 For the love of $DEITY, let's please not repeat ithreads!

It's worth remembering that ithreads are far superior to the older
5005threads model, where multiple threads ran within a single interpreter.
[Shudder]

It's also worth remembering that real O/S level threads are needed to
work asynchronously with third-party libraries that would block.
Database client libraries that don't offer async support are an
obvious example.

I definitely agree that threads should not be the dominant form of
concurrency, and I'm certainly no fan of working with O/S threads.
They do, however, have an important role and can't be ignored.

So I'd like to use this sub-thread to try to identify what lessons we
can learn from ithreads. My initial thoughts are:

- Don't clone a live interpreter.
Start a new thread with a fresh interpreter.

- Don't try to share mutable data or data structures.
Use message passing and serialization.

Tim.


Re: threads?

2010-10-13 Thread Andy_Bach
I haven't enough smarts to see if this is at all what you're looking for 
but it uses some of the same terms:

http://dpj.cs.uiuc.edu/DPJ/Home.html?cid=nl_ddjupdate_2010-10-12_html

Welcome to the home page for the Deterministic Parallel Java (DPJ) project 
at the University of Illinois at Urbana-Champaign.  
Project Overview
The broad goal of our project is to provide deterministic-by-default 
semantics for an object-oriented, imperative parallel language, using 
primarily compile-time checking.  "Deterministic" means that the program 
produces the same visible output for a given input, in all executions.  
"By default" means that deterministic behavior is guaranteed unless the 
programmer explicitly requests nondeterminism.  This is in contrast to 
today's shared-memory programming models (e.g., threads and locks), which 
are inherently nondeterministic and can even have undetected data races.  
Our paper at HotPar 2009 states our research goals in more detail.  The 
other pages of this site provide additional information about the DPJ type 
system and language.
a
--
Andy Bach
Systems Mangler
Internet: andy_b...@wiwb.uscourts.gov
Voice: (608) 261-5738; 
Cell: (608) 658-1890

No, no, you're not thinking, you're just being logical.
-Niels Bohr, physicist (1885-1962)

Re: threads?

2010-10-13 Thread B. Estrade
On Tue, Oct 12, 2010 at 10:43:44PM +0200, Leon Timmermans wrote:
 On Tue, Oct 12, 2010 at 4:22 PM, Damian Conway dam...@conway.org wrote:
  The problem is: while most people can agree on what have proved to be
  unsatisfactory threading models, not many people can seem to agree on
  what would constitute a satisfactory threading model (or, possibly, 
  models).
 
  What we really need is some anecdotal evidence from folks who are actually
  using threading in real-world situations (in *any* languages). What has 
  worked
  in practice? What has worked well? What was painful? What was error-prone?
  And for which kinds of tasks?
 
 Most languages either implement concurrency in a way that's not very
 useful (CPython, CRuby) or implement it in a way that's slightly
 (Java/C/C++) to totally (perl 5) insane. Erlang is the only language
 I've worked with whose threads I really like, but sadly it's rather
 weak at a lot of other things.
 
 In general, I don't feel that a shared memory model is a good fit for
 a high level language. I'm very much a proponent of message passing.
 Unlike shared memory, it's actually easier to do the right thing than
 not. Implementing it correctly and efficiently is not easier than
 doing a shared memory system though in my experience (I'm busy
 implementing it on top of ithreads; yeah I'm masochist like that).
 
  And we also need to stand back a little further and ask: is threading
  the right approach at all? Do threads work in *any* language? Are there
  better metaphors?
 
  Perhaps we need to think more Perlishly and reframe the entire question.
  Not: What threading model do we need?, but: What kinds of non-sequential
  programming tasks do we want to make easy...and how would we like to be
  able to specify those tasks?
 
 I agree. I would prefer implicit over explicit concurrency wherever possible.

I know you're speaking about the Perl interface to concurrency, but
you seem to contradict yourself because message passing is explicit
whereas shared memory is implicit - two different models, both of
which could be used together to implement a pretty flexible system.

It'd be a shame not to provide a way to either use threads directly or
to fall back to some implicitly concurrent constructs.

Brett

-- 
B. Estrade estr...@gmail.com


Re: threads?

2010-10-13 Thread B. Estrade
On Tue, Oct 12, 2010 at 07:22:33AM -0700, Damian Conway wrote:
 Leon Timmermans wrote:
 
  For the love of $DEITY, let's please not repeat ithreads!
 
 $AMEN!
 
 Backwards compatibility is not the major design criterion for Perl 6,
 so there's no need to recapitulate our own phylogeny here.
 
 The problem is: while most people can agree on what have proved to be
 unsatisfactory threading models, not many people can seem to agree on
 what would constitute a satisfactory threading model (or, possibly, models).
 
 What we really need is some anecdotal evidence from folks who are actually
 using threading in real-world situations (in *any* languages). What has worked
 in practice? What has worked well? What was painful? What was error-prone?
 And for which kinds of tasks?
 
 And we also need to stand back a little further and ask: is threading
 the right approach at all? Do threads work in *any* language? Are there
 better metaphors?

A more general metaphor would be asynchronous tasking, a thread being
a long-running implicit task. Other issues include memory
consistency models, tasking granularity, scheduling, and flexible
synchronization options.

I am coming from the OpenMP world, so a lot of this falls on the
shoulders of the runtime - a clear strength of Perl IMHO. It may be
worth someone taking the time to read what the OpenMP spec has to say
about tasking as well as exploring tasking support on Chapel,
Fortress, X10, and Cilk. PGAS based languages may also offer some
inspirations as a potential alternative to threads or tasks. 

The only scripting language that I know of that supports threading
natively is Qore. I've mentioned this before.

Perl's functional aspects also make it fairly easy to create
concurrency without the worry of side effects, but not everyone
is lucky enough to have a loosely coupled problem or not need i/o.

Now how to distill what's been learned in practice into a Perlish
approach?

 
 Perhaps we need to think more Perlishly and reframe the entire question.
 Not: What threading model do we need?, but: What kinds of non-sequential
 programming tasks do we want to make easy...and how would we like to be
 able to specify those tasks?

There are something like 12 HPC domains that have been identified,
all needing something a little different from the compiler, runtime,
and platform - these do not include things for which Perl is often
(ab)used.

 
 As someone who doesn't (need to) use threading to solve the kinds of
 problems I work on, I'm well aware that I'm not the right person to help
 in this design work. We need those poor souls who already suffer under
 threads to share their tales of constant misery (and their occasional
 moments of triumph) so we can identify successful patterns of use
 and steal^Wborg^Wborrow the very best available solutions.

Are you sure you couldn't use threading over shared memory? :)

Cheers,
Brett

 
 Damian

-- 
B. Estrade estr...@gmail.com


Re: Lessons to learn from ithreads (was: threads?)

2010-10-13 Thread Tim Bunce
On Wed, Oct 13, 2010 at 04:00:02AM +0200, Leon Timmermans wrote:
 On Wed, Oct 13, 2010 at 12:46 AM, Tim Bunce tim.bu...@pobox.com wrote:
  So I'd like to use this sub-thread to try to identify what lessons we
  can learn from ithreads. My initial thoughts are:
 
  - Don't clone a live interpreter.
     Start a new thread with a fresh interpreter.
 
  - Don't try to share mutable data or data structures.
     Use message passing and serialization.
 
 Actually, that sounds *exactly* like what I have been trying to
 implementing for perl 5 based on ithreads (threads::lite, it's still
 in a fairly early state though). My experience with it so far taught
 me that:
 
 * Serialization must be cheap for this to scale. For threads::lite
 this turns out to be the main performance bottleneck. Erlang gets away
 with this because it's purely functional and thus doesn't need to
 serialize between local threads, maybe we could do something similar
 with immutable objects. Here micro-optimizations are going to pay off.

Being able to optionally define objects as structures in contiguous
memory could be a useful optimization. Both for serialization and
general cache-friendly cpu performance. Just a thought.

 * Code sharing is actually quite nice. Loading Moose separately in a
 hundred threads is not. This is not trivial though, Perl being so
 dynamic. I suspect this is not possible without running into the same
 issues as ithreads does.

If you wanted to start a hundred threads in a language that has good
support for async constructs you're almost certainly using the wrong
approach. In the world of perl6 I expect threads to be used rarely and
for specific unavoidably-blocking tasks, like db access, and where true
concurrency is needed.

Also, it should be possible to share read-only bytecode and perhaps
read-only jit'd executable pages, to avoid the full cost of reloading
modules.

 * Creating a thread (/interpreter) should be as cheap as possible,
 both in CPU time and in memory. Creating an ithread is relatively
 expensive, especially memory-wise. You can't realistically create a very
 large number of them the way you can in Erlang.

Erlang has just the one kind of concurrency mechanism: the thread
so you need to create lots of them (which Erlang makes very cheap).

We're looking at two concurrency mechanisms for perl6: Heavy-weight
O/S thread+interpreter pairs (as above), and lightweight async
behaviours within a single interpreter (eg continuations/fibers).

Those lightweight mechanisms are most like Erlang threads. They'll be
cheap and plentiful, so there'll be far less need to start O/S threads.

Tim.

 Leon
 
 (well actually I learned a lot more; like about non-deterministic unit
 tests and profilers that don't like threads, but that's an entirely
 different story)

(Adding thread/multiplicity support to NYTProf shouldn't be too hard.
I don't have the time/inclination to do it at the moment, but I'll fully
support anyone who has.)


Re: threads?

2010-10-13 Thread B. Estrade
On Tue, Oct 12, 2010 at 02:31:26PM +0200, Carl Mäsak wrote:
 Ben ():
  If perl6 can statically (at compile time) analyse subroutines and
  methods and determine if they're reentrant, then it could
  automatically use the lightest weight threads when it knows that the
  entry sub won't have side effects or alter global data.
 
 I'm often at the receiving end of this kind of reply, but...
 
 ...to a first approximation, I don't believe such analysis to be
 possible in Perl 6. Finding out whether something won't have side
 effects is tricky at best, squeezed in as we are between eval,
 exuberant dynamism, and the Halting Problem.

If one knows what variables are shared, some degree of side effect
potential can be determined. But yes, in general, a tough problem.

Brett

 
 // Carl

-- 
B. Estrade estr...@gmail.com


Re: threads? - better metaphors

2010-10-13 Thread Todd Olson

On 2010-Oct-12, at 10:22, Damian Conway wrote:

 What we really need is some anecdotal evidence from folks who are actually
 using threading in real-world situations (in *any* languages). What has worked
 in practice? What has worked well? What was painful? What was error-prone?
 And for which kinds of tasks?
 
 And we also need to stand back a little further and ask: is threading
 the right approach at all? Do threads work in *any* language? Are there
 better metaphors?


 'Channels are a good model of the external world' - Russ Cox
  Threads without Locks, slide 39

Perhaps the work on the 'channel' model done in Plan9 (and Inferno) will be 
helpful.
It has many years of experience in publicly available code, libraries, and 
discussion
archives, and verification tools.

Particularly the work of Russ Cox
   http://swtch.com/~rsc/

Particularly 
   Threads without Locks
   Bell Labs, Second International Plan 9 Workshop, December 2007
   http://swtch.com/~rsc/talks/threads07/

This talk has a nice crisp overview of the issues in different models
and mentions several real world applications

   concurrent prime sieve (by McIlroy)
   file system indexer implementation
   publish and subscribe
   re-entrant IO multiplexing window systems
 http://swtch.com/~rsc/thread/cws.pdf   -- amazing 
stuff!
 http://video.google.com/videoplay?docid=810232012617965344
   the classic 'Squinting at Power Series' - and several others (see slide 31)
 http://swtch.com/~rsc/thread/squint.pdf
 (this could be an excellent test suite of any 'threading' implementation)

and extended in the work of PlanB  
 http://lsub.org/ls/planb.html
 http://lsub.org/index.html#demos



This model is available on many OSs in the port of Plan9 to user space
  http://swtch.com/plan9port/
and in C based libthread that builds multiple-reader, multiple-writer finite 
queues.
There is a lot to like and borrow from Plan9, including the 9P2000 protocol as 
a core organizing meme
  http://9p.cat-v.org/faq

The 'Spin' verification tool and its history are *very* interesting also
  http://swtch.com/spin/


Note that many of the people doing 'go' were the ones that did Plan9 


Regards,
Todd Olson

PS   I'd really like to have their channel model available in Perl6
 Many things I'd like to model would work well with channels
 I have (unpublished) Perlish syntax to lay over channels
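
By way of illustration, McIlroy's concurrent prime sieve mentioned above maps
quite directly onto channels; this sketch uses present-day Raku's Channel and
start as stand-ins for whatever Perlish syntax might eventually be laid over
channels:

sub generate(Channel $out) {
    # one worker feeding the naturals from 2 upward
    start { $out.send($_) for 2..* }
}
sub filter(Channel $in, Channel $out, Int $prime) {
    # one worker per prime, passing along only non-multiples
    start {
        loop {
            my $n = $in.receive;
            $out.send($n) unless $n %% $prime;
        }
    }
}

my $ch = Channel.new;
generate($ch);
for ^10 {                        # print the first ten primes
    my $prime = $ch.receive;
    say $prime;
    my $next = Channel.new;
    filter($ch, $next, $prime);
    $ch = $next;
}
exit;                            # abandon the still-running workers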

PPS  Russ has also done some nice work on regular expression engines
   http://swtch.com/~rsc/regexp/

threads?

2010-10-12 Thread Ben Goldberg
Has there been any decision yet over what model(s) of threads perl6
will support?

Will they be POSIX-like? ithread-like? green-thread-like?

It is my hope that more than one model will be supported... something
that would allow the most lightweight threads possible to be used
where possible, and ithread-like behavior for backwards compatibility,
and perhaps something in-between where the lightest threads won't
work, but ithreads are too slow.

If perl6 can statically (at compile time) analyse subroutines and
methods and determine if they're reentrant, then it could
automatically use the lightest weight threads when it knows that the
entry sub won't have side effects or alter global data.

If an otherwise-reentrant subroutine calls other subs which have been
labelled by their authors as thread-safe, then that top subroutine can
also be assumed to be thread-safe.  This would be when the
intermediate weight threads might be used.

If thread-unsafe subroutines are called, then something like ithreads
might be used.

To allow the programmer to force perl6 to use lighter threads than it
would choose by static analysis, he should be able to declare methods,
subs, and blocks to be reentrant or thread-safe, even if they don't
look that way to the compiler.  Of course, he would be doing so at his
own risk, but he should be allowed to do it (maybe with a warning).



Re: threads?

2010-10-12 Thread Leon Timmermans
On Mon, Oct 11, 2010 at 12:32 AM, Ben Goldberg ben-goldb...@hotmail.com wrote:
 If thread-unsafe subroutines are called, then something like ithreads
 might be used.

For the love of $DEITY, let's please not repeat ithreads!


Re: threads?

2010-10-12 Thread Damian Conway
Leon Timmermans wrote:

 For the love of $DEITY, let's please not repeat ithreads!

$AMEN!

Backwards compatibility is not the major design criterion for Perl 6,
so there's no need to recapitulate our own phylogeny here.

The problem is: while most people can agree on what have proved to be
unsatisfactory threading models, not many people can seem to agree on
what would constitute a satisfactory threading model (or, possibly, models).

What we really need is some anecdotal evidence from folks who are actually
using threading in real-world situations (in *any* languages). What has worked
in practice? What has worked well? What was painful? What was error-prone?
And for which kinds of tasks?

And we also need to stand back a little further and ask: is threading
the right approach at all? Do threads work in *any* language? Are there
better metaphors?

Perhaps we need to think more Perlishly and reframe the entire question.
Not: What threading model do we need?, but: What kinds of non-sequential
programming tasks do we want to make easy...and how would we like to be
able to specify those tasks?

As someone who doesn't (need to) use threading to solve the kinds of
problems I work on, I'm well aware that I'm not the right person to help
in this design work. We need those poor souls who already suffer under
threads to share their tales of constant misery (and their occasional
moments of triumph) so we can identify successful patterns of use
and steal^Wborg^Wborrow the very best available solutions.

Damian


RE: threads?

2010-10-12 Thread philippe.beauchamp
Although anecdotal, I've heard good things about Go's channel mechanism as a 
simple lightweight concurrency model and a good alternative to typical 
threading. Channels are first-class in the language and leverage simple 
goroutine semantics to invoke concurrency.


--- Phil



-Original Message-
From: thoughtstr...@gmail.com [mailto:thoughtstr...@gmail.com] On Behalf Of 
Damian Conway
Sent: October 12, 2010 10:23 AM
To: perl6-language@perl.org
Subject: Re: threads?

Leon Timmermans wrote:

 For the love of $DEITY, let's please not repeat ithreads!

$AMEN!

Backwards compatibility is not the major design criterion for Perl 6,
so there's no need to recapitulate our own phylogeny here.

The problem is: while most people can agree on what have proved to be
unsatisfactory threading models, not many people can seem to agree on
what would constitute a satisfactory threading model (or, possibly, models).

What we really need is some anecdotal evidence from folks who are actually
using threading in real-world situations (in *any* languages). What has worked
in practice? What has worked well? What was painful? What was error-prone?
And for which kinds of tasks?

And we also need to stand back a little further and ask: is threading
the right approach at all? Do threads work in *any* language? Are there
better metaphors?

Perhaps we need to think more Perlishly and reframe the entire question.
Not: What threading model do we need?, but: What kinds of non-sequential
programming tasks do we want to make easy...and how would we like to be
able to specify those tasks?

As someone who doesn't (need to) use threading to solve the kinds of
problems I work on, I'm well aware that I'm not the right person to help
in this design work. We need those poor souls who already suffer under
threads to share their tales of constant misery (and their occasional
moments of triumph) so we can identify successful patterns of use
and steal^Wborg^Wborrow the very best available solutions.

Damian


Re: threads?

2010-10-12 Thread Matthew Walton
Damian, I use threads in C++ a lot in my day to day job. We use an
in-house library which isn't much more than a thread class which you
inherit from and give a Run method to, and a load of locks of various
(sometimes ill-defined) kinds.

Let me say: it's not good. Threads with semaphores and mutexes and all
that are just horrible, horrible things. It's probably not helped at
all by how C++ itself has no awareness at all of the threading, so
there are no hints in the code that something runs in a particular
thread, you can't put lock preconditions on functions or data
structures or anything like that...

I'm not sure what a better model is, but what I'd like to see is
something which:

- can enforce that certain bits of data are only accessed if you have
certain locks, at compile time
- can enforce that certain bits of code can only be run when you have
certain locks, at compile time
- can know that you shouldn't take lock B before lock A if you want to
avoid a deadlock
- uses a completely different model that nobody's probably thought of
yet where none of this matters because all those three things are
utterly foul

I always liked Software Transactional Memory, which works very nicely
in Haskell - but not for all solutions. Whatever concurrency model
Perl 6 might support, it's probably going to need more than one of
them. Since the language is so extensible, it may be that the core
should only implement the very basic primitives, and then there are
libraries which provide the rest - some of which might ship alongside
the compiler. I don't know, but I do not want people to end up having
to count semaphores and verify locking integrity by eye because it's
really, truly horrible.

I did read a bit about Go's mechanism, and it did look interesting.
Some systems are very well-modelled as completely independent
processes (which might be threads) throwing messages at each other...

Actually something that's very nice as a mental model for server-type
systems is a core routine which responds to a trigger (say, a new
connection) by spawning a new thread to handle it, which is the only
thing which handles it, and maybe uses something like channels to
interact with any global data store that's required. For that though
you need cheap thread creation or easy thread pool stuff, and you need
to have a global data model which isn't going to completely bottleneck
your performance.
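
To make that shape concrete, here is a rough sketch with channel/promise-style
primitives (illustrative names and syntax only; handle-request stands in for
the real per-connection work):

    my $store = Channel.new;                   # all writes to shared state go through here
    my $writer = start {                       # a single task owns the shared data
        my %hits;
        for $store.list -> $update {           # .list blocks until the channel is closed
            %hits{$update.key} += $update.value;
        }
        say %hits;
    }
    sub handle-connection($id) {               # one lightweight task per connection
        start {
            # ... handle-request($id) would do the real work here ...
            $store.send($id => 1);             # no locks: only $writer touches %hits
        }
    }
    await (1..10).map({ handle-connection("client$_") });
    $store.close;
    await $writer;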

I'm totally rambling now, but I do get the distinct impression from
all my experience that safe concurrency is very difficult to do
quickly in the general case. Of course, the safest concurrency boils
down to sequencing everything and running it all on one core...

On 12 October 2010 16:25,  philippe.beauch...@bell.ca wrote:
 Although anecdotal, I've heard good things about Go's channel mechanism as 
 a simple lightweight concurrency model and a good alternative to typical 
 threading. Channels are first-class in the language and leverage simple 
 goroutine semantics to invoke concurrency.


 --- Phil




Re: threads?

2010-10-12 Thread Jon Lang
When Larry decided that Perl 6 would incorporate concepts from
prototype-based objects, he did so at least in part because it's more
intuitive for people to work with, e.g., a cow than it is to try to
work with the concept of a cow as a thing unto itself.  In a similar
way, I think that Perl's dominant concurrency system ought to be of a
type that people who aren't computer scientists can grok, at least
well enough to do something useful without first having to delve into
the arcane depths of computing theory.

As such, I'm wondering if an Actor-based concurrency model[1] might be
a better way to go than the current threads-based mindset.  Certainly,
it's often easier to think of actors who talk to each other to get
things done than it is to think of processes (or threads) as things
unto themselves.

[1] http://en.wikipedia.org/wiki/Actor_model
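
A minimal sketch of the idea (illustrative syntax only): an actor is just a
task that owns its state and drains a mailbox, so nothing else ever touches
that state directly.

    my $mailbox = Channel.new;
    my $counter = start {
        my $n = 0;                          # private state: only this actor sees it
        for $mailbox.list -> $msg {         # messages are processed one at a time
            given $msg {
                when 'inc'  { $n++ }
                when 'show' { say $n }
            }
        }
    }
    $mailbox.send('inc') for ^3;
    $mailbox.send('show');                  # prints 3
    $mailbox.close;
    await $counter;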

-- 
Jonathan Dataweaver Lang


Re: threads?

2010-10-12 Thread Leon Timmermans
On Tue, Oct 12, 2010 at 4:22 PM, Damian Conway dam...@conway.org wrote:
 The problem is: while most people can agree on what have proved to be
 unsatisfactory threading models, not many people can seem to agree on
 what would constitute a satisfactory threading model (or, possibly, models).

 What we really need is some anecdotal evidence from folks who are actually
 using threading in real-world situations (in *any* languages). What has worked
 in practice? What has worked well? What was painful? What was error-prone?
 And for which kinds of tasks?

Most languages either implement concurrency in a way that's not very
useful (CPython, CRuby) or implement it in a way that's slightly
(Java/C/C++) to totally (perl 5) insane. Erlang is the only language
I've worked with whose threads I really like, but sadly it's rather
weak at a lot of other things.

In general, I don't feel that a shared memory model is a good fit for
a high level language. I'm very much a proponent of message passing.
Unlike shared memory, it's actually easier to do the right thing than
not. Implementing it correctly and efficiently is not easier than
doing a shared memory system, though, in my experience (I'm busy
implementing it on top of ithreads; yeah, I'm a masochist like that).

 And we also need to stand back a little further and ask: is threading
 the right approach at all? Do threads work in *any* language? Are there
 better metaphors?

 Perhaps we need to think more Perlishly and reframe the entire question.
 Not: What threading model do we need?, but: What kinds of non-sequential
 programming tasks do we want to make easy...and how would we like to be
 able to specify those tasks?

I agree. I would prefer implicit over explicit concurrency wherever possible.


Re: threads?

2010-10-12 Thread Leon Timmermans
On Tue, Oct 12, 2010 at 10:28 PM, B. Estrade estr...@gmail.com wrote:
 I agree. I would prefer implicit over explicit concurrency wherever possible.

 I know you're speaking about the Perl interface to concurrency, but
 you seem to contradict yourself because message passing is explicit
 whereas shared memory is implicit - two different models, both of
 which could be used together to implement a pretty flexible system.

With implicit I mean stuff like concurrent hyperoperators and
junctions. Shared memory systems are explicitly concurrent to me
because you have to either explicitly lock or explicitly do a
transaction.
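
A small illustration of the distinction (a sketch): the hyperoperator and the
junction themselves declare that evaluation order is irrelevant, so an
implementation is free to parallelize them without any visible lock or thread
in the source.

    my @xs = 1..5;
    my @ys = 10, 20, 30, 40, 50;
    my @sums = @xs »+« @ys;             # element-wise; no loop, no lock in sight
    say @sums;                          # 11 22 33 44 55

    my @logs = 'ok', 'ok', 'error: disk full';
    say 'something failed' if any(@logs) ~~ /error/;   # junction: order irrelevant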

 It'd be a shame to not provide a way to both use threads directly or
 to fallback to some implicitly concurrent constructs.

I agree


Re: threads?

2010-10-12 Thread Dave Whipp

Damian Conway wrote:



Perhaps we need to think more Perlishly and reframe the entire question.
Not: What threading model do we need?, but: What kinds of non-sequential
programming tasks do we want to make easy...and how would we like to be
able to specify those tasks?



The mindset that I use goes something like "most tasks are potentially
concurrent: sequentialization is an optimization that most people
perform without even thinking".


Generally, I would split concurrency into producer-consumer (i.e. 
message passing) and stream-processing (for hyper and reduction 
operators -- possibly also for feeds, with a kernel per step). When 
dealing with compute-tasks, you're basically just choosing how to map a 
dependency graph to the available compute resources. When dealing with 
external resources (e.g. sockets, GUI) then explicit parallelism (via 
message passing) becomes useful.


P6 already specifies a whole bunch of non-sequential tasks (hypers, 
reductions, feeds, background-lazy lists), so no need to reframe the 
entire question just yet. Implementing the existing concurrency will 
flush out plenty of flaws in the specs.
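
For reference, the already-specced pieces being pointed at look roughly like
this (a sketch, not a new proposal):

    my @prices  = 3.50, 7.25, 1.99;
    my $total   = [+] @prices;                     # reduction
    say $total;                                    # 12.74

    (1..10) ==> map({ $_ ** 2 }) ==> my @squares;  # feed: each stage could be its own kernel
    say @squares;                                  # 1 4 9 ... 100

    my @primes = lazy (1..*).grep(*.is-prime);     # background/lazy list
    say @primes[^5];                               # (2 3 5 7 11)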


Re: threads?

2010-10-12 Thread Karl Brodowsky
I agree that threads are generally a difficult issue to cope with.  What is
worse, there are a lot of Java developers who tell us that it is not
difficult for them,
but in the end the software fails on the production system, for example
because the load is different than on the test system, causing different
threads
to be slowed down to a different extent etc.  So people who are having
difficulties with multithreading still use them a lot and don't admit
the difficulties,
and they might not even appear during testing...

Even though I did see software that heavily uses multithreading and
works well.

On the other hand I think that there are certain tasks that need to use
some kind of parallelism, either for making use of parallel CPU
infrastructure or
for implementing patterns that can more easily be expressed using
something like multithreading.

I think that the approach of running several processes instead of
several threads is something that can be considered in some cases, but I
think it does
come with a performance price tag that might not be justified in all
situations.

Maybe the actor model from Scala is worth looking at, at least the
Scala-guys claim that that solves the issue, but I don't know if that
concept can easily
be adapted for Perl 6.

Best regards,

Karl



Re: Lessons to learn from ithreads (was: threads?)

2010-10-12 Thread Leon Timmermans
On Wed, Oct 13, 2010 at 12:46 AM, Tim Bunce tim.bu...@pobox.com wrote:
 So I'd like to use this sub-thread to try to identify when lessons we
 can learn from ithreads. My initial thoughts are:

 - Don't clone a live interpreter.
    Start a new thread with a fresh interpreter.

 - Don't try to share mutable data or data structures.
    Use message passing and serialization.

Actually, that sounds *exactly* like what I have been trying to
implement for perl 5 based on ithreads (threads::lite, it's still
in a fairly early state though). My experience with it so far taught
me that:

* Serialization must be cheap for this to scale. For threads::lite
this turns out to be the main performance bottleneck. Erlang gets away
with this because it's purely functional and thus doesn't need to
serialize between local threads, maybe we could do something similar
with immutable objects. Here micro-optimizations are going to pay off.

* Code sharing is actually quite nice. Loading Moose separately in a
hundred threads is not. This is not trivial though, Perl being so
dynamic. I suspect this is not possible without running into the same
issues as ithreads does.

* Creating a thread (/interpreter) should be as cheap as possible,
both in CPU time and in memory. Creating an ithread is relatively
expensive, especially memory-wise. You can't realistically create a very
large number of them the way you can in Erlang.

Leon

(well actually I learned a lot more; like about non-deterministic unit
tests and profilers that don't like threads, but that's an entirely
different story)


Re: Gather/Take and threads

2006-12-13 Thread Larry Wall
On Wed, Dec 06, 2006 at 07:44:39PM -0500, Joe Gottman wrote:
: Suppose I have a gather block that spawns several threads, each of which
: calls take several times.  Obviously, the relative order of items returned
: from take in different threads is indeterminate, but is it at least
: guaranteed that no object returned from take is lost?

Currently gather/take is defined over a dynamic scope, and I think
that a different thread is a different dynamic scope (after all,
why does it have its own call stack?), so by default you get nothing
from another thread, and the other thread would get a "take outside
of gather" error.  You'd have to set up some kind of queue from the
other thread and take the output of that explicitly.
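
A sketch of that queue-from-the-other-thread arrangement (the Channel here is
just the explicit queue being described; the syntax is illustrative):

    my $queue  = Channel.new;
    my $worker = start {                 # the other thread: it can't take directly
        $queue.send($_) for 1..5;
        $queue.close;
    }
    my @results = gather {
        take $_ for $queue.list;         # the take happens in the gather's own dynamic scope
    }
    say @results;                        # 1 2 3 4 5
    await $worker;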

The microthreading done by hyperops perhaps doesn't count though.
In playing around the other day with a list flattener without explicit
recursion, I found that pugs currently returns hyper-takes in random
order:

pugs> say ~gather { [1,2,[3,4],5].take }
1 2 4 5 3
pugs> say ~gather { [1,2,[3,4],5].take }
2 1 4 5 3

The random order is according to spec, assuming take is allowed at all.
But perhaps it shouldn't be allowed, since the threaded side of a
hyperop could conceivably become as elaborate as a real thread, and
maybe we should not make a distinction between micro and macro threads.
And it's not like a take can guarantee the order anyway--the hyperop
is merely required to return a structure of the same shape, but that
does not enforce any order on the actual take calls (as shown above).
Only the return values of take end up in the same structure, which is
then thrown away.

In fact, I'd argue that the value returned by take is exactly the
value passed to it, but I don't think that's specced yet.  A take is
just a side effect performed en passant under this view.  Then we
could write things like:

while take foo() {...}
@thisbatch = take bar() {...}

Larry


Re: Gather/Take and threads

2006-12-13 Thread Larry Wall
Of course, it's also possible that the flipside is true--that
gather/take is just another normal way to set up interthread queueing,
if the thread is spawned in the dynamic scope of the gather.
Under that view all the subthreads share the outer dynamic scope.
Maybe that's saner...

Larry


Re: Gather/Take and threads

2006-12-13 Thread Larry Wall
On Wed, Dec 13, 2006 at 11:01:10AM -0800, Larry Wall wrote:
: Of course, it's also possible that the flipside is true--that
: gather/take is just another normal way to set up interthread queueing,
: if the thread is spawned in the dynamic scope of the gather.
: Under that view all the subthreads share the outer dynamic scope.
: Maybe that's saner...

And a subdivision of that view is whether subthreads are naturally
collected at the end of the dynamic scope that spawned them, or
whether they are considered independent unless some dynamic scope
claims them all for collection.  In that case a gather would be one
way (the normal way?) to require termination of all subthreads before
terminating its lazy list.

Larry


Gather/Take and threads

2006-12-06 Thread Joe Gottman
Suppose I have a gather block that spawns several threads, each of which
calls take several times.  Obviously, the relative order of items returned
from take in different threads is indeterminate, but is it at least
guaranteed that no object returned from take is lost?

 

 

Joe Gottman

 



Threads, magic?

2006-10-20 Thread Relipuj

How does one do this:

http://www.davidnaylor.co.uk/archives/2006/10/19/threaded-data-collection-with-python-including-examples/

in perl 6? Assuming get_feed_list, get_feed_contents, parse_feed, and
store_feed_items are handled by modules like LWP and XML::Parser.

Will there be something native and magic like:
   @feeds = threaditize ( Parallelize => fetch_feeds, Parameters => @urls,
Thread_Limit => 20, Timeout => 5 sec);
with @urls being a list of urls, or a list of lists (each sublist would
contain the parameters for fetch_feeds).
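
Probably not as one magic built-in, but the pieces compose; a rough sketch
(fetch_feeds and @urls are the names from above; the stub and everything else
is illustrative only):

    sub fetch_feeds($url) { "feed from $url" }          # stand-in for the real LWP/XML work
    my @urls = <http://example.com/a.rss http://example.com/b.rss>;

    my @feeds = @urls.race(:degree(20)).map: -> $url {  # at most ~20 workers
        my $work = start { fetch_feeds($url) };
        await Promise.anyof($work, Promise.in(5));       # 5-second timeout per feed
        $work.status ~~ Kept ?? $work.result !! Nil;     # Nil for feeds that timed out
    };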


-- Relipuj,
just curious.


Threads and types

2006-09-19 Thread Aaron Sherman
What happens to a program that creates a thread with a shared variable 
between it and the parent, and then the parent modifies the class from 
which the variable derives? Does the shared variable pick up the type 
change? Does the thread see this change?





Re: Threads and Progress Monitors

2003-05-31 Thread Dulcimer

--- Dave Whipp [EMAIL PROTECTED] wrote:
 Dulcimer wrote:
 sub slow_fn {
my $tick = Timer.new(60, { print ... });
return slow_fn_imp @_;
 }
 
 Now if I could just get the compiler to not complain about that
 unused variable...
  
  
  Maybe I'm being dense
  Why not just
   sub slow_fn {
 Timer.new(1, { print . });
 return slow_fn_imp @_;
   }

Geez. I read my response this morning, which I wrote just before going
to bed, and realized that I must've been dead on my feet.

 The problem is that I want the timer to last for the duration of the 
 slow_fn_imp. If I don't assign it to a variable, then it may be GCed
 at any time.

I was making several assumptions which don't hold, apparently, such as
that the underlying Timer would iterate until stopped. Not an ideal
default, lol. I thought the point was to have the function print
dots repeatedly, tho?

 I've just realised, however, that I'm relying on it being destroyed
 on leaving the scope. I'm not sure that the GC guarantees that.
 I might need
 
sub slow_fn {
  my $timer is last { .stop } = Timer.new(60, { print . });
  return slow_fn_imp @_;
}
 
 but that's starting to get cluttered again.

I don't really consider that clutter. It's clear and to the point,
and Does What You Want. How about 

  sub slow_fn {
my $timer is last { .stop } = 
   new Timer secs => 1, reset => 1, code => {print "."};
return slow_fn_imp @_;
  }

so that the timer goes off after a second, prints a dot, and resets
itself to go off again after another second? And I still like the idea
of an expanding temporal window between dots:

  sub slow_fn {
my $pause = 1;
    my $timer is last { .stop } = new Timer secs => $pause++,
       reset => {$pause++},
       code  => {print "."};
return slow_fn_imp @_;
  }

As a sidenote, although it would actually reduce readability here, I'm
still trying to wrap my brain thoroughly around the new dynamics of $_.
Would this work correctly maybe?

  sub slow_fn {
    my $timer is last { .stop } = new Timer secs => $_=1,
       reset => {$_++},
       code  => {print "."};
return slow_fn_imp @_;
  }

Isn't that $_ proprietary to slow_fn such that it *would* work?



Timers (was Threads and Progress Monitors)

2003-05-31 Thread Dave Whipp
Dulcimer [EMAIL PROTECTED] wrote:
 so that the timer goes off after a second, prints a dot, and resets
 itself to go off again after another second? And I still like the idea
 of an expanding temporal window between dots:

   sub slow_fn {
 my $pause = 1;
 my $timer is last { .stop } = new Timer secs = $pause++,
reset = {$pause++},
 code = {print .};
 return slow_fn_imp @_;
   }

I'm thinking there's a way to avoid the $pause variable:

  sub slow_fn
  {
  my $tmp = new Timer( secs => 1, code => { print "." and .reset(.count+1) } );
return slow_fn_imp @_;
  }

But exposing the object like that still bothers me: I shouldn't need the
$tmp, nor the .new. When someone writes the Std::Timer module, we can add a
macro to it such that:

 sub slow_fn
 {
   timeout(1) { print "." and .reset(.count+1) };
return slow_fn_imp @_;
 }

I think the implementation is obvious, given the previous example of the
inline code. Though s/timeout/???/.


A semantic question: what output would you expect for this:

sub silly
{
   timeout(5) { print ".HERE."; sleep 4; print ".THERE." };
   for 1..5 -> $count { sleep 2; print $count };
}

possible answers are

   12.HERE.34.THERE.5
or
   12.HERE..THERE.345

I'm thinking probably the latter, because its easier to launch a thread in
the codeblock than to un-launch it.

 As a sidenote, although it would actually reduce readability here, I'm
 still trying to wrap my brain thoroughly around the new dynamics of $_.
 Would this work correctly maybe?

   sub slow_fn {
 my $timer is last { .stop } = new Timer secs = $_=1,
reset = {$_++},
 code = {print .};
 return slow_fn_imp @_;
   }

 Isn't that $_ proprietary to slow_fn such that it *would* work?

I had to stare at it for a few moments, but yes: I think it should work (if
we define a .reset attribute that accepts a codeblock).

Dave.




Re: Timers (was Threads and Progress Monitors)

2003-05-31 Thread Dulcimer
sub slow_fn {
  my $pause = 1;
  my $timer is last { .stop } = new Timer secs = $pause++,
 reset = {$pause++},
  code = {print .};
  return slow_fn_imp @_;
}
 
 I'm thinking there's a way to avoid the $pause variable:
 
   sub slow_fn
   {
 my $tmp = new Timer( 
 secs=1, code = { print . and .reset(.count+1) });
 return slow_fn_imp @_;
   }
 
 But exposing the object like that still bothers be: I shouldn't need
 the $tmp, nor the .new.

I'm not so sure I agree with losing the new(). I kinda like that just
for readability. Less isn't always more. :)

Ok, how about this:

  sub slow_fn {
temp _.timer is last { .stop } = new Timer ( 
       secs => 1, code => { .reset += print "." }
);
return slow_fn_imp @_;
  }

That's only superficially different, but is a little more aesthetically
satisfying, somehow. Then again, I'm a perverse bastard, lol... :)

On the other hand, if this threads, does each call to slow_fn() get a
unique _, or did I just completely hose the whole process? Could I say

  my temp _.timer is last { .stop } = new Timer ( ... ); # ?

or is it even necessary with temp?

 When someone writes the Std::Timer module, we can
 add a macro to it such that:
 
  sub slow_fn
  {
 timeout(1) { print . and .reset(.count+1) };
 return slow_fn_imp @_;
  }

Dunno -- I see what you're doing, but it's a little *too* helpful.
I'd rather see a few more of the engine parts on this one.

 I think the implementation is obvious, given the previous example of
 the inline code. Though s/timeout/???/.

alarm? trigger(), maybe?
 
 A semantic question: what output would you expect for this:
 
 sub silly {
timeout(5) { print .HERE.; sleep 4; print .THERE. };
for 1..5 - $count { sleep 2; print $count };
 } 
 possible answers are
12.HERE.34.THERE.5
 or
12.HERE..THERE.345
 I'm thinking probably the latter, because its easier to launch a
 thread in the codeblock than to un-launch it.

un-launch? If they're threaded, aren't they running asynchronously?
I see 12.HERE.34.THERE.5 as the interleaved output. I have no idea what
you mean by un-launch, sorry.

  As a sidenote, although it would actually reduce readability
  here, I'm still trying to wrap my brain thoroughly around the
  new dynamics of $_. Would this work correctly maybe?
 
sub slow_fn {
  my $timer is last { .stop } = new Timer secs = $_=1,
 reset = {$_++},
  code = {print .};
  return slow_fn_imp @_;
}
 
  Isn't that $_ proprietary to slow_fn such that it *would* work?
 
 I had to stare at it for a few moments, but yes: I think it should
 work (if we define a .reset attribute that accepts a codeblock).

lol -- I was assuming we'd have to make reset accept codeblocks, and
yes, I'd expect you to have to stare a bit. It's ugly, and I'd rather
create a new variable than do this, tho we've seen that you don't need
to.




Re: Timers (was Threads and Progress Monitors)

2003-05-31 Thread Dave Whipp
Dulcimer [EMAIL PROTECTED] wrote
  But exposing the object like that still bothers be: I shouldn't need
  the $tmp, nor the .new.

 I'm not so sure I agree with losing the new(). I kinda like that just
 for readability. Less isn't always more. :)

 Ok, how about this:

   sub slow_fn {
 temp _.timer is last { .stop } = new Timer (
secs = 1, code = { .reset += print . }
 );
 return slow_fn_imp @_;
   }

Wrong semantics: First, you're assuming that .reset is an attribute, rather
than a command (yes, I believe in command/query separation, where
possible). Second, my intention was that if C<print> ever fails (e.g. broken
pipe), then I'd stop resetting the timer. Your program merely stops
incrementing the timeout.

Even if we assume that the temp _.prop thing works, I'm not sure I'd want
it littering my code. I could see it being used in a macro defn though.

 Dunno -- I see what you're doing, but it's a little *too* helpful.
 I'd rather see a few more of the engine parts on this one.

We can expose a few more parts by having the macro return the timer object,
so you could write:

my $timer = timeout(60) { ... };

  A semantic question: what output would you expect for this:
 
  sub silly {
 timeout(5) { print .HERE.; sleep 4; print .THERE. };
 for 1..5 - $count { sleep 2; print $count };
  }
  possible answers are
 12.HERE.34.THERE.5
  or
 12.HERE..THERE.345
  I'm thinking probably the latter, because its easier to launch a
  thread in the codeblock than to un-launch it.

 un-launch? If they're threaded, aren't they running asynchronously?
 I see 12.HERE.34.THERE.5 as the interleaved output. I have no idea what
 you mean by un-launch, sorry.

Sorry, it was my feeble attempt at humor.

What I was getting at is that, if we assume the codeblock executes as a
coroutine, then you'd get the latter output. If you wanted a thread, you
could write:

  timeout(5) { thread { ... } };

but if we assume that the codeblock is launched as an asynchronous thread,
then there is no possible way to coerce it back into the coroutine (i.e. to
un-launch it).


Now here's another semantics question: would we want the following to be
valid?

sub slow
{
   timeout(60) { return undef but "Error: timed out" };
   return @slow_imp;
}

How about:

sub slow
{
   timeout(60) { throw TimeoutException.new("Error: slow_fn timed out") };
   return @slow_imp;
}

Dave.




Re: Timers (was Threads and Progress Monitors)

2003-05-31 Thread Dulcimer

--- Dave Whipp [EMAIL PROTECTED] wrote:
 Dulcimer [EMAIL PROTECTED] wrote
   But exposing the object like that still bothers be: I shouldn't
   need the $tmp, nor the .new.
 
  I'm not so sure I agree with losing the new(). I kinda like that
  just for readability. Less isn't always more. :)
 
  Ok, how about this:
 
sub slow_fn {
  temp _.timer is last { .stop } = new Timer (
 secs = 1, code = { .reset += print . }
  );
  return slow_fn_imp @_;
}
 
 Wrong semantics: First, you're assuming that .reset is an attribute,
 rather than a command (Yes, I believe the command/query separation,
 where possible).

Ok. And that leads to the next thing --

 Second, My intention was that if Cprint ever fails (e.g. broken
 pipe), then I'd stop resetting the timer. Your program meerly stops
 incrementing the timeout.

Agreed. 

 Even if we assume that the temp _.prop thing works, I'm not sure
 I'd want it littering my code. I could see it being used in a macro
 defn though.

Maybe. It isn't pretty, but I've seen worse. Hell, I've posted worse.
:)

  Dunno -- I see what you're doing, but it's a little *too* helpful.
  I'd rather see a few more of the engine parts on this one.
 
 We can expose a few more parts by having the macro return the timer
 object, so you could write:
 
 my $timer = timeout(60) { ... };

Ok, poorly phrased on my part. I just meant I'd like to visually see
more of what's going on in the code. In other words, I'm not fond of
the syntax proposal. I find C<timeout(60) { ... }> too terse, and would
rather see a more verbose version. Merely a style issue, though. Still,
your response to what it *looked* like I meant is a good idea, too.

   A semantic question: what output would you expect for this:
  
   sub silly {
  timeout(5) { print .HERE.; sleep 4; print .THERE. };
  for 1..5 - $count { sleep 2; print $count };
   }
   possible answers are
  12.HERE.34.THERE.5
   or
  12.HERE..THERE.345
   I'm thinking probably the latter, because its easier to launch a
   thread in the codeblock than to un-launch it.
 
  un-launch? If they're threaded, aren't they running asynchronously?
  I see 12.HERE.34.THERE.5 as the interleaved output. I have no idea
  what you mean by un-launch, sorry.
 
 Sorry, it was my feeble attempt at humor.

lol -- and I was too dense to get it. :)

 What I was getting at is that, if we assume the codeblock executes
 asa coroutine, then you'd get the latter output. If you wanted a
 thread, you could write:
 
   timeout(5) { thread { ... } };
 
 but if we assume that the codeblock is launched as an asynchronous
 thread, then there is no possible way the coerce it back into the
 coroutine (i.e. to un-launch it).

Ah. Ok, but if that's the case, you could as easily write it 

   timeout(5) { coro { ... } };

and have the compiler build it accordingly. The same logic works either
way from that end. Threads seem more sensible for a timeout, but as a
general rule I'd probably prefer to see implicit coro's rather than
implicit threads as the default.

 Now here's another semantics question: would we want the following to
 be valid?
 
 sub slow
 {
timeout(60) { return undef but Error: timed out };
return @slow_imp;
 }

Dunno... returns from a single routine that are in different threads
could produce some real headaches.

 How about:
 
 sub slow {
timeout(60) { 
    throw TimeoutException.new("Error: slow_fn timed out")
};
return @slow_imp;
 }

I like that a lot better, but I'm still not sure how it would fly.
(Sorry for the reformat, btw -- got pretty cramped on my screen.)



Re: Timers (was Threads and Progress Monitors)

2003-05-31 Thread Dave Whipp

Dulcimer [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 I find C<timeout(60) { ... }> too terse, and would
 rather see a more verbose version

I'm obviously more lazy than you ;-).

 Ah. Ok, but if that's the case, you could as easily write it

timeout(5) { coro { ... } };

 and have the compiler build it accordingly. The same logic works either
 way from that end.

Given that it's a macro, it's probably true that I could do some up-front
manipulation like that. But it's a lot more work than launching a thread.
However, given that it is a macro, we could eliminate the outer curlies.
Let's see if I know how to write a macro...

   macro timeout is parsed( rx:w/
  $secs := <Perl.expression>
  $code_type := ( thread | coro )?
  $code := <Perl.block> /)
  {
  $code_type eq 'thread' and $code = { thread $code };
  my \$$(Perl.tmpvarname) = Timer.new(secs => $secs, code => $code);
  }

  timeout(60) coro { ... }
  timeout(60) thread { ... }
  timeout(60) { ... } # default is coro

Even if it works, I have a feeling that the macro has plenty of scope for
improvement.

  Now here's another semantics question: would we want the following to
  be valid?
 
  sub slow
  {
 timeout(60) { return undef but Error: timed out };
 return slow_imp;
  }

 Dunnoreturns from a single routing that are in different threads
 could produce some real headaches.

  How about:
 
  sub slow {
 timeout(60) {
throw TimeoutException.new(Error: slow_fn timed out)
 };
 return slow_imp;
  }

 I like thata lot better, but I'm still not sure how it would fly.

Actually, I think that both have pretty-much the same problems. I assume
@Dan could work out a way to get the mechanics to work (basically a mugging:
we kill a thread/coro and steal its continuation point). But the semantics
of the cleanup could cause a world of pain. But realistically, a common use
of a timeout is to kill something that's been going on too long. I suppose
we could require some level of cooperation from the dead code.


Dave.




Threads and Progress Monitors

2003-05-30 Thread Dave Whipp
OK, we've beaten the producer/consumer thread/coro model to death. Here's a
different use of threads: how simple can we make this in P6:

  sub slow_func
  {
  my $percent_done = 0;
  my $tid = thread { slow_func_imp( \$percent_done ) };
  thread { status_monitor($percent_done) and sleep 60 until
$tid.done };
  return wait $tid;
  }

I think this would work under Austin's A17; but it feels a bit clunky. The
fact that the sleep 60 isn't broken as soon as the function is done is
untidy, though I wouldn't want to start killing threads.

perhaps:

  {
  ...
  $tid = thread { slow... }
  status_monitor(\$percent_done) and wait(60 | $tid) until $tid.done;
  return $tid.result;
  }

The thing is, that wait 60 probably can't work -- replace C<60> with
C<$period>, and the semantics change. There are some obvious hacks that
could work here: but is there a really nice solution? Ideally, we won't need
the low-level details such as C<$tid>.


Dave.




Re: Threads and Progress Monitors

2003-05-30 Thread Austin Hastings

--- Dave Whipp [EMAIL PROTECTED] wrote:
 OK, we've beaten the producer/consumer thread/coro model to death.
 Here's a
 different use of threads: how simple can we make this in P6:
 
   sub slow_func
   {
   my $percent_done = 0;
   my $tid = thread { slow_func_imp( \$percent_done ) };
   thread { status_monitor($percent_done) and sleep 60 until
 $tid.done };
   return wait $tid;
   }
 
 I think this would work under Austin's A17; but it feels a bit
 clunky. The
 fact that the sleep 60 isn't broken as soon as the function is done
 is
 untidy, though I wouldn't want to start killing thread.
 
 perhaps:
 
   {
   ...
   $tid = thread { slow... }
   status_monitor(\$percent_done) and wait(60 | $tid) until
 $tid.done;
   return $tid.result;
   }
 
 The thing is, that wait 60 probably can't work -- replace C60
 with
 C$period, and the semantics change. There are some obvious hacks
 that
 could work here: but is there a really nice solution. Ideally, we
 won't need
 the low level details such as C$tid

sub slow_func_imp {
  my $pct_done = 0;
  ...
  yield $pct_done++;   # Per my recent message
  ...
}

sub slow_func {
  my $tid := thread slow_func_imp;

  status_monitor($tid.resume)
while $tid.active;
}




Re: Threads and Progress Monitors

2003-05-30 Thread Michael Lazzaro
On Thursday, May 29, 2003, at 10:47 AM, Dave Whipp wrote:

OK, we've beaten the producer/consumer thread/coro model to death. 
Here's a
different use of threads: how simple can we make this in P6:
Hey, good example.  Hmm...

Well, for starters I think it wouldn't be a big deal to associate a 
progress attribute with each thread object.  It should be that 
thread's responsibility to fill it out, if it wants to -- so you 
shouldn't ever have to pass \$percent_done as an argument, it should be 
a basic attribute of every thread instance.  That might encourage 
people to add progress calculations to their threads after-the-fact, 
without changing the basic interface of what they wrote.

I'll also claim that I would still prefer the auto-parallel, 
auto-lazy-blocking behavior on the thread results we've mused about 
previously.  So coming from the semantics end, I'd love to see it 
written like this:

   # Declaring a threaded calculation

   sub slow_func_impl is threaded {
   while (...stuff...) {
   ... do stuff ...
   _.thread.progress += 10.0;   # or however you want to 
guesstimate[*] this
   }
   return $result;
   }

   # If you don't care about getting the actual thread object, just the 
result,
   # call it this way:

   {
   ...
   my $result = slow_func_impl(...);
   ...
   return $result;
   }
   # But if you want to get the thread object, so you can monitor its 
progress,
   # call it this way:

   {
   ...
   my $tid = thread slow_func_impl(...);
   while $tid.active {
   status_monitor($tid.progress);
   sleep 60;
   }
   return $tid.result;
   }
To my eye, that looks pretty darn slick.

MikeL

[*] Huh.  Imagine my surprise to find out that my spellcheck considers 
guesstimate to be a real word.  And I always thought that was just a 
spasmostical pseudolexomangloid.
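
The same shape is reachable without new keywords; here the .progress attribute
is emulated with a shared counter (a sketch only -- do-a-chunk stands in for
the real work, and the atomic counter is just one way to share the number
safely):

    sub do-a-chunk($i) { sleep 0.2 }            # placeholder for a slice of real work
    my atomicint $done = 0;
    my $work = start {
        for ^10 { do-a-chunk($_); atomic-fetch-inc($done) }
        'the result';
    }
    until $work {                               # a Promise is true once kept or broken
        say "{$done * 10}% done";
        sleep 1;
    }
    say $work.result;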



Re: Threads and Progress Monitors

2003-05-30 Thread Dave Whipp
Michael Lazzaro [EMAIL PROTECTED] wrote:
 # But if you want to get the thread object, so you can monitor its progress,

 {
 ...
 my $tid = thread slow_func_impl(...);
 while $tid.active {
 status_monitor($tid.progress);
 sleep 60;
 }
 return $tid.result;
 }

 To my eye, that looks pretty darn slick.

You might be a bit frustrated if the slow_func_impl took 61 seconds :-(.
How do we interrupt the C<sleep>? Possibly in the same way as we'd time out
blocking IO operations. But I wonder if this could work:

  my $tid = thread slow_func_impl(...);
  until wait $tid, timeout => 60
  {
  status_monitor($tid.progress);
  }
  return $tid.result;

Here I assume that C<wait> returns a true value if its waited condition
occurs, but false if it times out.

Hmm, a few days ago I tried introducing a syntax for thread with a
sensitivity list in place of an explicit loop-forever thread. Perhaps I can
reuse that syntax:

  my $tid = thread slow_func_impl(...);
  thread $tid | timeout(60)
  {
  when $tid => { return $tid.result }
  default => { status_monitor $tid.progress }
  }

Perhaps a different keyword would be better: C<always> as the looping
counterpart to C<wait> -- then extend C<wait> to accept a code block.

Dave.




Re: Threads and Progress Monitors

2003-05-30 Thread John Macdonald
On Thu, May 29, 2003 at 10:47:35AM -0700, Dave Whipp wrote:
 OK, we've beaten the producer/consumer thread/coro model to death. Here's a
 different use of threads: how simple can we make this in P6:
 
   sub slow_func
   {
   my $percent_done = 0;
   my $tid = thread { slow_func_imp( \$percent_done ) };
   thread { status_monitor($percent_done) and sleep 60 until
 $tid.done };
   return wait $tid;
   }

At first glance, this doesn't need a thread - a
coroutine is sufficient.  Resume the status update
coroutine whenever there has been some progress.
It doesn't wait and poll a status variable, it just
lets the slow function work at its own speed without
interruption until there is a reason to change the
display.

In fact, it probably doesn't need to be a coroutine
either.  A subroutine - display_status( $percent ) -
shouldn't require any code state to maintain, just a
bit of data, so all it needs is a closure or an object.

At second glance, there is a reason for a higher
powered solution.  If updating the display to a new
status takes a significant amount of time, especially
I/O time, it would both block the slow function
unnecessarily and would update for every percent
point change.  Using a separate process or thread
allows the function to proceed without blocking, and
allows the next update to jump ahead to the current
actual level, skipping all of the levels that occurred
while the previous display was happening.  Instead of
sleep, though, I'd use a pipeline and read it with
a non-blocking read until there is no data.  Then,
if the status has changed since the last update, do
a display update and repeat the non-blocking read.
If the status has not changed, do a blocking read to
wait for the next status change.
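
That skip-ahead-to-the-current-level idea maps naturally onto a queue with one
blocking read followed by repeated non-blocking reads; a sketch (do-a-bit and
display-status are placeholders):

    sub do-a-bit($pct)       { sleep 0.01 }              # placeholder work
    sub display-status($pct) { say "$pct% done" }        # placeholder display

    my $progress = Channel.new;
    my $work = start {
        for 1 .. 100 -> $pct { do-a-bit($pct); $progress.send($pct) }
        $progress.close;
    }
    loop {
        my $latest = $progress.receive;                  # block until something changed
        while $progress.poll -> $newer { $latest = $newer }  # then skip ahead to the newest
        display-status($latest);
        last if $latest == 100;
    }
    await $work;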


Re: Threads and Progress Monitors

2003-05-30 Thread Dave Whipp
John Macdonald [EMAIL PROTECTED] wrote:
 At first glance, this doesn't need a thread - a coroutine is sufficient.
 [...] Instead of
 sleep, though, I'd use a pipeline and read it with
 a non-blocking read until there is no data.  ...

++ For the lateral thinking. Definitely a valid solution to the problem, as
given. So I'll change the problem to prevent it: the slow fn is a 3rd-party
blob with no access to source code and no progress indication.

sub slow_fn {
   print "starting slow operation: this sometimes takes half an hour!\n";
   my $tid = thread { slow_fn_imp @_ };
   $start = time;
   loop {
   wait $tid | timeout(60);
   return $tid.result if $tid.done;
   print "... $(time-$start) seconds\n";
   }
}

Still a bit too complex for my taste: perhaps we can use C<timeout> to
generate exceptions:

  my lazy::threaded $result := { slow_fn_imp @_ };
  loop {
timeout(60);
return $result;
CATCH Timeout { print "...$(time)\n" }
 }

At last, no C<$tid>! (Reminder: the suggested semantics of the threaded
variable were that a FETCH to it blocks until the result of the thread is
available).


Dave.




Re: Threads and Progress Monitors

2003-05-30 Thread Luke Palmer
Dave wrote:
 Still a bit too complex for my taste: perhaps we can use Ctimeout to
 generate exceptions:
 
   my lazy::threaded $result := { slow_fn_imp @_ };
   loop {
 timeout(60);
 return $result;
 CATCH Timeout { print ...$(time)\n }
  }
 
 At last, no Ctid! (Reminder: the suggested semantics of the threaded
 variable were that a FETCH to it blocks until the result of the thread is
 available).

To nitpick:

my $result is lazy::threaded := { slow_fn_imp @_ };

Because lazy::threaded isn't the I<return> type, it's the I<variable>
type.

loop {
    timeout(60);
    return $result;
    CATCH {
        when Timeout { print "...$(time)\n" }
    }
}

Because C<CATCH> is like C<given $!>.

I like that elegant use of threaded variables, by the way.

Now write the C<timeout> function :-P.

Luke
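
For the record, with promise-style primitives a non-macro timeout is short; a
sketch (note it merely abandons the slow block on timeout rather than killing
it, which is exactly the cleanup question raised elsewhere in the thread):

    sub timeout($secs, &code) {
        my $work = start { code() };
        await Promise.anyof($work, Promise.in($secs));
        die "timed out after $secs seconds" unless $work.status ~~ Kept;
        $work.result;
    }

    say timeout(2, { 6 * 7 });    # 42, or dies if the block takes longer than 2 seconds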


Re: Threads and Progress Monitors

2003-05-30 Thread Michael Lazzaro
On Thursday, May 29, 2003, at 12:45 PM, Dave Whipp wrote:
Michael Lazzaro [EMAIL PROTECTED] wrote in # But if 
you want
to get the thread object, so you can monitor it's
{
...
my $tid = thread slow_func_impl(...);
while $tid.active {
status_monitor($tid.progress);
sleep 60;
}
return $tid.result;
}
To my eye, that looks pretty darn slick.
You might be a bit frustrated if the slow_func_impl took 61 seconds 
:-(.
How do we interrupt the C<sleep>? Possibly in the same way as we'd 
timeout a
blocking IO operations.
Personally, I'd be happy with just making the C<sleep> a smaller 
number, like one second, or a fifth of a second, or whatever.  You want 
the status_monitor to be updated no more often than it needs to be, but 
often enough that it's not lagging.

But if you really wanted wake-immediately-upon-end, I'd add that as a 
variant of C<sleep>.  For example, you might want a variant that 
blocked until a given variable changed, just like in debuggers; that 
would allow:

{
my $tid = thread slow_func_impl(...);
while $tid.active {
status_monitor($tid.progress);
sleep( 60, watch => \($tid.progress) ); # do you even 
need the '\'?
}
return $tid.result;
}

... which would sleep 60 seconds, or until the .progress attribute 
changed, whichever came first.

You could make more builtins for that, but I think I'd like them to 
just be C<sleep> or C<wait> variants.  Obvious possibilities:

sleep 60; # sleep 60 seconds
sleep( block => $tid );   # sleep until given thread is complete
sleep( watch => \$var );  # sleep until given var changes value
sleep( 60, block => $tid, watch => [\$var1, \$var2, \$var3] );  
five tests

$tid.sleep(...);# sleep the given thread, instead of this one

MikeL



Re: Threads and Progress Monitors

2003-05-30 Thread Michael Lazzaro
On Thursday, May 29, 2003, at 04:48 PM, Luke Palmer wrote:
To nitpick:

my $result is lazy::threaded := { slow_fn_imp @_ };
Pursuing this lazy-threaded variables notion, a question.  Given:

 sub slow_func is threaded {# me likey this 
auto-parallelizing syntax!
 ...
 }

Would we want to say that _both_ of these have the lazy-blocking 
behavior?

 my $result := slow_func();
 print $result;
 my $result  = slow_func();
 print $result;
Or would the first one block at C<print>, but the second block 
immediately at the C<=>?

The obvious answer is that the := binding passes through the 
laziness, but the = assignment doesn't.  But I wonder if that isn't a 
bit too obscure, to put it mildly.

MikeL



Re: Threads and Progress Monitors

2003-05-30 Thread Dulcimer

 sub slow_fn {
my $tick = Timer.new(60, { print ... });
return slow_fn_imp @_;
 }
 
 Now if I could just get the compiler to not complain about that
 unused variable...

Maybe I'm being dense
Why not just
 sub slow_fn {
   Timer.new(1, { print "." });
   return slow_fn_imp @_;
 }

or maybe even

 sub slow_fn {
   my $tick = 1;
   Timer.new({$tick++}, { print "." });
   return slow_fn_imp @_;
 }
For a slowly slowing timer
?

Or to my taste,
 sub slow_fn {
   Timer.new(60, { print "..." });
   return slow_fn_imp @_;
 }



Re: Threads and Progress Monitors

2003-05-30 Thread Dave Whipp
Dulcimer wrote:
sub slow_fn {
  my $tick = Timer.new(60, { print ... });
  return slow_fn_imp @_;
}
Now if I could just get the compiler to not complain about that
unused variable...


Maybe I'm being dense
Why not just
 sub slow_fn {
   Timer.new(1, { print . });
   return slow_fn_imp @_;
 }
The problem is that I want the timer to last for the duration of the 
slow_fn_imp. If I don't assign it to a variable, then it may be GCed at 
any time.

I've just realised, however, that I'm relying on it being destroyed on 
leaving the scope. I'm not sure that the GC guarantees that. I might need

  sub slow_fn {
    my $timer is last { .stop } = Timer.new(60, { print "." });
return slow_fn_imp @_;
  }
but that's starting to get cluttered again.

Dave.



Re: Threads and Progress Monitors

2003-05-30 Thread Paul Johnson

Dave Whipp said:

 I've just realised, however, that I'm relying on it being destroyed on
 leaving the scope. I'm not sure that the GC guarentees that.

GC doesn't, but I would be surprised if Perl 6 doesn't and in that case
Parrot will be accommodating.

Take a look at the recent p6i archives for the gory details.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net



Re: How shall threads work in P6?

2003-04-04 Thread Dave Mitchell
On Tue, Apr 01, 2003 at 08:44:25AM -0500, Dan Sugalski wrote:
 There isn't any, particularly. We're doing preemptive threads. It 
 isn't up for negotiation. This is one of the few things where I truly 
 don't care what people's opinions on the matter are.

Sorry, I haven't been following this too closely - but is it the intention
to support the 5.005, or the ithreads model (or both? or neither?).

-- 
To collect all the latest movies, simply place an unprotected ftp server
on the Internet, and wait for the disk to fill


Re: How shall threads work in P6?

2003-04-04 Thread Larry Wall
On Thu, Apr 03, 2003 at 08:37:46PM +0100, Dave Mitchell wrote:
: On Tue, Apr 01, 2003 at 08:44:25AM -0500, Dan Sugalski wrote:
:  There isn't any, particularly. We're doing preemptive threads. It 
:  isn't up for negotiation. This is one of the few things where I truly 
:  don't care what people's opinions on the matter are.
: 
: Sorry, I haven't been following this too closely - but is it the intention
: to support the 5.005, or the ithreads model (or both? or neither?).

At a language level this has very little to do with whether threads are
implemented preemptively or not.  The basic philosophical difference
between the pthreads and ithreads models is in whether global variables
are shared by default or not.  The general consensus up till now is
that having variables unshared by default is the cleaner approach,
though it does cost more to spawn threads that way, at least the way
Perl 5 implements it.  The benefit of ithreads over pthreads may be
somewhat lessened in Perl 6.  There may well be intermediate models in
which some kinds of variables are shared by default and others are not.
We don't know how that will work--we haven't run the experiment yet.

For anything other than existential issues, I believe that most
arguments about the future containing the words either, or,
both, or neither are likely to be wrong.  In particular, human
psychology is rarely about the extremes of binary logic.  As creatures
we are more interested in balance than in certainty, all the while
proclaiming our certainty that we've achieved the correct balance.

In short, I think both the pthreads and ithreads models are wrong
to some extent.

Larry


Re: How shall threads work in P6? [OT :o]

2003-04-04 Thread Paul

--- Larry Wall [EMAIL PROTECTED] wrote:
 For anything other than existential issues, I believe that
 most arguments about the future containing the words either,
 or, both, or neither are likely to be wrong. In
 particular, human psychology is rarely about the extremes
 of binary logic. As creatures we are more interested in
 balance than in certainty, all the while proclaiming our
 certainty that we've achieved the correct balance. 
 In short, I think both the pthreads and ithreads models are wrong
 to some extent.
 Larry

<soapbox sincerity=high relevance=low>
And this is why I believe it's best to have a philosopher guiding the
design, as long as he's been properly indoctrinated into the issues
surrounding the relevant mechanics. ;o]
</soapbox>





Re: How shall threads work in P6?

2003-04-01 Thread Simon Cozens
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
 Well, if you optimize for the most common case, throw out threads altogether.
 
 Well, I almost would agree with you since cooperative threading can
 almost entirely be done in perl code, since they are built in
 continuations.  I actually gave an example of that earlier.

You thoroughly missed my point, but then I didn't make it very clearly:
the Huffman-encoding argument works well for language design, but doesn't
apply too well for implementation design.

We'll not bother implementing an exponentiation operator since
exponentiation is only used very rarely in Perl programs and we can
get around it with some shifts, multiplication and a loop if we need
it.

-- 
  They laughed at Columbus, they laughed at Fulton, they laughed at the
   Wright brothers.  But they also laughed at Bozo the Clown.
 -- Carl Sagan


Re: How shall threads work in P6?

2003-04-01 Thread Dan Sugalski
At 11:09 AM -0800 3/31/03, Austin Hastings wrote:
--- Dan Sugalski [EMAIL PROTECTED] wrote:
 At 8:13 PM +0200 3/31/03, Matthijs van Duin wrote:
 On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
 I've been thinking about closures, continuations, and coroutines,
 and
 one of the interfering points has been threads.
 
 What's the P6 thread model going to be?
 
 As I see it, parrot gives us the opportunity to implement
 preemptive
 threading at the VM level, even if it's not available via the OS.
 
 I think we should consider cooperative threading, implemented using
 continuations.  Yielding to another thread would automatically
 happen when a thread blocks, or upon explicit request by the
 programmer.
 
 It has many advantages:
 And one disadvantage:

 Dan doesn't like it. :)

 Well, there are actually a lot of disadvantages, but that's the only
 important one, so it's probably not worth much thought over alternate
 threading schemes for Parrot at least--it's going with an OS-level
 preemptive threading model.
 No, this isn't negotiable.
More information please.
There isn't any, particularly. We're doing preemptive threads. It 
isn't up for negotiation. This is one of the few things where I truly 
don't care what people's opinions on the matter are.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: How shall threads work in P6?

2003-04-01 Thread Dan Sugalski
At 7:35 AM -0800 4/1/03, Austin Hastings wrote:
--- Dan Sugalski [EMAIL PROTECTED] wrote:
 At 11:09 AM -0800 3/31/03, Austin Hastings wrote:
 --- Dan Sugalski [EMAIL PROTECTED] wrote:
   At 8:13 PM +0200 3/31/03, Matthijs van Duin wrote:
   On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
   I've been thinking about closures, continuations, and
 coroutines,
   and
   one of the interfering points has been threads.
   
   What's the P6 thread model going to be?
   
   As I see it, parrot gives us the opportunity to implement
   preemptive
   threading at the VM level, even if it's not available via the
 OS.
   
   I think we should consider cooperative threading, implemented
 using
   continuations.  Yielding to another thread would automatically
   happen when a thread blocks, or upon explicit request by the
   programmer.
   
   It has many advantages:
 
   And one disadvantage:
 
   Dan doesn't like it. :)
 
   Well, there are actually a lot of disadvantages, but that's the
 only
   important one, so it's probably not worth much thought over
 alternate
   threading schemes for Parrot at least--it's going with an
 OS-level
   preemptive threading model.
 
   No, this isn't negotiable.
 
 More information please.
 There isn't any, particularly. We're doing preemptive threads. It
 isn't up for negotiation. This is one of the few things where I truly
 don't care what people's opinions on the matter are.
Okay, but what does OS-level mean? Are you relying on the OS for
implementing the threads (a sub-optimal idea, IMO) or something else?
Yes, we're using the OS-level threading facilities as part of the 
threading implementation.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


How shall threads work in P6?

2003-03-31 Thread Austin Hastings
Okay,

I've been thinking about closures, continuations, and coroutines, and
one of the interfering points has been threads.

What's the P6 thread model going to be?

As I see it, parrot gives us the opportunity to implement preemptive
threading at the VM level, even if it's not available via the OS.

Thinking about coroutines and continuations leads to the conclusion
that you need a segmented stack (duh). But it also makes you wonder
about how that stack interoperates with the threading subsystem. If
there's one stack per thread (obvious) then you're committing to an
overt threading model (user declares threads). 

If the stacks are allowed to fork (which they must, to support even
Damian's generator-style coroutines) then there's the possibility of
supporting a single, unified stack-tree which means that generators
might contain their state across threads, and full coroutines may run
in parallel.

Anyway, I thought the list was too quiet...

=Austin



Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
I've been thinking about closures, continuations, and coroutines, and
one of the interfering points has been threads.
What's the P6 thread model going to be?

As I see it, parrot gives us the opportunity to implement preemptive
threading at the VM level, even if it's not available via the OS.
I think we should consider cooperative threading, implemented using 
continuations.  Yielding to another thread would automatically happen when 
a thread blocks, or upon explicit request by the programmer.

It has many advantages:
1. fast: low task switching overhead and no superfluous task switching
2. no synchronization problems.  locking not needed in most common cases
3. thanks to (2), shared state by default without issues
4. most code will not need any special design to be thread-safe, even when 
it uses globals shared by all threads.
5. no interference with continuations etc, since they're based on it
6. less VM code since an existing mechanism is used, which also means 
less code over which to spread optimization efforts

And optionally if round-robin scheduling is really desired for some odd 
reason (it's not easy to think of a situation) then that can be easily added 
by using a timer of some kind that does a context switch - but you'd regain 
the synchronization problems you have with preemptive threading.

One problem with this threading model is that code that runs a long time 
without blocking or yielding will hold up other threads.  Preventing rude 
code from affecting the system is one of the reasons modern OSes use 
preemptive scheduling.  This problem is obviously much smaller in perl 
scripts however since all of the code is under control of the programmer. 
And if a CPAN module contains rude code, this would be known soon enough. 
(the benefits of Open Source :-)

Another problem is the inability to easily take advantage of symmetrical 
multiprocessing, but this basically only applies to code that does heavy 
computation.

I think if we apply the Huffman principle here by optimizing for the most 
common case, cooperative threading wins over preemptive threading.

People who really want to do SMP should just fork() and use IPC, or use 
the Thread::Preemptive module which *someone* will no doubt write :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Simon Cozens
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
 I think if we apply the Huffman principle here by optimizing for the
 most common case, cooperative threading wins from preemptive threading.

Well, if you optimize for the most common case, throw out threads altogether.

-- 
The bad reputation UNIX has gotten is totally undeserved, laid on by people
 who don't understand, who have not gotten in there and tried anything.
-- Jim Joyce, former computer science lecturer at the University of California


Re: How shall threads work in P6?

2003-03-31 Thread Michael G Schwern
On Mon, Mar 31, 2003 at 08:13:09PM +0200, Matthijs van Duin wrote:
 I think we should consider cooperative threading, implemented using 
 continuations.  Yielding to another thread would automatically happen when 
 a thread blocks, or upon explicit request by the programmer.
 
 It has many advantages:

It has major disadvantages:

I must write my code so each operation only takes a small fraction of time
or I must try to predict when an operation will take a long time and yield
periodically.

Worse, I must trust that everyone else has written their code to the above
spec and has accurately predicted when their code will take a long time.


Cooperative multitasking is essentially syntax sugar for an event loop.  We
already have those (POE, Event, MultiFinder).  They're nice when you don't 
have real, good preemptive threads, but cannot replace them.  It is a great
leap forward to 1987.

The simple reason is that with preemptive threads I don't have to worry 
about how long an operation is going to take and tailor my code to it, 
the interpreter will take care of it for me.  All the other problems with 
preemptive threads aside, that's the killer app.


We need preemptive threads.  We need good support at the very core of the
language for preemptive threads.  perl5 has shown what happens when you
bolt them on both internally and externally.  It is not something we can
leave for later.

Cooperative multitasking, if you really want it, can be bolted on later or
provided as an alternative backend to a real threading system.


-- 
I'm spanking my yacht.


Re: How shall threads work in P6?

2003-03-31 Thread Austin Hastings

--- Dan Sugalski [EMAIL PROTECTED] wrote:
 At 8:13 PM +0200 3/31/03, Matthijs van Duin wrote:
 On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
 I've been thinking about closures, continuations, and coroutines,
 and
 one of the interfering points has been threads.
 
 What's the P6 thread model going to be?
 
 As I see it, parrot gives us the opportunity to implement
 preemptive
 threading at the VM level, even if it's not available via the OS.
 
 I think we should consider cooperative threading, implemented using 
 continuations.  Yielding to another thread would automatically 
 happen when a thread blocks, or upon explicit request by the 
 programmer.
 
 It has many advantages:
 
 And one disadvantage:
 
 Dan doesn't like it. :)
 
 Well, there are actually a lot of disadvantages, but that's the only 
 important one, so it's probably not worth much thought over alternate
 threading schemes for Parrot at least--it's going with an OS-level 
 preemptive threading model.
 
 No, this isn't negotiable.

More information please. 

=Austin



Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 10:50:59AM -0800, Michael G Schwern wrote:
I must write my code so each operation only takes a small fraction of time
or I must try to predict when an operation will take a long time and yield
periodically.
Really.. why?  When you still have computation to be done before you can 
produce your output, why yield?  There are certainly scenarios where you'd 
want each thread to get a fair share of computation time, but if the 
output from all threads is desired, whoever is waiting for them probably 
won't care who gets to do computation first.

Worse, I must trust that everyone else has written their code to the above
spec and has accurately predicted when their code will take a long time.
Both this and the above can easily be solved by a timer event that forces 
a yield.  Most synchronization issues this would introduce can probably be 
avoided by deferring the yield until the next checkpoint determined by 
the compiler (say, a loop iteration).
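
(A rough sketch of that timer-plus-checkpoint idea in plain Perl 5; 
yield_to_scheduler() is a hypothetical hook standing in for the actual thread 
switch, which in a real implementation would be a continuation swap at a 
compiler-emitted checkpoint:)

    use strict;
    use warnings;

    # Hypothetical hook: in a real implementation this would switch to the
    # next runnable cooperative thread via a continuation.
    sub yield_to_scheduler { print "yielding at a safe checkpoint\n" }

    my $yield_requested = 0;

    # The timer only sets a flag; the switch itself is deferred until the
    # next checkpoint - here, the end of a loop iteration.
    $SIG{ALRM} = sub { $yield_requested = 1 };
    alarm 1;

    for my $i (1 .. 50_000_000) {
        # ... long-running computation ...
        next unless $yield_requested;
        $yield_requested = 0;
        yield_to_scheduler();
        alarm 1;               # re-arm the timer for the next time slice
    }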

I think this is a minor problem compared to the hurdles (and overhead!) of 
synchronization.

Cooperative multitasking is essentially syntax sugar for an event loop.
No, since all thread state is saved.  In syntax and semantics they're much 
closer to preemptive threads than to event loops.

We need good support at the very core of the language for preemptive 
threads.  perl5 has shown what happens when you bolt them on both 
internally and externally.  It is not something we can leave for later.
I think perl 6 will actually make it rather easy to bolt it on later.  You 
can use fork(), let the OS handle the details, and use tied variables for 
sharing.  I believe something already exists for this in p5 and is apparently 
faster than ithreads.  I haven't dug into that thing though, maybe it has 
other problems again.  No doubt you'll point 'em out for me ;-)

Cooperative multitasking, if you really want it, can be bolted on later or
provided as an alternative backend to a real threading system.
I agree it can be bolted on later, but so can preemptive threads, probably.  
As Simon pointed out, optimizing for the common case means skipping threads 
altogether for now.

And I resent how you talk about non-preemptive threading as not being real 
threading.  Most embedded systems use tasking/threading models without 
round-robin scheduling, and people who try to move applications that perform 
real-time tasks from MacOS 9 to MacOS X curse the preemptive multitasking 
the latter has.

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 01:58:19PM -0500, Dan Sugalski wrote:
Dan doesn't like it. :)

Well, there are actually a lot of disadvantages, but that's the only 
important one, so it's probably not worth much thought over alternate 
threading schemes for Parrot at least--it's going with an OS-level 
preemptive threading model.
If you can assure me that the hooks will be available in critical routines 
(blocking operations) to allow a proper implementation of cooperative threads 
in a perl module, then that's all the support from the parrot VM I need :-)

I just hope you won't make my non-preemptive-threaded applications slower 
with your built-in support for preemptive threads :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 11:58:01AM -0800, Michael G Schwern wrote:
Off-list since this tastes like it will rapidly spin out of control.
On-list since this is relevant for others participating in the discussion


Classic scenario for threading: GUI.  GUI uses my module which hasn't been
carefully written to be cooperative.  The entire GUI pauses while it waits
for my code to do its thing.  No window updates, no button pushes, no
way to *cancel the operation that's taking too long*.
OK, very true, I was more thinking of something like a server that uses 
a thread for each connection.

Luckily I already mentioned that automatic yielding is not too hard.  A 
timer that sets a "yield asap" flag that's tested at each iteration of a loop 
should work - maybe something with even less overhead can be cooked up.


I hope this is not a serious suggestion to implement preemptive threads
using fork() and tied vars.  That way ithreads lie.
Actually, ithreads are slower because they don't do copy-on-write while 
the OS usually does.

fork() moves the problem to the OS, where bright people have already spent 
a lot of time optimizing things, I hope at least ;)

I suppose how much faster it is to do things within the VM rather than 
using forked processes depends on how much IPC happens.  In your GUI 
example, the answer is: very little, only status updates.

The existing system you probably mean is POE
No, I wasn't.  I looked it up, it's called forks.

Besides, it would be silly as Dan has already said Parrot will support
preemptive multitasking and that's half the hard work done.  The other 
half is designing a good language gestalt around them.
OK, as long as it doesn't hurt performance of non-threaded apps I 
obviously have no problem with *supporting* preemptive threading, since 
they're certainly useful for some applications.  But coop threads are 
more useful in the general case - especially since they're simpler to use 
thanks to the near-lack of synchronization problems.  Simplicity is good, 
especially in a language like perl.


And I resent how you talk about non-preemptive threading as not being 
real threading.  
My biases come from being a MacOS user since System 6.  MultiFinder
nightmares.
Valid point (I'm also a long-time MacOS user), but cooperative multitasking 
isn't the same as cooperative threading.  We're talking about the scheduling 
between threads inside one process; and we can avoid the lockup problem in 
the VM with automatic yielding.

This makes most of the problems of cooperative threading disappear, while 
leaving the advantages intact.

If we want to support real-time programming in Perl
No, I was merely pointing out that it's not always a step forward for all 
applications.  Some people made good use of the ability to grab all the 
CPU time they needed on old MacOS.

None of this precludes having a cooperative threading system, but we
*must* have a preemptive one.
must is a big word; people happily used computers a long time before any 
threading was used ;-)

It looks like we could use both very well though

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 07:21:03PM +0100, Simon Cozens wrote:
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
I think if we apply the Huffman principle here by optimizing for the
most common case, cooperative threading wins over preemptive threading.
Well, if you optimize for the most common case, throw out threads altogether.
Well, I would almost agree with you, since cooperative threading can almost 
entirely be done in perl code, given that it can be built on continuations.  I 
actually gave an example of that earlier.

The only thing is that blocking system operations like file-descriptor 
operations need some fiddling.  (first try the operation on a non-blocking 
fd; if that fails, block the thread and yield to another; if all threads are 
blocked, do a select() or kqueue() or something similar over all fds on 
which threads are waiting)
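
(A sketch of that fiddling, assuming the handle has already been put into 
non-blocking mode; park_current_thread_on() is a hypothetical scheduler hook 
meaning "block this thread and yield", and the scheduler itself would do the 
select()/kqueue() once every thread is parked:)

    use strict;
    use warnings;
    use Errno qw(EAGAIN EWOULDBLOCK);

    # Hypothetical hook into the cooperative scheduler: park the current
    # thread until $fh becomes readable and run another thread meanwhile.
    sub park_current_thread_on { my ($fh) = @_; die "scheduler hook not wired up" }

    # A read that never blocks the whole interpreter: try the non-blocking
    # read first, and only yield when the fd really has nothing yet.
    sub coop_read {
        my ($fh, $len) = @_;
        while (1) {
            my $n = sysread($fh, my $buf, $len);
            return $buf if defined $n;       # data arrived (or '' on EOF)
            die "read error: $!" unless $! == EAGAIN || $! == EWOULDBLOCK;
            park_current_thread_on($fh);     # resumed once the fd is readable
        }
    }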

If the hooks exist to handle this in a perl module, then I think we can 
skip the issue mostly, except maybe the question of what to include with perl 
in the default installation.

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Dan Sugalski
At 8:13 PM +0200 3/31/03, Matthijs van Duin wrote:
On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
I've been thinking about closures, continuations, and coroutines, and
one of the interfering points has been threads.
What's the P6 thread model going to be?

As I see it, parrot gives us the opportunity to implement preemptive
threading at the VM level, even if it's not available via the OS.
I think we should consider cooperative threading, implemented using 
continuations.  Yielding to another thread would automatically 
happen when a thread blocks, or upon explicit request by the 
programmer.

It has many advantages:
And one disadvantage:

Dan doesn't like it. :)

Well, there are actually a lot of disadvantages, but that's the only 
important one, so it's probably not worth much thought over alternate 
threading schemes for Parrot at least--it's going with an OS-level 
preemptive threading model.

No, this isn't negotiable.
--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


RFC 178 (v5) Lightweight Threads

2000-09-26 Thread Perl6 RFC Librarian

This and other RFCs are available on the web at
  http://dev.perl.org/rfc/

=head1 TITLE

Lightweight Threads

=head1 VERSION

  Maintainer: Steven McDougall [EMAIL PROTECTED]
  Date: 30 Aug 2000
  Last Modified: 26 Sep 2000
  Mailing List: [EMAIL PROTECTED]
  Number: 178
  Version: 5
  Status: Frozen

=head1 ABSTRACT

A lightweight thread model for Perl.

=over 4

=item *

All threads see the same compiled subroutines

=item *

All threads share the same global variables

=item *

Threads can create thread-local storage by C<local>izing global variables

=item *

All threads share the same file-scoped lexicals

=item *

Each thread gets its own copy of block-scoped lexicals upon execution
of C<my>

=item *

Threads can share block-scoped lexicals by passing a reference to a
lexical into a thread, by declaring one subroutine within the scope of
another, or with closures.

=item *

Open code can only be executed by a thread that compiles it

=item * 

The language guarantees atomic data access. Everything else is the
user's problem.

=back 

=over 4

=item Perl

Swiss-army chain saw

=item Perl with threads

juggling chain saws

=back

=head1 CHANGES

=head2 v5

Frozen

=head2 v4

=over 4

=item *

Traded in data coherence for L<Atomic data access>. Added examples 16 and
17. 

=item *

Traded in Primitive operations for L<Locking>

=item *

Dropped L</local> section

=item *

Revised L</Performance> section

=back

=head2 v3

=over 4

=item *

Simplified example 9

=item *

Added L/Performance section

=back

=head2 v2

=over 4

=item *

Added section on sharing block-scoped lexicals between threads

=item *

Added examples 9, 10, and 11. (N.B. renumbered following examples)

=item *

Fixed some typos

=back


=head1 FROZEN

There was substantial--if somewhat disjointed--discussion of thread
models on perl6-internals. The consensus among those with internals
experience is that this RFC shares too much data between threads, and
that the CPU cost of acquiring a lock for every variable access will
be prohibitive.

Dan Sugalski discussed some of the tradeoffs and sketched an alternate
threading model at

http://www.mail-archive.com/perl6-internals%40perl.org/msg01272.html

however, this has not been submitted as an RFC.


=head1 DESCRIPTION

The overriding design principle in this model is that there is one
program executing in multiple threads. One body of code; one set of
global variables; many threads of execution. I like this model because

=over 4

=item *

I understand it

=item *

It does what I want

=item *

I think it can be implemented

=back


=head2 Notation

=over 4

=item I<main> and I<spawned> threads

We'll call the first thread that executes in a program the I<main>
thread. It isn't distinguished in any other way. All other threads are
called I<spawned> threads.

=item I<open code>

Code that isn't contained in a BLOCK.

=back

Examples are written in Perl5, and use the thread programming model
documented in C<Thread.pm>. Discussions of performance and
implementation are based on the Perl5 internals; obviously, these are
subject to change.


=head2 All threads see the same compiled subroutines

Subroutines are typically defined during the initial compilation of a
program. C<use>, C<require>, C<do>, and C<eval> can later define
additional subroutines or redefine existing ones. Regardless, at any
point in its execution, a program has one and only one collection of
defined subroutines, and all threads see this collection.

Example 1

sub foo  { print 1 }
sub hack_foo { eval 'sub foo { print 2 }' }
foo();
Thread->new(\&hack_foo)->join;
foo();

Output: 12. The main thread executes C<foo>; the spawned thread
redefines C<foo>; the main thread executes the redefined subroutine.


Example 2

sub foo  { print 1 }
sub hack_foo { eval 'sub foo { print 2 }' }
foo();
Thread->new(\&hack_foo);
foo();

Output: 11 or 12, according as the main thread does or does not make
the second call to C<foo()> before the spawned thread redefines it. If
the user cares which happens first, then they are responsible for
doing their own synchronization, for example, with C<join>, as shown
in Example 1.

Code refs (like all Perl data objects) are reference counted. Threads
increment the reference count upon entry to a subroutine, and
decrement it upon exit. This ensures that the op tree won't be garbage
collected while the thread is executing it.


=head2 All threads share the same global variables

Example 3

#!/my/path/to/perl
$a = 1;
Thread->new(\&foo)->join;
print $a;

sub foo { $a++ }

Output: 2. C<$a> is a global, and it is the I<same> global in both the
main thread and the spawned thread.


=head2 Threads can create thread-local storage by C<local>izing global
variables

Example 4

#!/my/path/to/perl
$a = 1;
Thread->new(\&foo);
print $a;

sub foo { local $a = 2 }

Output: 1. The spawned thread gets its own copy of C<$a>. The copy of
C<$a> in the main thread is unaffected

RFC 178 (v2) Lightweight Threads

2000-09-04 Thread Perl6 RFC Librarian

This and other RFCs are available on the web at
  http://dev.perl.org/rfc/

=head1 TITLE

Lightweight Threads

=head1 VERSION

  Maintainer: Steven McDougall [EMAIL PROTECTED]
  Date: 30 Aug 2000
  Last Modified: 02 Sep 2000
  Version: 2
  Mailing List: [EMAIL PROTECTED]
  Number: 178
  Status: Developing

=head1 ABSTRACT

A lightweight thread model for Perl.

=over 4

=item *

All threads see the same compiled subroutines

=item *

All threads share the same global variables

=item *

Threads can create thread-local storage by C<local>izing global variables

=item *

All threads share the same file-scoped lexicals

=item *

Each thread gets its own copy of block-scoped lexicals upon execution
of C<my>

=item *

Threads can share block-scoped lexicals by passing a reference to a
lexical into a thread, by declaring one subroutine within the scope of
another, or with closures.

=item *

Open code can only be executed by a thread that compiles it

=item *

The interpreter guarantees data coherence

=back 

=head1 DESCRIPTION

The overriding design principle in this model is that there is one
program executing in multiple threads. One body of code; one set of
global variables; many threads of execution. I like this model because

=over 4

=item *

I understand it

=item *

it does what I want

=item *

I think it can be implemented

=back


=head2 Notation

=over 4

=item I<main> and I<spawned> threads

We'll call the first thread that executes in a program the I<main>
thread. It isn't distinguished in any other way. All other threads are
called I<spawned> threads.

=item I<open code>

Code that isn't contained in a BLOCK.

=back

Examples are written in Perl5, and use the thread programming model
documented in C<Thread.pm>. Discussions of performance and
implementation issues are all based on the Perl5 internals; obviously,
these are subject to change.


=head2 All threads see the same compiled subroutines

Subroutines are typically defined during the initial compilation of a
program. C<use>, C<require>, C<do>, and C<eval> can later define
additional subroutines or redefine existing ones. Regardless, at any
point in its execution, a program has one and only one collection of
defined subroutines, and all threads see this collection.

Example 1

sub foo  { print 1 }
sub hack_foo { eval 'sub foo { print 2 }' }
foo();
Thread->new(\&hack_foo)->join;
foo();

Output: 12. The main thread executes C<foo>; the spawned thread
redefines C<foo>; the main thread executes the redefined subroutine.


Example 2

sub foo  { print 1 }
sub hack_foo { eval 'sub foo { print 2 }' }
foo();
Thread->new(\&hack_foo);
foo();

Output: 11 or 12, according as the main thread does or does not make
the second call to C<foo()> before the spawned thread redefines it. If
the user cares which happens first, then they are responsible for
doing their own synchronization, for example, with C<join>, as shown
in Example 1.

Code refs (like all Perl data objects) are reference counted. Threads
increment the reference count upon entry to a subroutine, and
decrement it upon exit. This ensures that the op tree won't be garbage
collected while the thread is executing it.


=head2 All threads share the same global variables

Example 3

#!/my/path/to/perl
$a = 1;
Thread->new(\&foo)->join;
print $a;

sub foo { $a++ }

Output: 2. C<$a> is a global, and it is the I<same> global in both the
main thread and the spawned thread.


=head2 Threads can create thread-local storage by C<local>izing global
variables

Example 4

#!/my/path/to/perl
$a = 1;
Thread->new(\&foo);
print $a;

sub foo { local $a = 2 }

Output: 1. The spawned thread gets its own copy of C<$a>. The copy of
C<$a> in the main thread is unaffected. It doesn't matter whether the
assignment in C<foo> executes before or after the C<print> in the main
thread. It doesn't matter whether the copy of C<$a> goes out of scope
before or after the C<print> executes.


As in Perl5, C<local>ized variables are visible to any subroutines
called while they remain in scope.

Example 5

#!/my/path/to/perl
$a = 1;
Thread->new(\&foo);
bar();

sub foo 
{ 
local $a = 2;
bar();
}

sub bar { print $a }

Output: 12 or 21, depending on the order in which the calls to C<bar>
execute.


Dynamic scopes are not inherited by spawned threads.

Example 6

#!/my/path/to/perl
$a = 1;
foo();
 
sub foo 
{ 
local $a = 2;
Thread->new(\&bar)->join;
}

sub bar { print $a }

Output: 1. The spawned thread sees the original value of C<$a>.


=head2 All threads share the same file-scoped lexicals

Example 7

#!/my/path/to/perl
my $a = 1;
Thread->new(\&foo)->join;
print $a;

sub foo { $a = 2 }

Output: 2. C<$a> is a file-scoped lexical. It is the same variable in
both the main thread and the spawned thread.


=head2 Each thread gets its own copy of block-scoped lexicals upon
execution

Re: RFC 178 (v1) Lightweight Threads

2000-09-03 Thread Steven W McDougall

 There is a fundamental issue on how values are passed between
 threads. Does the value leave one thread and enter the other, or are
 they shared?

 The idea tossed around -internals was that a value that crosses a thread
 boundary would have a wrapper/proxy attached to handle the mediation.

 The mediation would be activated only if the value is passed via a
 shared variable. In your case the shared variable is the argument
 being passed through the thread creation call.
[...]
 If we don't require a :shared on a variable, anything and everything
 has to have protection. If you want the variable to be shared,
 declare it.
[...]
 Aha, I get it. -internals has been assuming that one _must_ specify
 the sharing. You want it to be inferred.

 I think that's asking for too much DWIMery.


Question: Can the interpreter determine when a variable becomes
shared?

Answer: No. Then neglecting to put a :shared attribute on a shared
variable will crash the interpreter. This doesn't seem very Perlish.

Answer: Yes. Then the interpreter can take the opportunity to install
a mutex on the variable.
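
(For what it's worth, Perl 5's ithreads later settled on something close to
the first branch, but gentler than a crash: sharing must be declared with
:shared, and a variable that isn't declared shared is silently cloned per
thread rather than causing an error.  A sketch of that behaviour:)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $counter :shared = 0;   # sharing declared explicitly
    threads->create(sub { lock($counter); $counter++ })->join;
    print "$counter\n";        # 1 - the child's increment is visible

    my $plain = 0;             # no :shared - each thread gets its own copy
    threads->create(sub { $plain++ })->join;
    print "$plain\n";          # still 0 in the spawning thread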


- SWM



Re: RFC 178 (v1) Lightweight Threads

2000-09-02 Thread Chaim Frenkel

 "SWM" == Steven W McDougall [EMAIL PROTECTED] writes:

 Not unless it is so declared my $a :shared.

SWM Sure it is.
SWM Here are some more examples.

SWM Example 1: Passing a reference to a block-scoped lexical into a thread.

Depends on how locking/threading is designed. There is a fundamental issue
on how values are passed between threads. Does the value leave one thread
and enter the other, or are they shared?

The idea tossed around -internals was that a value that crosses a thread
boundary would have a wrapper/proxy attached to handle the mediation.

The mediation would be activated only if the value is passed via a
shared variable. In your case the shared variable is the argument
being passed through the thread creation call.


SWM Example 2: Declaring one subroutine within the scope of another

If we don't require a :shared on a variable, anything and everything
has to have protection. If you want the variable to be shared,
declare it.


SWM Example 3: Closures (Ken's example)

Aha, I get it. -internals has been assuming that one _must_ specify
the sharing. You want it to be inferred.

I think that's asking for too much DWIMery.

chaim
-- 
Chaim Frenkel    Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: RFC 178 (v1) Lightweight Threads

2000-08-31 Thread Steven W McDougall

 The more interesting case is this:
 
 #!/my/path/to/perl
 sub foo_generator { my $a = shift; sub { print $a++ } }
 my $foo = foo_generator(1);
 $foo->();
 Thread->new($foo);

 Is $a shared between threads or not? 

$a is shared between threads.
The anonymous subroutine is a closure. 
Closures work across threads.


 IMHO the rule is not as simple as this RFC states. (Partly because of
 confusion about "executing" my.)

It is almost as simple. I should add an example (like this one)
showing the behavior of closures. 


 Perhaps Thread->new should deep copy the code ref before executing
 it? Deep copy lexicals but not globals? Deep copy anything that doesn't
 already have a mutex lock?

no no no...perlref.pod, Making References, item 4 says

A reference to an anonymous subroutine can be created by using sub
without a subname:

 $coderef = sub { print "Boink!\n" };

 ...no matter how many times you execute that particular line
(unless you're in an eval("...")), $coderef will still have a
reference to the same anonymous subroutine.)


We can, and should, retain this behavior.


- SWM



Re: Are Perl6 threads preemptive or cooperative?

2000-08-28 Thread Dan Sugalski

At 12:11 PM 8/28/00 -0500, David L. Nicol wrote:

What if every subroutine tagged itself with a list of the globals
it uses, so a calling routine would know to add those to the list
of globals it wants locked?

If you're looking for automagic locking of variables, you're treading deep 
into "Interesting Research Problem" territory (read: Solve the Halting 
Problem and win a prize!) if you want it to not deadlock all over the place.

Been there. Tried that. Backed away *real* slowly... :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Are Perl6 threads preemptive or cooperative?

2000-08-27 Thread Steven W McDougall

 That a user may need to have two or more variables in sync for proper
 operation. And cooperative threads don't address that issue.

 Cooperative only helps _perhaps_ with perl not needing to protect its
 own structures.

We are in agreement.

I was specifically addressing the problem of protecting internal
interpreter data structures, and specifically ignoring the problem of
synchronizing user access to data.


- SWM



Re: Threads and run time/compile time

2000-08-27 Thread Chaim Frenkel

I wish I knew why you are discussing an -internals issue on this list.
You should be specifying behaviour, not how it is implemented. A mention
of implementation is reasonable, but _don't_ spend too much time on it. If Larry
wants it, -internals will give it to him.

Anyway, please recall that because of threading concerns, the final
internal form of any compiled piece of code will be as immutable as
possible, so that if another thread needs to reslurp a module, the
compiled form will be available.

Of course, some initializations would have to be rerun, but that is
minor compared to the other costs.

Remember, specify _as if_ it would do X. -internals will make it so,
as fast as possible. 

chaim
(Of course some requests will not be doable and some revisiting will
have to be performed, but the first cut should not be too concerned.)

 "SWM" == Steven W McDougall [EMAIL PROTECTED] writes:

SWM Based on your examples, I have to assume that you are serious about
SWM RFC1v3 item 6:

SWM 6. Modules should be loaded into the thread-global space when used
SWM[...]
SWMSubsequent threads should then reslurp these modules back in on 
SWMtheir start up.
SWM[...]
SWMeach thread needs to reuse the original modules upon creation.
SWM[...]
SWMThis, of course, could lead to massive program bloat


SWM This is a non-starter for me. Right now, I am working on apps that may
SWM create 10 threads per *second*. I cannot possibly get the performance
SWM I need if every thread has to recompile all its own modules.

SWM We could either discuss alternate approaches for RFC1, or I could
SWM submit a new RFC for a thread architecture that gives me the
SWM performance I want.
-- 
Chaim Frenkel    Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: Are Perl6 threads preemptive or cooperative?

2000-08-25 Thread Markus Peter

--On 25.08.2000 20:02 Uhr -0400 Steven W McDougall wrote:

 Others have pointed out that code inside sub-expressions and blocks
 could also assign to our variables. This is true, but it isn't our
 problem. As long as each assignment is carried out correctly by the
 interpreter, then each variable always has a valid value, computed
 from other valid values.

Depends on who 'our' is: if you're an internals guy you need not care, 
that's true, but as a user of the language you care about how much sync-ing 
you have to do yourself in your perl code - the preemptive vs. cooperative 
discussion is valid there as well, though it would probably be good to 
separate these discussions :-)

-- 
Markus Peter - SPiN GmbH
[EMAIL PROTECTED]




Threads and file-scoped lexicals

2000-08-25 Thread Steven W McDougall

Do separate threads 
- all see the same file-scoped lexicals
- each get their own file-scoped lexicals


#!/usr/local/bin/perl

use Threads;

my $a = 0;

my $t1 = Thread->new(\&inc_a);
my $t2 = Thread->new(\&inc_a);

$t1->join;
$t2->join;

print "$a";

sub inc_a
{
$a++;
}

What should the output be? 0? 1? 2?


- SWM



Threads and run time/compile time

2000-08-25 Thread Steven W McDougall

RFC1v3 says

5. Threads are a runtime construct only. Lexical scoping and
compile issues are independent of any thread boundaries. The
initial interpretation is done by a single thread. use Threads may
set up the thread constructs, but threads will not be spawned
until runtime.

However, the distinction between compile time and run time that it
relies on doesn't exist in Perl. For example, if we chase through
perlfunc.pod a bit, we find

use Module;

is exactly equivalent to

BEGIN { require Module; import Module; }
and
require Module;

locates Module.pm and does a

do Module.pm

which is equivalent to

scalar eval `cat Module.pm`;

and eval is documented as

 eval EXPR
   
 the return value of EXPR is parsed and
 executed as if it were a little Perl program.


Users can (and do) write open code in modules.
There is nothing to prevent users from writing modules like

# Module.pm
use Threads;

Thread->new(\&foo);

sub foo { ... }


Users can also write their own BEGIN blocks to start threads before
the rest of the program has been compiled

sub foo { ... }

BEGIN { Thread->new(\&foo) }


Going in the other direction, users can write

require Foo.pm

or even

eval "sub foo { ... }";

to compile code after the program (and other threads) have begun execution.


Given all this, I don't think we can sequester thread creation into
"run time". We need a model that works uniformly, no matter when
threads are created and run.


- SWM






Re: RFC 86 (v1) IPC Mailboxes for Threads and Signals

2000-08-11 Thread Uri Guttman

 "NI" == Nick Ing-Simmons [EMAIL PROTECTED] writes:

  NI Uri Guttman [EMAIL PROTECTED] writes:

   i think we do because a thread can block on a mailbox while it can't on
   an array. 

  NI Why not ? - I see no reason that a "shared" array could not have 
  NI whatever-it-takes to allow blocking.

then every op that could touch an array has to have code to support
blocking. i think that would be a mess. and what is the definition of
blocking on an array, is it empty? can i pop or shift it? how do you
handle atomicity? how do you specify a non-blocking access (poll) on an array?

mailboxes are defined to work fine with those requirements. a get on a
mailbox will block until one item is retrieved and that is an atomic
operation. a get can be made non-blocking (polling) with an optional
argument.
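
(for a feel of these semantics in later perl5 terms: Thread::Queue behaves
much like such a mailbox - dequeue is the blocking, atomic get, and
dequeue_nb is the polling form.  a sketch, not a proposal for the perl6
interface:)

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $mailbox = Thread::Queue->new;

    # producer: put work into the mailbox
    my $producer = threads->create(sub {
        $mailbox->enqueue($_) for 1 .. 5;
        $mailbox->enqueue(undef);        # sentinel: no more work
    });

    # consumer: dequeue blocks until an item arrives; dequeue_nb would poll
    my $consumer = threads->create(sub {
        while (defined(my $item = $mailbox->dequeue)) {
            print "got $item\n";
        }
    });

    $producer->join;
    $consumer->join;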

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture, Software Engineering, Perl, Internet, UNIX Consulting
The Perl Books Page  ---  http://www.sysarch.com/cgi-bin/perl_books
The Best Search Engine on the Net  --  http://www.northernlight.com



Re: RFC 86 (v1) IPC Mailboxes for Threads and Signals

2000-08-11 Thread Uri Guttman

 "DS" == Dan Sugalski [EMAIL PROTECTED] writes:

  DS Nope. The code that accessses the array needs to support it. Different 
  DS animal entirely. The ops don't actually need to know.

but still that is overhead code for all arrays and not just the mailbox
ones.

  DS s/mailboxes/filehandles/;

  DS If we're talking a generic communication pipe between things, we
  DS should overload the filehandle. It's a nice construct that
  DS provides an ordered, serialized, blockable, pollable
  DS communications channel with well-defined behavior and a
  DS comfortable set of primitives to operate on it.

pollable is a good thing. some mailbox designs are not pollable and some
are. i like the idea of supporting polling; then you can also have
callbacks. but this does imply an implementation, as semaphores and
shared memory are not pollable. you would have to build this with pipes
and filehandles.

overlaying it on filehandles is another question. i would like to see a
single operation which does an atomic lock, block, retrieve, unlock. we
don't have that for filehandles.  you could use a new method on that
special handle (i like 'get') which has the desired semantics.

i think making mailboxes in some form is a good idea. but they should be
special objects (even if they are filehandles) with their own methods to
support the desired semantics.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture, Software Engineering, Perl, Internet, UNIX Consulting
The Perl Books Page  ---  http://www.sysarch.com/cgi-bin/perl_books
The Best Search Engine on the Net  --  http://www.northernlight.com



Re: RFC 86 (v1) IPC Mailboxes for Threads and Signals

2000-08-11 Thread Dan Sugalski

At 12:54 PM 8/11/00 -0400, Uri Guttman wrote:
  "DS" == Dan Sugalski [EMAIL PROTECTED] writes:

   DS Nope. The code that accessses the array needs to support it. Different
   DS animal entirely. The ops don't actually need to know.

but still that is overhead code for all arrays and not just the mailbox
ones.

Nope. Just for the shared ones.

   DS s/mailboxes/filehandles/;

   DS If we're talking a generic communication pipe between things, we
   DS should overload the filehandle. It's a nice construct that
   DS provides an ordered, serialized, blockable, pollable
   DS communications channel with well-defined behavior and a
   DS comfortable set of primitives to operate on it.

pollable is a good thing. some mailbox designs are not pollable and some
are. i like the idea of supporting polling; then you can also have
callbacks. but this does imply an implementation, as semaphores and
shared memory are not pollable. you would have to build this with pipes
and filehandles.

So? Inter-thread communication is almost undoubtedly not going to be built 
with something as heavyweight as pipes, shm, or mailboxes, so I don't see 
their limitations as relevant here. Regardless, don't design to the 
limitations of one particular implementation method. We can work around 
their limits if need be.

overlaying it on filehandles is another question. i would like to see a
single operation which does an atomic lock, block, retrieve, unlock. we
don't have that for filehandles.  you could use a new method on that
special handle (i like 'get') which has the desired semantics.

So we enhance filehandles to make reads on them atomic.  <$fh> does an atomic 
read on the filehandle. NBD.

i think making mailboxes in some form is a good idea. but they should be
special objects (even if they are filehandles) with their own methods to
support the desired semantics.

Overload filehandles. They really are a good fit for what you're looking for.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: RFC 86 (v1) IPC Mailboxes for Threads and Signals

2000-08-10 Thread Uri Guttman

 "CF" == Chaim Frenkel [EMAIL PROTECTED] writes:

  CF How does this look different from an inter-thread visible array
  CF treated as a queue?

  CF   Thread A
  CF   push(@workqueue, $val)

  CF   Thread B
  CF   $val = pop(@workqueue)

  CF Where accessing the global variable is guaranteed by perl to be atomic.

  CF (i.e. Do we need another construct?)

i think we do because a thread can block on a mailbox while it can't on
an array. also the mailbox idea allows delivery of signals and other
asynch callbacks so it serves double duty. the idea is that the core
manages it instead of the application. it has a builtin mutex so you
don't have to use one or declare something shared. it does not stop you
from sharing stuff but it provides a core-level interface for
communications.
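
(by way of contrast, here is roughly what the bare shared-array-as-queue
approach asks of the user, sketched with the threads::shared primitives
perl5 later shipped - the explicit lock/cond_wait bookkeeping is exactly
what a core-managed mailbox would hide:)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my @workqueue :shared;

    # thread A: atomically append one item and wake any waiter
    my $producer = threads->create(sub {
        lock(@workqueue);               # the user supplies the atomicity
        push @workqueue, 42;
        cond_signal(@workqueue);
    });

    # thread B: block until the array has something, then take it
    my $consumer = threads->create(sub {
        lock(@workqueue);
        cond_wait(@workqueue) until @workqueue;
        my $val = shift @workqueue;
        print "got $val\n";
    });

    $producer->join;
    $consumer->join;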

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture, Software Engineering, Perl, Internet, UNIX Consulting
The Perl Books Page  ---  http://www.sysarch.com/cgi-bin/perl_books
The Best Search Engine on the Net  --  http://www.northernlight.com