Re: SV: Parrot multithreading?

2001-10-01 Thread Dan Sugalski

At 04:15 PM 9/30/2001 -0400, Sam Tregar wrote:
On Sun, 30 Sep 2001, Nick Ing-Simmons wrote:

  The main problem with perl5 and threads is that threads are an 
 afterthought.

Which, of course, also goes for UNIX and threads and C and threads.
It's good for us to be thinking about as early as possible but it's no
guarantee that there won't be big problems anyway.  Extensions in
C come to mind...

If they follow the rules, things'll be fine. We'll make sure it's all laid 
out clearly.

Has anything come down from the mountain about the future of XS in Perl 6?
Speaking of which, what's taking Moses so long?

Work, life... y'know, the standard stuff. :)

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-10-01 Thread Dan Sugalski

At 09:23 AM 10/1/2001 -0400, Michael Maraist wrote:
   Just because parrot knows what functions can croak, it doesn't mean
   that it can possibly know which locks have been taken out all the
   way back up the stack between the call to longjmp and the
   corresponding setjmp. And, under your scheme we would potentially
   end up with two copies of every utility function - one croak_safe
   and one croak_unsafe.
 
  Not very likely - the only reason I can find for most
  utility functions (other than possibly string coercions) to
  fail is either panic(out of memory!) or panic(data
  structures hopelessly confused!) (or maybe panic(mutexes
  not working!)) - anything likely to throw a programmatic
  exception would be at the opcode level, and so not be open
  to being called by random code.

The perl6 high-level description currently suggests that op-codes can 
theoretically
be written in perl.  Perhaps these are only second-class op-codes
(switched off a single user-defined-op-code), but that suggests that
the good ole die/croak functionality will be desired.

Sure, but that's no problem. Things should propagate up those code streams 
the way they do any other.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: SV: Parrot multithreading?

2001-09-29 Thread Michael Maraist

  or have entered a mutex,

 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

General perl6 code is not going to be able to prevent someone from
calling code that in turn calls XS-code.  Heck, most of what you do in
perl involves some sort of function call (such as stringifying).

Whatever solution is found will probably have to deal with exceptions /
events raised while a mutex is held.


That said, there's no reason why we can't have _all_ signal handler
code be:

void sig_handler:
  interp->signal = X;

This would just require some special handling within XS I suspect.
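
Roughly, in C (an illustrative sketch only: handle_signal_event and
do_one_op are made-up names, not Parrot API, and a file-scope
sig_atomic_t stands in for the interp->signal field above):

#include <signal.h>

typedef long opcode_t;                  /* placeholder types */
typedef struct interp interp_t;

extern void handle_signal_event(interp_t *interp, int sig);   /* made-up */
extern opcode_t *do_one_op(interp_t *interp, opcode_t *pc);   /* made-up */

/* The handler does nothing async-unsafe: it only records the signal. */
static volatile sig_atomic_t pending_signal = 0;

static void sig_handler(int sig)
{
    pending_signal = sig;
}

void install_handler(void)
{
    signal(SIGINT, sig_handler);
}

/* The op loop notices the flag between ops and turns it into an ordinary
   event, where taking locks or throwing exceptions is safe again. */
void run_ops(interp_t *interp, opcode_t *pc)
{
    while (pc) {
        if (pending_signal) {
            handle_signal_event(interp, pending_signal);
            pending_signal = 0;
        }
        pc = do_one_op(interp, pc);
    }
}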

-Michael




Re: SV: Parrot multithreading?

2001-09-29 Thread Benjamin Stuhl

--- Alan Burlison [EMAIL PROTECTED] wrote:
 
   or have entered a mutex,
  
  If they're holding a mutex over a function call without
 a
  _really_ good reason, it's their own fault.
 
 Rubbish.  It is common to take out a lock in an outer function and then
 to call several other functions under the protection of the lock.

Let me be more specific: if you're holding a mutex over a
call back into parrot, it's your own fault. Parrot itself
knows which functions may croak() and which won't, so it
can use utility functions that return a status in places
where it'd be unsafe to croak(). (And true panics probably
should not be croak()s the way they are in perl5 - there's
not much an application can do with "Bizarre copy of
ARRAY".)
 
The alternative is that _every_ function simply return a status, which
is fundamentally expensive (your real retval has to be an out
parameter, to start with).
 
 Are we talking 'expensive in C' or 'expensive in parrot?'

Expensive in C (wasted memory bandwidth, code bloat -
cache waste), which translates to a slower parrot.

  It is also slow, and speed is priority #1.
 
 As far as I'm aware, trading correctness for speed is not
 an option.

This is true, which is why I asked if there were any
platforms that have a nonfunctional (set|long)jump.

-- BKS

__
Do You Yahoo!?
Listen to your Yahoo! Mail messages from any phone.
http://phone.yahoo.com



Re: SV: Parrot multithreading?

2001-09-28 Thread David M. Lloyd

On Fri, 28 Sep 2001, Alan Burlison wrote:

 Arthur Bergman wrote:

  longjmp in a controlled fashion isn't thread-safe? Or longjmping while
  holding mutexs and out from asynchronous handlers is not thread-safe?

 Arthur It *may* be possible to use longjmp in threaded programs in a
 restricted fashion on some platforms.  However if you use it on
 Solaris, for example, where we don't commit to it being thread-safe
 and it breaks - tough.  This includes breakage introduced by either
 new patches or new OS releases, as we haven't committed to it being
 thread-safe in the first place.

This raises another issue:  Is the Perl_croak() thing going to stay
around?  As far as I can tell, this uses siglongjmp.  I personally can't
think of any other way to do this type of exception handling in C, so
either we don't use croak(), find another way to do it, or just deal with
the potential problems.
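
For reference, the mechanism being discussed is essentially this (a
stripped-down sketch, not the actual Perl_croak code; perl5 itself goes
through siglongjmp as noted above):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf top_env;                  /* perl5 keeps this per interpreter */

static void my_croak(const char *msg)
{
    fprintf(stderr, "%s\n", msg);
    longjmp(top_env, 1);                 /* unwind straight back to setjmp() */
}

static void six_levels_down(void)
{
    my_croak("Bizarre copy of ARRAY");   /* deep inside C code */
}

int main(void)
{
    if (setjmp(top_env) != 0) {
        fprintf(stderr, "caught\n");     /* the "exception handler" */
        return 1;
    }
    six_levels_down();
    return 0;
}

Nothing between the longjmp and the setjmp gets a chance to run, which is
where the lock and cleanup worries elsewhere in this thread come from.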

- D

[EMAIL PROTECTED]




Re: SV: Parrot multithreading?

2001-09-28 Thread Dan Sugalski

At 01:03 PM 9/28/2001 -0500, David M. Lloyd wrote:
On Fri, 28 Sep 2001, Alan Burlison wrote:

  Arthur Bergman wrote:
 
   longjmp in a controlled fashion isn't thread-safe? Or longjmping while
   holding mutexs and out from asynchronous handlers is not thread-safe?
 
  Arthur It *may* be possible to use longjmp in threaded programs in a
  restricted fashion on some platforms.  However if you use it on
  Solaris, for example, where we don't commit to it being thread-safe
  and it breaks - tough.  This includes breakage introduced by either
  new patches or new OS releases, as we haven't committed to it being
  thread-safe in the first place.

This raises another issue:  Is the Perl_croak() thing going to stay
around?  As far as I can tell, this uses siglongjmp.  I personally can't
think of any other way to do this type of exception handling in C, so
either we don't use croak(), find another way to do it, or just deal with
the potential problems.

Croak's going to throw an interpreter exception. There's a little bit of 
documentation about the exception handling opcodes in 
docs/parrot_assembly.pod, with more to come soonish.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-28 Thread Benjamin Stuhl

Thus did the Illustrious Dan Sugalski [EMAIL PROTECTED]
write:
 Croak's going to throw an interpreter exception. There's
 a little bit of 
 documentation about the exception handling opcodes in 
 docs/parrot_assembly.pod, with more to come soonish.

This is fine at the target language level (e.g. perl6,
python, jako, whatever), but how do we throw catchable
exceptions up through six or eight levels of C code?
AFAICS, this is more of why perl5 uses the JMP_BUF stuff -
so that XS and functions like sv_setsv() can Perl_croak()
without caring about who's above them in the call stack.
The alternative is that _every_ function simply return a
status, which is fundamentally expensive (your real retval
has to be an out parameter, to start with).
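
The status-return alternative he's describing looks roughly like this
(illustrative only; these are not proposed Parrot signatures):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { STATUS_OK, STATUS_NO_MEMORY } status_t;

/* The real result comes back through an out parameter, so that the
   return slot can carry the status code instead. */
static status_t str_concat(const char *a, const char *b, char **out)
{
    char *buf = malloc(strlen(a) + strlen(b) + 1);
    if (!buf)
        return STATUS_NO_MEMORY;
    strcpy(buf, a);
    strcat(buf, b);
    *out = buf;
    return STATUS_OK;
}

int main(void)
{
    char *s;
    /* Every caller, at every level, has to check and propagate. */
    if (str_concat("foo", "bar", &s) != STATUS_OK)
        return 1;
    printf("%s\n", s);
    free(s);
    return 0;
}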

-- BKS

__
Do You Yahoo!?
Listen to your Yahoo! Mail messages from any phone.
http://phone.yahoo.com



RE: SV: Parrot multithreading?

2001-09-28 Thread Hong Zhang


  This is fine at the target language level (e.g. perl6, python, jako,
  whatever), but how do we throw catchable exceptions up through six or
  eight levels of C code? AFAICS, this is more of why perl5 uses the
  JMP_BUF stuff - so that XS and functions like sv_setsv() can
  Perl_croak() without caring about who's above them in the call stack.
 
 This is my point exactly.

This is the wrong assumption. If you don't care about the call stack,
how can you expect [sig]longjmp to successfully unwind the stack?
The caller may have a malloc'd memory block, or have entered a mutex,
or have acquired the file lock on the Perl CVS directory. You probably
have to call Dan or Simon for the last case.
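
Concretely, the hazard: longjmp skips every cleanup between the throw point
and the setjmp, so a lock taken by an intermediate caller stays held
forever. A small pthreads illustration (not Parrot code):

#include <pthread.h>
#include <setjmp.h>

static jmp_buf catch_point;
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static void helper_that_croaks(void)
{
    longjmp(catch_point, 1);              /* unwinds past the caller below */
}

static void caller(void)
{
    pthread_mutex_lock(&table_lock);
    helper_that_croaks();
    pthread_mutex_unlock(&table_lock);    /* never reached: lock is leaked */
}

int main(void)
{
    if (setjmp(catch_point) == 0)
        caller();
    /* table_lock is still held here; the next lock attempt deadlocks. */
    return 0;
}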

 The alternative is that _every_ function simply return a status, which
 is fundamentally expensive (your real retval has to be an out
 parameter, to start with).

This is the only right solution generally. If you really really really
know everything between setjmp and longjmp, you can use it. However,
the chance is very low.

 To answer my own question (at least, with regards to Solaris), the
 attributes(5) man page says that 'Unsafe' is defined thus:
 
  An Unsafe library contains global and static data that is not
  protected.  It is not safe to use unless the application arranges for
  only one thread at time to execute within the library. Unsafe
  libraries may contain routines that are Safe;  however, most of the
  library's routines are unsafe to call.
 
 This would imply that in the worst case (at least for Solaris) we could
 just wrap calls to [sig]setjmp and [sig]longjmp in a mutex.  'croak'
 happens relatively infrequently anyway.

This is not the point. [sig]setjmp and [sig]longjmp are generally
safe outside a signal handler. Even if they were not safe, we could easily
write our own thread-safe version with a very small amount of assembly
code. The problem is that they cannot be used inside a signal handler
under MT, and it is (almost) impossible to write a thread-safe version.

Hong



RE: SV: Parrot multithreading?

2001-09-28 Thread Benjamin Stuhl

--- Hong Zhang [EMAIL PROTECTED] wrote:
 
   This is fine at the target language level (e.g. perl6, python, jako,
   whatever), but how do we throw catchable exceptions up through six or
   eight levels of C code? AFAICS, this is more of why perl5 uses the
   JMP_BUF stuff - so that XS and functions like sv_setsv() can
   Perl_croak() without caring about who's above them in the call stack.
  
  This is my point exactly.
 
 This is the wrong assumption. If you don't care about the call stack,
 how can you expect [sig]longjmp to successfully unwind the stack?
 The caller may have a malloc'd memory block,

Irrelevant with a GC.

 or have entered a mutex,

If they're holding a mutex over a function call without a
_really_ good reason, it's their own fault.

 or have acquired the file lock on the Perl CVS directory. You probably
 have to call Dan or Simon for the last case.
 
  The alternative is that _every_ function simply return a
  status, which is fundamentally expensive (your real retval has to be
  an out parameter, to start with).
 
 This is the only right solution generally. If you really really really
 know everything between setjmp and longjmp, you can use it. However,
 the chance is very low.

It is also slow, and speed is priority #1.

[snip, snip]
 code. The problem is that they cannot be used inside a signal handler
 under MT, and it is (almost) impossible to write a thread-safe version.

Signals are an event, and so don't need jumps. Under MT,
it's not like there would be a lot of contention for
PAR_jump_lock.

-- BKS

__
Do You Yahoo!?
Listen to your Yahoo! Mail messages from any phone.
http://phone.yahoo.com



RE: SV: Parrot multithreading?

2001-09-28 Thread Hong Zhang

  This is the wrong assumption. If you don't care about the call stack,
  how can you expect [sig]longjmp to successfully unwind the stack?
  The caller may have a malloc'd memory block,
 
 Irrelevant with a GC.

Are you serious? Do you mean I can not use malloc in my C code?

  or have entered a mutex,
 
 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

If you don't care about the caller, why should the caller care about you?
Why do callers need to present their reasons for locking a mutex?
You ask too much.

  or have acquired the file lock on the Perl CVS directory. You probably
  have to call Dan or Simon for the last case.
  
   The alternative is that _every_ function simply return a
   status, which is fundamentally expensive (your real retval has to be
   an out parameter, to start with).
  
  This is the only right solution generally. If you really really really
  know everything between setjmp and longjmp, you can use it. However,
  the chance is very low.
 
 It is also slow, and speed is priority #1.

If so, just use C, which doesn't check anything.

 Signals are an event, and so don't need jumps. Under MT,
 it's not like there would be a lot of contention for
 PAR_jump_lock.

Show me how to convert SIGSEGV into an event. Please read the previous
messages. Some signals are events, some are not.

Hong



Re: SV: Parrot multithreading?

2001-09-28 Thread Alan Burlison


  or have entered a mutex,
 
 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

Rubbish.  It is common to take out a lock in an outer function and then
to call several other functions under the protection of the lock.

   The alternative is that _every_ function simply return a
   status, which is fundamentally expensive (your real retval has to be
   an out parameter, to start with).

Are we talking 'expensive in C' or 'expensive in parrot?'

 It is also slow, and speed is priority #1.

As far as I'm aware, trading correctness for speed is not an option.

-- 
Alan Burlison
--
$ head -1 /dev/bollocks
effectively incubate innovative network infrastructures



Re: SV: Parrot multithreading?

2001-09-28 Thread Alan Burlison

Benjamin Stuhl wrote:

 Again, having a GC makes things easier - we clean up
 anything we lost in the GC run. If they don't actually work
 (are there any platforms where they don't work?), we can
 always write our own ;-).

I eagerly await your design for a mutex and CV garbage collector.

-- 
Alan Burlison
--
$ head -1 /dev/bollocks
systematically coordinate e-business transactional integrity



Re: SV: Parrot multithreading?

2001-09-28 Thread Dan Sugalski

At 11:56 PM 9/28/2001 +0100, Alan Burlison wrote:

   or have entered a mutex,
 
  If they're holding a mutex over a function call without a
  _really_ good reason, it's their own fault.

Rubbish.  It is common to take out a lock in an outer function and then
to call several other functions under the protection of the lock.

And every vtable function on shared variables has the potential to acquire a 
mutex. Possibly (probably) more than one.

  It is also slow, and speed is priority #1.

As far as I'm aware, trading correctness for speed is not an option.

No, it isn't.

Short answer, longjmp is out. If we can find a way to use it, or something 
like it, safely on some platforms we might, but otherwise no.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-25 Thread Bart Lateur

On Thu, 20 Sep 2001 14:04:43 -0700, Damien Neil wrote:

On Thu, Sep 20, 2001 at 04:57:44PM -0400, Dan Sugalski wrote:
 For clarification: do you mean async I/O, or non-blocking I/O?
 
 Async. When the interpreter issues a read, for example, it won't assume the 
 read completes immediately.

That sounds like what I would call non-blocking I/O. 

Nonono. Nonblocking IO returns immediately. Async IO lets the
interpreter go on with another thread, until the read is done.
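
In (hedged) POSIX terms the difference looks roughly like this; it is
purely an illustration of the two models, not anything Parrot commits to:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static char buf[4096];

void nonblocking_read(int fd)
{
    /* Non-blocking: the call itself always returns at once, possibly
       with no data (EAGAIN); the caller has to come back and retry. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    (void)read(fd, buf, sizeof buf);
}

void async_read(int fd)
{
    /* Async (POSIX aio): the request is queued and completes on its own;
       the interpreter keeps running and checks back later. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    aio_read(&cb);

    while (aio_error(&cb) == EINPROGRESS)
        ;   /* go do other work here instead of spinning */
}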

-- 
Bart.



Re: SV: Parrot multithreading?

2001-09-25 Thread Bryan C . Warnock

 On Monday 24 September 2001 11:54 am, Dan Sugalski wrote:
  Odds are you'll get per-op event checking if you enable debugging, since
  the debugging oploop will really be a generic check event every op
  loop that happens to have the pending debugging event bit permanently
  set. Dunno whether we want to force this at compile time or consider
  some way to set it at runtime. I'd really like to be able to switch
  oploops dynamically, but I can't think of a good way to do that
  efficiently.

On a side note, back when I was doing some of my initial benchmarking, I 
came up with this solution to the opcode loop / event check conundrum: 
eventless events.  (An attempt to integrate opcodes, events, and priorities.)
For those that want the executive summary, it worked, but was so slow (slow 
as in measured-in-multiples-rather-than-percentages slow) that I never 
pursued it further.  (Particularly because checking a flag is so relatively 
inexpensive, really.)

Currently, the DO_OP loop is essentially a 1x1 table for opcode dispatch. 
(By 1x1, I mean one priority level, one pending opcode deep.)  Events are a 
completely separate beast.  

So I elected to abstract an event as a set series of opcodes that run at a 
given priority, as would be referenced (basically) by the head of that 
particular branch of the opcode tree.  I set an arbitrary number of (and 
meaning to) priorities, from signals to async i/o to user-defined callbacks.

To remove the last vestige of distinction between regular opcodes and 
events, I abstracted regular code as a single event that ran at the lowest 
priority.  (Or the next-to-lowest.  I was contemplating, at one point, 
having BEGIN, INIT, CHECK, and END blocks implemented in terms of priority.) 
So now every opcode stream is an event, or every event is an opcode stream; 
depending on how you care to look at it.

So now you have a 'p' x 1 table for opcode dispatch, where 'p' is the number
of different possible run-levels within the interpreter, with one pending 
opcode (branch head) per runlevel.

But, of course, you can have pending events.  Given our (Uri, Dan, Simon, 
and I - way back at Uri's BOF at the OSCon) previous agreement that 
events at a given priority shouldn't preempt an already scheduled event at 
that priority, we needed a way to queue events so that they weren't lost but 
would still be processed at the correct time (according to our scheduler).  
So I lengthened the width of the table to handle 'e' events.

I've now a 'p' x 'e' table.  (Implemented as an array ['p'] of linked lists 
['e'].)  Now to offload the event overhead onto the events themselves.

Each interpreter has its current priority available.  The DO_OP loop uses 
that priority as the offset into the dispatch table (up the 'p' axis).  The 
first opcode in the list is what gets executed.  That opcode, in turn, then 
updates itself (the table entry) to point to the next opcode within the 
particular event.

When a new event arrives, it appends its branch head to the priority list, 
and repoints the interpreter's current priority if it is now the highest.  
(This, in effect, suspends the current opcode stream, and the DO-OP loop 
begins processing the higher-level code immediately.  When regular 
processing resumes, it picks up more or less exactly from where it left off.)

When the event exits, it deletes its own node in the linked list, and, if 
it were the last branch at that priority,  repoints the current priority to 
the next highest priority that needs to be processed.  It took a 
while to come up with the necessary incantations to Do The Right Thing when 
the priority switchers were themselves interrupted by an event at a higher, 
lower, or identical priority to the one that was just leaving.
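
A guess at the shape of that table in C (his actual code isn't in the
thread, and every name below is invented): an array indexed by priority,
each slot holding a linked list of pending opcode streams, with dispatch
always pulling from the head of the current priority.

typedef long opcode_t;                     /* placeholder, not Parrot's type */

/* One queued "event" is just an opcode stream: the next op of that
   stream to execute, plus its link in the per-priority list. */
typedef struct event {
    opcode_t     *pc;
    struct event *next;
} event_t;

#define NUM_PRIORITIES 8                   /* arbitrary, as in the experiment */

typedef struct dispatch_table {
    event_t *head[NUM_PRIORITIES];         /* the 'p' x 'e' table             */
    int      current_priority;             /* highest slot with work queued   */
} dispatch_table_t;

/* The oploop fetches its next op through two pointer indirections -
   exactly the cost being compared against a plain flag test below. */
#define NEXT_OP(t)  ((t)->head[(t)->current_priority]->pc)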

Sure, events were a lot hairier themselves than how they currently look, but 
events and priorities are still rather non-existent on paper - who knows how 
hairy they may become to work properly.  Besides, cleaning up the opcode 
dispatch itself was supposed to make up the difference.

For those of you playing along at home, I'm sure you obviously see why 
*that's* not the case.  Testing equality is one of the more efficient 
processor commands; more so when testing for non-zero (on machines that have 
a zero-register, or support a test for non-zero).  Which is all a check 
against an event flag would do.  Instead, I replaced it with doubly 
indirected pointer dereferencing, which is not only damn inefficient (from a 
memory, cache, and paging perspective), but also can't be optimized into 
something less heinous.

An oft-mentioned (most recently by Simon on language-dev) lament WRT Perl 6 
is the plethora of uninformed-ness from contributors.  So I am now informed. 
And so are you, if you weren't already.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: SV: Parrot multithreading?

2001-09-24 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

   do we always emit one in
   loops?

  DS At least one per statement, probably more for things like regexes.

   what about complex conditional code? i don't think there is an
   easy way to guarantee events are checked with inserted op codes. doing
   it in the op loop is better for this.

  DS I'd agree in some cases, but I don't think it'll be a big problem
  DS to get things emitted properly. (It's funny we're arguing exactly
  DS opposite positions than we had not too long ago... :)

true!

then what about a win/win? we could make the event checking style a
compile time option. an event pragma will set it to emit op codes, or
check in the op loop or do no checking in the loop but have a main
event loop. we need 2 or 3 variant op loops for that (very minor
variants) and some minor compile time conditions. i just like to be able
to offer control to the coder. we can make the emit event checks version
the default as that will satisfy the most users with the least trouble.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: SV: Parrot multithreading?

2001-09-24 Thread Michael Maraist


 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way to
 set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.

If you're looking to dynamically insert status checks every op, then
that sounds like picking a different runops function.  We've already got a
trace variant.  We could farm out a couple of these and have execution
flags specify which one to use.  If you wanted every 5th op to check
flags, you could trivially do:

while (code) {
  DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  CHECK_EVENTS(interp);
}

The inner loop is a little bigger, but aside from cache-issues, has no
performance overhead.  This would prevent having to interleave check-ops
everywhere (more importantly, it would reduce the complexity of the
compiler, which would have to guarantee the injection of check-events inside
all code-paths, especially for complex flow-control like last FOO).
You could use asynchronous timers to set various flags in the check-events
section (such as gc every so-often).  Of course this requires using a more
sophisticated alarm/sleep control system than the simple wrapper around
alarm/sleep and $SIG{X}, etc.

Other methods might be: whenever a dynamic variable reference is
reassigned / derefed, an event flag is set to queue the GC, etc.

-Michael




Re: SV: Parrot multithreading?

2001-09-24 Thread Michael Maraist

 then what about a win/win? we could make the event checking style a
 compile time option.

 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way to
 set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.


long-jump!!!

runops(bla bla) {
  setjmp(..);
  switch (flags) {
    case FAST:          fast_runops(bla bla);          break;
    case DEBUG:         debug_runops(bla bla);         break;
    case TRACE:         trace_runops(bla bla);         break;
    case CONSERVATIVE:  conservative_runops(bla bla);  break;
    case THREAD_SAFE:   thread_safe_runops(bla bla);   break;
  }
}

AUTO_OP sys_opcode_change_runops {
  bla bla
  set run-flags..
  longjmp(..)
}

In C++ I'd say throw the appropriate exception, but this is close enough.

This would work well for fake-threads too, since each thread might have a
different desired main-loop.  You'd have to do something like this if you
transitioned between non-threaded and threaded anyway.

-Michael




Re: SV: Parrot multithreading?

2001-09-24 Thread Dan Sugalski

At 12:27 PM 9/24/2001 -0400, Michael Maraist wrote:
  then what about a win/win? we could make the event checking style a
  compile time option.
 
  Odds are you'll get per-op event checking if you enable debugging, since
  the debugging oploop will really be a generic check event every op loop
  that happens to have the pending debugging event bit permanently set.
  Dunno whether we want to force this at compile time or consider some way to
  set it at runtime. I'd really like to be able to switch oploops
  dynamically, but I can't think of a good way to do that efficiently.
 

long-jump!!!

I did say *good* way... :)

This would work well for fake-threads too

We're not doing fake threads. Luckily we don't need it for real ones.


Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-24 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

   then what about a win/win? we could make the event checking style a
   compile time option.

  DS Odds are you'll get per-op event checking if you enable debugging,
  DS since the debugging oploop will really be a generic check event
  DS every op loop that happens to have the pending debugging event
  DS bit permanently set.  Dunno whether we want to force this at
  DS compile time or consider some way to set it at runtime. I'd really
  DS like to be able to switch oploops dynamically, but I can't think
  DS of a good way to do that efficiently.

hmmm. what about a special op that implements another form of op loop?
the overhead is almost nil (one op call). the called op loop can run
forever or decide to return and then the parent op loop takes over
again.

this would be very cool for event loop management. you could force a
scan of events explicitly by making a call to an event flag checking loop
when you feel like it in some large crunching code. similarly, you could
enable a debug/trace/event flag loop explicitly at run time. we would
need some form of language support for this but it is nothing odd. just
a special var or call that selects a loop type. the parrot code
generated is just the op loop set function. it could be block scoped or
global (which means all code/calls below this use it).
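
something like this, perhaps (names invented, nothing here is existing
parrot code): the special op just calls the other loop, and the parent
loop resumes from whatever pc comes back.

typedef long opcode_t;                       /* placeholder types */
typedef struct interp interp_t;

/* the alternate loop; it returns the pc it stopped at */
extern opcode_t *runops_event_checking(interp_t *interp, opcode_t *pc);

/* a special op: hand control to the other loop, then let the parent
   loop carry on from whatever pc the nested loop returns */
opcode_t *op_switch_oploop(opcode_t *pc, interp_t *interp)
{
    return runops_event_checking(interp, pc + 1);   /* skip this op itself */
}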

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: SV: Parrot multithreading?

2001-09-24 Thread David M. Lloyd

On Mon, 24 Sep 2001, Uri Guttman wrote:

then what about a win/win? we could make the event checking style a
compile time option.

   DS Odds are you'll get per-op event checking if you enable debugging,
   DS since the debugging oploop will really be a generic check event
   DS every op loop that happens to have the pending debugging event
   DS bit permanently set.  Dunno whether we want to force this at
   DS compile time or consider some way to set it at runtime. I'd really
   DS like to be able to switch oploops dynamically, but I can't think
   DS of a good way to do that efficiently.

 hmmm. what about a special op that implements another form of op loop?
 the overhead is almost nil (one op call). the called op loop can run
 forever or decide to return and then the parent op loop takes over
 again.

This type of approach could be implemented in an extension module, could
it not?  Because of the current flexible design of Parrot, we don't have
to implement this type of opcode into the core any more than, say fork. Do
we?

- D

[EMAIL PROTECTED]




Re: SV: Parrot multithreading?

2001-09-24 Thread Bryan C . Warnock

On Monday 24 September 2001 11:54 am, Dan Sugalski wrote:
 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way
 to set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.

Embed (them) within an outer loop (function).  Program end would propagate 
the finish.  Otherwise, simply redirect to a new runops routine.  
Potentially increases the call-stack by one, but performance hit only occurs 
during the switch.  Or you could collapse it all, if you have a fixed 
number, into a switch.  

runops ( ... )
{
    run_ops_t run_ops_type = BLUE_MOON;

    while (opcode != END) {

        switch (run_ops_type) {

        /* I want those events checked... */
        case YESTERDAY:
            while (opcode == VALID) { DO_OP1(); }
            break;

        /* Check the events every... */
        case NOW_AND_THEN:
            while (opcode == VALID) { DO_OP2(); }
            break;

        /* Look for an event once in a... */
        case BLUE_MOON:
            while (opcode == VALID) { DO_OP3(); }
            break;

        /* I'll check for an event when... */
        case HELL_FREEZES_OVER:
            while (opcode == VALID) { DO_OP4(); }
            break;
        }

        run_ops_type = new_runops_loop(I, opcode);
    }
    /* yada yada yada */
}
  


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: SV: Parrot multithreading?

2001-09-21 Thread Dan Sugalski

At 09:07 PM 9/20/2001 -0400, Uri Guttman wrote:
  DS == Dan Sugalski [EMAIL PROTECTED] writes:


   DS There probably won't be any. The current thinking is that since
   DS the ops themselves will be a lot smaller, we'll have an explicit
   DS event checking op that the compiler will liberally scatter through
   DS the generated code. Less overhead that way.

we talked about that solution before and i think it has some
problems. what if someone writes a short loop. will it generate enough
op codes that a check_event one is emitted?

The compiler will make sure, yes.

do we always emit one in
loops?

At least one per statement, probably more for things like regexes.

what about complex conditional code? i don't think there is an
easy way to guarantee events are checked with inserted op codes. doing
it in the op loop is better for this.

I'd agree in some cases, but I don't think it'll be a big problem to get 
things emitted properly. (It's funny we're arguing exactly opposite 
positions than we had not too long ago... :)


Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 03:59 PM 9/20/2001 +0200, Arthur Bergman wrote:
While it has been decided that perl should be using ithread-like 
threading, I guess that is irrelevant at the parrot level. Are you
going to have one virtual cpu per thread with its own set of registers 
or are you going to context switch the virtual cpu?

What we're going to do is fire up a new interpreter for each thread. (We 
may have a pool of prebuilt interpreters hanging around for this 
eventuality) Threading *is* essential at the parrot level, and there are 
even a few (as yet undocumented) opcodes to deal with them, and some stuff 
that's an integral part of the variable vtable code to deal with it. 
Whether it's considered ithread-like or not's up in the air--it'll probably 
look a lot like a mix.

I'm also seriously considering throwing *all* PerlIO code into separate 
threads (one per file) as an aid to asynchrony.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 AB == Arthur Bergman [EMAIL PROTECTED] writes:

  AB In an effort to rest my brain from a coredumping perl5 I started
  AB to think a bit on threading under parrot?  While it has been
  AB decided that perl should be using ithread-like threading, I guess
  AB that is irrelevant at the parrot level. Are you going to have one
  AB virtual cpu per thread with its own set of registers or are you
  AB going to context switch the virtual cpu?

it is not irrelevant IMO. since each thread will have a private parrot
interpreter, then parrot must minimize any globals so it can be almost
all stack based. this means the parrot control structure must be
malloc'd and only one pointer to it must be in some sort of global space
(for thread management). this structure will manage the PC, the stacks,
thread global vars, memory management (with or without gc?), etc.

  AB If it was one virtual cpu per thread then one would just create a
  AB new virtual cpu and feed it the bytecode stream?

that is the idea as i have understood it.

  AB Is there anything I could help with regarding this?

i think we need to design the (async) i/o and event subsystems either
before or in parallel to the thread subsystem. they will all be coupled
in various ways and it is better to do all the design first so you don't
have awkward interfaces later.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

  DS I'm also seriously considering throwing *all* PerlIO code into separate 
  DS threads (one per file) as an aid to asynchrony.

but that will be hard to support on systems without threads. i still
have that internals async i/o idea floating in my numb skull. it is an
api that would look async on all platforms and will use the kernel async
file i/o if possible. it could be made thread specific easily as my idea
was that the event system was also thread specific.

as i just got my home boxes reorganized and my wife is actually getting
independent (drives herself now) from her broken leg, i will have some
more time to burn here. 

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



RE: Parrot multithreading?

2001-09-20 Thread Hong Zhang


   DS I'm also seriously considering throwing *all* PerlIO code into
 separate 
   DS threads (one per file) as an aid to asynchrony.
 
 but that will be hard to support on systems without threads. i still
 have that internals async i/o idea floating in my numb skull. it is an
 api that would look async on all platforms and will use the kernel async
 file i/o if possible. it could be made thread specific easily as my idea
 was that the event system was also thread specific.
 
I think we should have some thread abstraction layer instead of throwing
PerlIO into threads. The thread abstraction layer can use either a native
thread package (blocking io), or implement a user-level thread package
with either non-blocking io or async io. The internal io should be sync
instead of async. Async is normally slower than sync (most Unixes don't
have real async io), and threads are cheap.

Hong



Re: Parrot multithreading?

2001-09-20 Thread Rocco Caputo

On Thu, Sep 20, 2001 at 12:33:54PM -0700, Hong Zhang wrote:
 
DS I'm also seriously considering throwing *all* PerlIO code into
  separate 
DS threads (one per file) as an aid to asynchrony.
  
  but that will be hard to support on systems without threads. i still
  have that internals async i/o idea floating in my numb skull. it is an
  api that would look async on all platforms and will use the kernel async
  file i/o if possible. it could be made thread specific easily as my idea
  was that the event system was also thread specific.

 I think we should have some thread abstraction layer instead of throwing
 PerlIO into threads. The thread
 abstraction layer can use either native thread package (blocking io), or
 implement user level thread package
 with either non-blocking io or async io. The internal io should be sync
 instead of async. async is normally
 slower than sync (most of unix don't have real async io), and thread is
 cheap.

I agree.  Threads, at least in spirit, provide a cleaner interface to
threading and asynchronous I/O than user level callbacks.  There's
nothing stopping a compiler from generating event driven code out of
procedural, perhaps even threaded code.  Consider this:
 
  sub read_console {
    print while (<STDIN>);
  }

  sub read_log {
    print while (<LOG>);
  }

  Thread->new( \&read_console );
  Thread->new( \&read_log );
  sleep 1 while threads_active();
  exit 0;

A compiler can either generate threaded code that's pretty close to
the original source, or it can generate asynchronous callbacks at the
bytecode level.  The same source code could compile and run on systems
that support asynchronous I/O, threads, or both.

Here's some parrot assembly that may or may not be legal at the
moment.  It shows what a compiler might do with the threaded source
code on a system that only supported asynchronous I/O.

read_console_entry:
  find_global  P1, STDIN, main
  set  I1, read_console_got_line
  jsr  set_input_callback
  return
read_console_got_line:
  # Assumes the AIO engine sets S1 with a read line.
  print S1
  return

read_log_entry:
  find_global  P1, LOG, main
  set  I1, read_log_got_line
  jsr  set_input_callback
  return
read_log_got_line:
  # Assumes the AIO engine sets S1 with a read line.
  print S1
  return

main:
  set  I1, read_console_entry
  jsr  thread_new
  set  I1, read_log_entry
  jsr  thread_new
main_timer_loop:
  set  I1, main_timer_done
  set  I2, 1
  jsr  set_timer
  return
main_timer_done:
  jsr  threads_active
  ne   I1, 0, main_timer_loop
  end

__END__

-- Rocco Caputo / [EMAIL PROTECTED] / poe.eekeek.org / poe.sourceforge.net



Re: Parrot multithreading?

2001-09-20 Thread Michael L Maraist

Arthur Bergman wrote:

 In an effort to rest my brain from a coredumping perl5 I started to think a bit on 
threading under parrot?

 While it has been decided that perl should be using ithread-like threading, I guess 
that is irrelevant at the parrot level. Are you
 going to have one virtual cpu per thread with its own set of registers or are you 
going to context switch the virtual cpu?

 If it was one virtual cpu per thread  then one would just create a new virtual cpu 
and feed it the bytecode stream?

 Is there anything I could help with regarding this?

 Arthur

The context is almost identical to that of Perl5's MULTIPLICITY, which passes the 
perl-interpreter to each op-code.  Thus there is inherent support for multiple 
ithread-streams.  In the main-loop (between each invoked op-code) there is an 
event-checker (or was in older versions at any rate).  It doesn't do anything yet, 
but it would make sense to assume that this is where context-switches would occur, 
which would simply involve swapping out the current pointer to the perl-context; 
a trivial matter.

The easiest threading model I can think of would be to have a global var called 
next_interpreter which is always loaded in the do-loop.  An asynchronous timer 
(or event) could cause the value of next_interpreter to be swapped.  This way no 
schedule function need be checked on each operation.  The cost is that of an extra 
indirection once per op-code.
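
A sketch of that model (names invented; this is not Parrot code): the timer
handler only repoints a global, and the do-loop pays one extra load per op
to pick it up.

typedef long opcode_t;                        /* placeholder types */
typedef struct interp interp_t;

extern opcode_t *do_one_op(interp_t *interp, opcode_t *pc);   /* made-up */

/* The scheduler (e.g. an async timer handler) just swaps this pointer. */
static interp_t * volatile next_interpreter;

void schedule(interp_t *next)
{
    next_interpreter = next;
}

void run(opcode_t *pc)
{
    while (pc) {
        interp_t *cur = next_interpreter;     /* one extra load per op */
        pc = do_one_op(cur, pc);
    }
}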

True MT code simply has each thread use its own local interpreter instance.  MT-code 
is problematic with non MT-safe extensions (since you can't enforce that).

In iThread, you don't have a problem with atomic operations, but you can't take 
advantage of multiple CPUs nor can you guarantee prevention of IO-blocking (though 
you can get sneaky with UNIX-select).

-Michael




SV: Parrot multithreading?

2001-09-20 Thread Arthur Bergman



 Arthur Bergman wrote:
 
  In an effort to rest my brain from a coredumping perl5 I started to think a bit 
on threading under parrot?
 
  While it has been decided that perl should be using ithread-like threading, I 
guess that is irrelevant at the parrot level. Are you
  going to have one virtual cpu per thread with its own set of registers or are 
you going to context switch the virtual cpu?
 
  If it was one virtual cpu per thread  then one would just create a new virtual cpu 
and feed it the bytecode stream?
 
  Is there anything I could help with regarding this?
 
  Arthur
 
 The context is almost identical to that of Perl5's MULTIPLICITY which passes the 
perl-interpreter to each op-code.  Thus there is
 inherent support for multiple ithread-streams.  In the main-loop (between each 
invoked op-code) there is an event-checker (or was in
 older versions at any rate).  It doesn't do anything yet, but it would make sence to 
assume that this is where context-switches
 would occur, which would simply involve swapping out the current pointer to the 
perl-context; A trivial matter.

Uhm, are you talking perl 5 here? The event checker checks for signals; we've got safe 
signals now. MULTIPLICITY is just allowing multiple interpreters; ithreads is letting 
them run at the same time and properly cloning them. If you want to use it to switch 
interpreters at runtime for fake threads, patches are welcome - send one and I will 
apply it.

 The easiest threading model I can think of would be to have a global var called 
next_interpreter which is always loaded in the
 do-loop.  An asynchronous timer (or event) could cause the value of 
next_interpreter to be swapped.  This way no schedule
 function need be checked on each operation.  The cost is that of an extra 
indirection once per op-code.

 True MT code simply has each thread use it's own local interpreter instance.  
MT-code is problematic with non MT-safe extensions
 (since you can't enforce that).

I am sorry to say, but perl 5 is true MT.

 In iThread, you don't have a problem with atomic operations, but you can't take 
advantage of multiple CPUs nor can you garuntee
 prevention of IO-blocking (though you can get sneaky with UNIX-select).
 

Where did you get this breaking info? ithread works with multiple CPUs and IO blocking 
is not a problem.

Arthur




Re: Parrot multithreading?

2001-09-20 Thread Michael L Maraist



 What we're going to do is fire up a new interpreter for each thread. (We
 may have a pool of prebuilt interpreters hanging around for this
 eventuality) Threading *is* essential at the parrot level, and there are
 even a few (as yet undocumented) opcodes to deal with them, and some stuff
 that's an integral part of the variable vtable code to deal with it.
 Whether it's considered ithread-like or not's up in the air--it'll probably
 look a lot like a mix.

 I'm also seriously considering throwing *all* PerlIO code into separate
 threads (one per file) as an aid to asynchrony.


Just remember the cost in context-switching, plus the lack of scalability as
the number of file-handles increases.  Linux thread-context-switches are
relatively brutal compared to say Solaris.  Additionally you're consuming a
new stack area for each file-handle.  That's lots of overhead.

There are bound to be semi-portable methods of non-blocking IO.  UNIX-select
has to have an equiv on NT.  Granted it's a lot more complicated.  Basically
you could have IO ops trigger an event-based iThread swap, which causes select
to be invoked.  I've always thought this was the most efficient model for
single-CPU machines.  The biggest problem was always segmenting one's code
into call-backs.  Well, with op-codes, we have a natural division.  We have a
tight inner loop that occasionally hits a dispatcher on a complexity level
similar to a GUI.  You're much better placed to handle event-based operations
(which means higher level languages can be built atop such a parrot-design).

Food for thought.

-Michael




Re: Parrot multithreading?

2001-09-20 Thread Rocco Caputo

On Thu, Sep 20, 2001 at 04:13:48PM -0400, Michael L Maraist wrote:
 
  What we're going to do is fire up a new interpreter for each thread. (We
  may have a pool of prebuilt interpreters hanging around for this
  eventuality) Threading *is* essential at the parrot level, and there are
  even a few (as yet undocumented) opcodes to deal with them, and some stuff
  that's an integral part of the variable vtable code to deal with it.
  Whether it's considered ithread-like or not's up in the air--it'll probably
  look a lot like a mix.
 
  I'm also seriously considering throwing *all* PerlIO code into separate
  threads (one per file) as an aid to asynchrony.
 
 Just remember the cost in context-switching, plus the lack of scalability as
 the number of file-handles increases.  Linux thread-context-switches are
 relatively brutal compared to say Solaris.  Additionally you're consuming a
 new stack area for each file-handle.  That's lots of overhead.


One idea I haven't seen mentioned is have a fixed number of system
threads to service a potentially larger pool of parrot interpreters.
Essentially, physical threads become execution pipelines for the
virtual machine.  The limit on system threads can be tuned to
optimally spread execution across available CPUs.  It could be as
small as 1 on single-processor systems that don't switch thread
contexts well.
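
A rough shape for that worker-pool idea (pthreads; every name below is
invented): N system threads pull runnable interpreters off a shared queue,
so the thread count can be tuned independently of the interpreter count.

#include <pthread.h>

typedef struct interp interp_t;

/* the shared run queue of interpreters wanting CPU time */
extern interp_t *dequeue_runnable(void);        /* blocks until work exists */
extern void      run_for_a_while(interp_t *i);  /* run a bounded slice      */
extern void      requeue(interp_t *i);          /* put it back if not done  */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        interp_t *i = dequeue_runnable();
        run_for_a_while(i);
        requeue(i);
    }
    return NULL;
}

/* Tune nthreads to the number of CPUs; 1 on boxes that switch badly. */
void start_pipeline_threads(int nthreads)
{
    while (nthreads-- > 0) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
    }
}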

-- Rocco Caputo / [EMAIL PROTECTED] / poe.perl.org / poe.sourceforge.net





RE: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 12:33 PM 9/20/2001 -0700, Hong Zhang wrote:

DS I'm also seriously considering throwing *all* PerlIO code into
  separate
DS threads (one per file) as an aid to asynchrony.
 
  but that will be hard to support on systems without threads. i still
  have that internals async i/o idea floating in my numb skull. it is an
  api that would look async on all platforms and will use the kernel async
  file i/o if possible. it could be made thread specific easily as my idea
  was that the event system was also thread specific.
 
I think we should have some thread abstraction layer instead of throwing
PerlIO into threads. The thread abstraction layer can use either native 
thread package (blocking io), or implement user level thread package
with either non-blocking io or async io.

I did say I was seriously considering it, not that I was going to do it. We 
may well just throw the PerlIO stuff (at least anything with a filter) into 
separate interpreters rather than separate threads. We'll see.

The internal io should be sync instead of async.

Nope. Internal I/O, at least as the interpreter will see it is async. You 
can build sync from async, it's a big pain to build async from sync. 
Doesn't mean we actually get asynchrony, just that we can.

async is normally slower than sync (most of unix don't have real async 
io), and thread is cheap.

Just because some systems have a really pathetic I/O system doesn't mean we 
should penalize those that don't...

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 04:36 PM 9/20/2001 -0400, Rocco Caputo wrote:
On Thu, Sep 20, 2001 at 04:13:48PM -0400, Michael L Maraist wrote:
  
   What we're going to do is fire up a new interpreter for each thread. (We
   may have a pool of prebuilt interpreters hanging around for this
   eventuality) Threading *is* essential at the parrot level, and there are
   even a few (as yet undocumented) opcodes to deal with them, and some 
 stuff
   that's an integral part of the variable vtable code to deal with it.
   Whether it's considered ithread-like or not's up in the air--it'll 
 probably
   look a lot like a mix.
  
   I'm also seriously considering throwing *all* PerlIO code into separate
   threads (one per file) as an aid to asynchrony.
 
  Just remember the cost in context-switching, plus the lack of 
 scalability as
  the number of file-handles increases.  Linux thread-context-switches are
  relatively brutal compared to say Solaris.  Additionally you're consuming a
  new stack area for each file-handle.  That's lots of overhead.


One idea I haven't seen mentioned is have a fixed number of system
threads to service a potentially larger pool of parrot interpreters.
Essentially, physical threads become execution pipelines for the
virtual machine.  The limit on system threads can be tuned to
optimally spread execution across available CPUs.  It could be as
small as 1 on single-processor systems that don't switch thread
contexts well.

That adds a level of complexity to things that I'd as soon avoid. On the 
other hand there's no reason we can't add it in later.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 01:53 PM 9/20/2001 -0700, Damien Neil wrote:
On Thu, Sep 20, 2001 at 04:38:57PM -0400, Dan Sugalski wrote:
  Nope. Internal I/O, at least as the interpreter will see it is async. You
  can build sync from async, it's a big pain to build async from sync.
  Doesn't mean we actually get asynchrony, just that we can.

For clarification: do you mean async I/O, or non-blocking I/O?

Async. When the interpreter issues a read, for example, it won't assume the 
read completes immediately.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: Parrot multithreading?

2001-09-20 Thread Hong Zhang


 Nope. Internal I/O, at least as the interpreter will see it is async. You 
 can build sync from async, it's a big pain to build async from sync. 
 Doesn't mean we actually get asynchrony, just that we can.
 
It is trivial to build async from sync, just using threads. Most Unix async
implementations are built this way, using either user-level threads or
kernel-level threads. Win32 has a real async io implementation, but it does
not interact well with sync io.

 Just because some systems have a really pathetic I/O system doesn't mean
 we 
 should penalize those that don't...
 
Implementing sync on top of async is also slower. I bet most people will use
sync io, not async. There is no need to build async io from sync; the async
can be provided as a separate module.

It is not about some systems, it is about most systems. Very few systems
have a high performance async io implementation. And the semantics are not
very portable.

I am not sure the interpreter has to choose one over the other. The
interpreter could support both interfaces and use them as needed.

Hong



Re: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 DN == Damien Neil [EMAIL PROTECTED] writes:

  DN On Thu, Sep 20, 2001 at 04:57:44PM -0400, Dan Sugalski wrote:
   For clarification: do you mean async I/O, or non-blocking I/O?
   
   Async. When the interpreter issues a read, for example, it won't assume the 
   read completes immediately.

  DN That sounds like what I would call non-blocking I/O.  Async I/O
  DN would involve syscalls like aio_read().

you can't do non-blocking i/o on files without aio_read type calls. but
what dan is saying is that the api the interpreter uses internally will
be an async one. it will either use native/POSIX aio calls or simulate
that with sync calls and callbacks or possibly with threads.

  DN I'm being a bit pedantic here because I've been involved in heated
  DN debates in the past, which were resolved when the two sides realized
  DN that they were using different definitions of async I/O. :

pipe, socket and char device async i/o is different from file async
i/o. with pipes you are told when your request will work and then you
make it. with files you make the request and then get told when it was
done. both use callbacks and can be integrated under one async api. this
api is what parrot will see and a sync api will be layered on top of
this.

the async i/o will be tied to the event system for timeouts and safe
signals and such. 

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: SV: Parrot multithreading?

2001-09-20 Thread Michael L Maraist

Arthur Bergman wrote:

  Arthur Bergman wrote:
 
   In an effort to rest my brain from a coredumping perl5 I started to think a bit 
on threading under parrot?
  
   While it has been decided that perl should be using ithread-like threading, I 
guess that is irrelevant at the parrot level. Are you
   going to have one virtual cpu per thread with its own set of registers or are 
you going to context switch the virtual cpu?
  
   If it was one virtual cpu per thread  then one would just create a new virtual 
cpu and feed it the bytecode stream?
  
   Is there anything I could help with regarding this?
  
   Arthur
 
  The context is almost identical to that of Perl5's MULTIPLICITY which passes the 
perl-interpreter to each op-code.  Thus there is
  inherent support for multiple ithread-streams.  In the main-loop (between each 
invoked op-code) there is an event-checker (or was in
  older versions at any rate).  It doesn't do anything yet, but it would make sence 
to assume that this is where context-switches
  would occur, which would simply involve swapping out the current pointer to the 
perl-context; A trivial matter.

 Uhm, are you talking perl 5 here? The event checker checks for signals, we got safe 
signals now.

There wasn't any code for CHECK_EVENTS w/in Parrot when I first read the source-code.  
I merely assumed that its role was not-yet determined, but considered the possible 
uses.  CHECK_EVENTS seems to be gone at the moment, so it's a moot point.


 MULTIPLICITY is just allowing multiple interpreters, ithreads is letting them run at 
the same time and properly clone them. If you want to use it switch interpreters at 
runtime for fake threads, patches are welcome, send it and I will apply it.



  The easiest threading model I can think of would be to have a global var called 
next_interpreter which is always loaded in the
  do-loop.  An asynchronous timer (or event) could cause the value of 
next_interpreter to be swapped.  This way no schedule
  function need be checked on each operation.  The cost is that of an extra 
indirection once per op-code.
 
  True MT code simply has each thread use it's own local interpreter instance.  
MT-code is problematic with non MT-safe extensions
  (since you can't enforce that).

 I am sorry to say, but perl 5 is true MT.

Yes, but that feature never got past being experimental.  I know of a couple DBDs that 
would not let you compile XS code with MT enabled since they weren't MT-safe.  The 
interpreter can be built MT-safe (java is a good example), but extensions are always 
going to be problematic. (Especially when many extensions are simply wrappers around
existing non-MT-aware APIs).  I think a good solution to them would be to treat it 
like X does (which says you can only run X-code w/in the main-thread).  An extension 
could say whether it was MT-safe or not, and be forced to be serialized w/in the 
main-physical-thread, which becomes the monitoring thread.  An alternative would be to 
simply
have XS code compile in a flag which says to throw an exception if the code is run 
outside of the main-thread;  Documentation would emphatically state that it's up to 
the user to design the system such that only the main-thread calls it.

On the side, I never understood the full role of iThreads w/in perl 5.6.  As far as I 
understood, it was merely used as a way of faking fork on NT by running multiple 
true-threads that don't share any globals.  I'd be curious to learn if there were 
other known uses for it.



  In iThread, you don't have a problem with atomic operations, but you can't take 
advantage of multiple CPUs nor can you garuntee
  prevention of IO-blocking (though you can get sneaky with UNIX-select).
 

 Where did you get this breaking info? ithread works with multiple CPUs and IO 
blocking is not a problem.

 Arthur

I'm under the impression that the terminology for iThreads assumes an independence of 
the physical threading model.  As other posters have noted, there are portability 
issues if we require hardware threading.  Given the prospect of falling back to 
fake-threads, then multi-CPU and IO blocking is problematic; though the latter can be 
avoided
/ minimized if async-IO is somehow enforced.  From my scarce exposure to the Linux 
Java movement, green-threads were considered more stable for a long time, even 
though the porters were just trying to get things to work on one platform.

I would definitely like hardware threading to be available.  If nothing else, it lets 
students taking Operating Systems experiment with threading w/o all the headaches 
of C.  (Granted there's Java, but we like perl.)  However, I'm not convinced that 
threading won't ultimately be restrictive if used for general operation (such as 
for the
IO-subsystem).  I'm inclined to believe that threading is only necessary when the user 
physically wants it (e.g. requests it), and that in many cases fake-threads fulfill 
the basic desires of everyone involved 

Re: SV: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 05:23 PM 9/20/2001 -0400, Michael L Maraist wrote:
There wasn't any code for CHECK_EVENTS w/in Parrot when I first read the 
source-code.  I merely assumed that it's role was not-yet determined, but 
considered the possible uses.  CHECK_EVENTS seems to be gone at the 
moment, so it's a moot point.

There probably won't be any. The current thinking is that since the ops 
themselves will be a lot smaller, we'll have an explicit event checking op 
that the compiler will liberally scatter through the generated code. Less 
overhead that way.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 02:04 PM 9/20/2001 -0700, Damien Neil wrote:
On Thu, Sep 20, 2001 at 04:57:44PM -0400, Dan Sugalski wrote:
  For clarification: do you mean async I/O, or non-blocking I/O?
 
  Async. When the interpreter issues a read, for example, it won't assume 
 the
  read completes immediately.

That sounds like what I would call non-blocking I/O.  Async I/O
would involve syscalls like aio_read().

Might sound that way, but it isn't. What I'm talking about is something like:

READ S3, P1, I0
X: SLEEP 3
EQ I0, 0, X
PRINT S3

Where we issue the read on the filehandle in P1, telling it to store the 
results in S3, and put the completion status in I0. The sleep will 
presumably be replaced by code that actually does something, and we wait as 
long as the completion register says we're not done.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:


  DS There probably won't be any. The current thinking is that since
  DS the ops themselves will be a lot smaller, we'll have an explicit
  DS event checking op that the compiler will liberally scatter through
  DS the generated code. Less overhead that way.

we talked about that solution before and i think it has some
problems. what if someone writes a short loop. will it generate enough
op codes that a check_event one is emitted? do we always emit one in
loops? what about complex conditional code? i don't think there is an
easy way to guarantee events are checked with inserted op codes. doing
it in the op loop is better for this. or of course, go with an event
loop style dispatcher but then the perl level programs need to be
written for that style.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

  DS Might sound that way, but it isn't. What I'm talking about is
  DS something like:

  DS READ S3, P1, I0
  DS X: SLEEP 3
  DS EQ I0, 0, X
  DS PRINT S3

  DS Where we issue the read on the filehandle in P1, telling it to
  DS store the results in S3, and put the completion status in I0. The
  DS sleep will presumably be replaced by code that actually does
  DS something, and we wait as long as the completion register says
  DS we're not done.

and internally the READ op will do an aio_read if it is supported on
this platform. the sleep op is like wait in pdp-11 assembler. there you
could wait for interrupts to wake you up. that sleep op needs to do a
blocking operation like poll/select so it can release the cpu for other
threads/processes. it will be woken up by a signal that the file async
i/o is done

a variation is to have a WAIT op which waits for a particular io handle
to be done. it also will do some blocking select/poll call and let
itself be woken up as above. but it will check for its i/o being done
and go back to blocking sleep if it is not completed.

so you can issue an async i/o request anytime and sync up with it (with
WAIT) later when you want the data.

this model was in RT-11 30 years ago and it works well. you can have
async and sync i/o with a simple set of ops, READ, WRITE and WAIT.

we could also have a WAIT with a wild card arg too. it waits for any
completion of i/o and then other parrot code must check for what has
completed and deal with it.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org