On Mon, 25 Oct 2010 22:09:14 +0200 Juan Jose Garcia-Ripoll <juanjose.garciarip...@googlemail.com> wrote:
Since threads are not a standard part of CL, I'm rather confused about what
should be expected of implementations. POSIX threads are standard, however,
and I have fair experience with them, so I'll ask various questions relative
to them. There were important OS-specific issues to take into consideration
a decade or more ago (e.g. LinuxThreads did not conform in many respects,
notably signal delivery, and some OSs only had user-thread implementations
where blocking in a syscall blocked the whole process), but with the switch
to NPTL on Linux, and the move of more and more OSs from user-level or M:N
implementations to LWP-based 1:1 ones (Solaris, NetBSD), we can expect more
systems to behave as expected (I have no idea about Windows, but I noticed
ECL has Windows-specific threading code). So I'll ask POSIX-thread-centric
questions, as that is what I'm used to for threading.

With multiple POSIX threads, any thread not blocking a signal may be the one
to actually receive it. Is this where setting an appropriate signal mask for
all threads except the one we really want to receive the signals (the
"master" signal thread) controls who receives them, i.e. is this the current
behaviour, with a single master thread receiving those signals? (A small C
sketch of the pattern I have in mind is below, sketch 1.)

> * During I/O, the functions will stop and return when a signal arrives.

If I understand correctly, this corresponds to the interface expected from C
with POSIX sigaction(2) without SA_RESTART, where many blocking system calls
return -1 with errno set to EINTR (sketch 2 below). Personally this is a
model I'm very used to from C. I know that it can be problematic with stdio
(the portable interface being restrictive), and I wonder whether it could
also be with the various high-level CL streams. Should those internal EINTR
errors automatically be propagated as a restartable CL condition?

> * At specific places where checks for pending signals are determined (+)

The "master signal thread" could then decide to actually interrupt another
thread using a signal which that thread does not block (or to queue an event
on that thread's internal message queue, for the thread to check eventually
when it can). Is this what you mean by "at specific places"? This would have
the advantage of needing no signal handling function (or only a minimal
signal handler provided by ECL), and of then resuming signal processing in
normal user code.

> * When user tells ECL to wait for signals.

I.e. signals are propagated by the master thread using pthread_cancel(3),
and the user-visible yield-like function calls pthread_testcancel(3) (with
cancelstate PTHREAD_CANCEL_ENABLE and canceltype PTHREAD_CANCEL_DEFERRED)?
(Sketch 3 below.)

> * As last resort measures: when a SIGSEGV/SIGBUS or during thread
> cancellation.

Does this mean that all signals are blocked (and ignored by all threads)
except those critical ones?

>
> This last case would be the only situation in which one would be allowed to
> exit the signal handler through non-local jump constructs. This would allow
> safe cleanup -- well, not so safe, for the signal may arrive at an
> inconvenient time.
>
> Cons:
> * Deadlocks and infinite loops would only be resolved through the "last
> resort" case.
> * Contrary to people's expectations (?)
>
> Pros:
> * Code can be written without constantly thinking about interrupt safety.
> * POSIX compliant code, cooperates well with the libraries and OS.
> * Cheaper and faster, unless we are forced to include pending signal checks
> everywhere (See (+) above).
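Here are the three sketches I referred to above. They are plain POSIX C,
written from memory to illustrate my mental model rather than ECL's actual
code, and all names in them are mine.

Sketch 1, the "master signal thread" pattern: block the asynchronous signals
in every thread, then let one dedicated thread pick them up synchronously
with sigwait(3).

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigset_t handled;

/* Dedicated "master" thread: receives the blocked signals
   synchronously with sigwait(3). */
static void *signal_master(void *arg)
{
    (void)arg;
    for (;;) {
        int sig;
        if (sigwait(&handled, &sig) == 0)
            fprintf(stderr, "master: got signal %d\n", sig);
        /* ...decide here whether to interrupt a worker, queue an
           event on its message queue, etc. */
    }
    return NULL;
}

/* Ordinary worker: it inherits the blocked mask from its creator,
   so it is never picked to deliver SIGINT/SIGTERM. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;)
        pause();
    return NULL;
}

int main(void)
{
    pthread_t master, w;

    sigemptyset(&handled);
    sigaddset(&handled, SIGINT);
    sigaddset(&handled, SIGTERM);
    /* Block before creating threads: new threads inherit the mask. */
    pthread_sigmask(SIG_BLOCK, &handled, NULL);

    pthread_create(&w, NULL, worker, NULL);
    pthread_create(&master, NULL, signal_master, NULL);
    pthread_join(master, NULL);
    return 0;
}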
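Sketch 2, the EINTR model I mentioned for I/O: a handler installed without
SA_RESTART, with an interrupted read(2) treated as a resumable event rather
than a fatal error.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void on_signal(int sig)
{
    (void)sig;
    got_signal = 1;              /* async-signal-safe: just set a flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_signal;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;             /* deliberately *not* SA_RESTART */
    sigaction(SIGINT, &sa, NULL);

    char buf[256];
    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n < 0 && errno == EINTR) {
            /* This is where a Lisp runtime could raise a restartable
               condition and offer to resume the read. */
            fprintf(stderr, "read interrupted (flag=%d)\n", (int)got_signal);
            got_signal = 0;
            continue;
        }
        if (n <= 0)
            break;
        write(STDOUT_FILENO, buf, (size_t)n);
    }
    return 0;
}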
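Sketch 3, deferred cancellation as the "wait for signals" mechanism: the
worker only honours pthread_cancel(3) at its explicit pthread_testcancel(3)
calls (the enable/deferred settings are the POSIX defaults, spelled out here
for clarity).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void cleanup(void *arg)
{
    (void)arg;
    fprintf(stderr, "worker: cleaning up before exiting\n");
}

static void *worker(void *arg)
{
    (void)arg;
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    pthread_cleanup_push(cleanup, NULL);
    for (;;) {
        /* CPU-bound work containing no cancellation points of its own. */
        for (volatile long i = 0; i < 50000000; i++)
            ;
        pthread_testcancel();    /* the explicit "yield"-like point */
    }
    pthread_cleanup_pop(0);      /* never reached; pairs with the push */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);           /* the "master" asks the worker to stop */
    pthread_join(t, NULL);
    return 0;
}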
> The greatest problem would arise with interactive environments that have
> these arcane expectations. I am thinking for instance about Maxima, where
> users expect to have the possibility of interrupting computations, or Slime
> users.

I'm not sure I totally understand this scenario.

One thing which also makes ECL different from most implementations is the
ability to embed it into another C program as a configuration or extension
language. I think I remember seeing that an existing application-created
thread can be lent to ECL, but I've not tried it myself. And signals affect
the whole process (any thread not blocking the signal in question), of
course. However, since such C applications may already exist, and adding ECL
as a "scripting" engine should ideally require the least invasive changes
possible, perhaps more than one model might need to be supported to suit
those applications? Does the current signal handling code already have to
take the embedded/standalone mode into account? (A rough sketch of the
embedding scenario I mean is in the P.S. below.)

Thanks,

-- 
Matt
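P.S. Sketch 4, the embedding scenario. cl_boot()/cl_shutdown(), cl_eval()
and c_string_to_object() are from the manual as far as I recall;
ecl_import_current_thread()/ecl_release_current_thread() are only the names
I think I remember for attaching a foreign thread, so treat those as
guesses.

#include <ecl/ecl.h>

int main(int argc, char **argv)
{
    cl_boot(argc, argv);           /* initialize the embedded Lisp world */

    /* From the application's own main thread: */
    cl_object form = c_string_to_object("(format t \"hello from ECL~%\")");
    cl_eval(form);

    /* An existing, application-created thread would presumably have to do
       something like (names from memory, possibly wrong):
           ecl_import_current_thread(ECL_NIL, ECL_NIL);
           ... call into Lisp ...
           ecl_release_current_thread();
       before and after touching the Lisp world. */

    cl_shutdown();
    return 0;
}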