On Wed, Sep 7, 2022 at 4:36 PM John Cowan <[email protected]> wrote:
> On Wed, Sep 7, 2022 at 4:37 AM Marc Nieper-Wißkirchen
> <[email protected]> wrote:
>
>> The reason is that a continuation as we know it from Scheme is usually
>> delimited at the *prompt* of the REPL. In a strict sense, an undelimited
>> continuation does not return by definition. But this is not true for an
>> actual continuation returned by even the venerable call/cc. When it is
>> invoked, it will deliver a value at the REPL prompt.
>>
>> A "prompt" as defined by SRFI 226 just extends this scenario.
>
> This is an excellent explanation, and should be included. I bet lots of
> people have no idea why the word "prompt" was chosen, and adding this
> helps make it clear. (That said, the program may not have been running
> from a REPL, in which case the prompt is that of the shell, or perhaps in
> an embedded system a HALT instruction. Indeed, a truly undelimited
> continuation would be one that jumped directly to the destruction of the
> universe.)

:) I am including this explanation (stealing some of your wording).

>> For the moment, I have decided to leave "thread-terminate!" in. It won't
>> guarantee immediate termination, but only eventual termination (at some
>> safe point, dictated by the implementation).
>
> Which is to say that it guarantees nothing, because the thread may enter
> an infinite and uninterruptible loop of trying and failing. Note also
> that if the not-yet-terminated thread is hung, the thread that requested
> termination is also hung, by the specification of thread-terminate!.

I would add a guarantee for the cases where the to-be-terminated thread
tries to lock a mutex, tries to sleep, waits for a mutex, etc. And yes,
the requesting thread would have to wait. But in a useful implementation,
"eventually" should be before the heat or cold death of the universe.

> In any case, this is not the real evil of thread-terminate!: the evil is
> that it breaks all locks relevant to the terminated thread, and therefore
> leaves data structures in an inconsistent and potentially irrecoverable
> state. Outside a debugging context where we are willing to trade off
> stability for flexibility, the best implementation of `thread-terminate!`
> is probably `(lambda (x) (exit #f))`.

Yes, data structures whose locks are in an abandoned state are potentially
inconsistent, and if they are, the best one can do with them is to have
them GC'ed. But this does not necessarily mean that the global state of a
program becomes inconsistent, does it? Using atomic operations, one can
also program the data structures in such a way that they can recover from
an abandoned lock.

Do you think it would make sense to add a primitive "critical-section" or
"atomic" that can be used to execute code during which `thread-terminate!`
is postponed? Such protected code must not call certain primitives like
mutex operations, etc.
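To make this a bit more concrete, here is a rough sketch of the kind of
form I have in mind. The two primitives it relies on are hypothetical and
are not part of SRFI 18 or SRFI 226; they only name the capability an
implementation would have to expose:

  ;; Hypothetical primitives (NOT in SRFI 18 or SRFI 226):
  ;;   (disable-thread-termination!) -- postpone thread-terminate! requests
  ;;   (enable-thread-termination!)  -- re-enable them and act on any
  ;;                                    request that arrived in between
  (define-syntax critical-section
    (syntax-rules ()
      ((_ body ...)
       ;; dynamic-wind so termination is re-enabled even on a non-local exit
       (dynamic-wind
         (lambda () (disable-thread-termination!))
         (lambda () body ...)
         (lambda () (enable-thread-termination!))))))

  ;; Usage: update two related fields without a termination in between.
  (define cell (cons 'old-key 'old-value))
  (critical-section
    (set-car! cell 'new-key)
    (set-cdr! cell 'new-value))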
> In any case, my proposal was not to remove thread-terminate! from SRFI
> 18, but rather to omit it from this SRFI by not copying it in. It would
> always be possible to add it later if it is found to be truly
> indispensable.

You are about to at least convince me to put it into a separate library.

>> Your wording looks good. In any case, I forgot a "the".
>
> My native-speaker alarms are going off, but I need to see the whole
> proposed revision.

The consolidated changes are in my private repo.

>> SRFI 226 in itself does not make thread-safe programming easier.
>
> No. But channels and {go,co}routines, or in the alternative Concurrent
> ML (which is only historically related to the language ML), really do.
> You wind up with only two rules: don't touch global state in a goroutine,
> and don't create deadlock-prone networks (see
> <http://www.jpaulmorrison.com/fbp/deadlock.htm> for discussion).
> Fortunately, these facilities can be written on top of SRFI 18.

Yes, that's the idea. SRFI 18/226 are just the foundations.

>>> ** 4.4 call/cc is usually advertised as the hammer that can be used to
>>> implement all other control operations. In fact, this is not true for
>>> the undelimited continuations that the standard call/cc captures but
>>> for delimited continuations.
>
> More precisely, call/cc *plus state* can implement delimited
> continuations, but memory may fill up with unreclaimable garbage.

Yes, and you have to rewrite from scratch all procedures and syntax that
internally use call/cc (like "guard"). In any case, I meant the memory
leaks (see also the paper by Gasbichler/Sperber that I cited).

>>> What is an "entry"?
>>
>> John already gave the answer.
>
> It's also implicitly defined in RnRS.

>> No, I don't see how this can lead to confusion. It is even crucial
>> because a continuation is not attached to a thread but can be passed
>> around between different threads.
>
> My reading of SRFI 18 does not actually guarantee that threads share
> global state with other threads, in which case an implementation must be
> free to reject an attempt to invoke a continuation on an arbitrary
> different thread that might be running in a different Posix-level
> process. Maybe SRFI 18 has to be tightened, but I think that as it
> stands it still works for partially or completely resource-separated
> processes.

I think it guarantees shared global state because it talks about "the
store" in the section "Memory coherency and the lack of atomicity". It
also contains sentences like "The semantics are well defined even when a
continuation created by another thread is invoked." The mutex examples
also assume that the mutex state is shared across threads. In any case,
SRFI 226 assumes one running Scheme process.

>>> ** 5.4.3 (call-with-continuation-prompt thunk prompt-tag handler)
>>>
>>> That is a bit of an aesthetic nitpick, but why is the "prompt-tag" the
>>> second argument rather than the first?
>
> It is usual for a procedure argument to appear first, where that is
> practicable (but see below).
>
>>> This is just an opinion, but seeing a page of code that is tagged with
>>> "something", and then having to turn the page to see what the actual
>>> tag name is, is not very ergonomic.
>
> If it troubles you, use `flip` or `swap` from draft SRFI 235.
>
>> If the thunk aborts the current continuation (to the continuation
>> prompt in question), then, yes, there will be an infinite loop.
>
> As the doctor says when you complain "It hurts when I raise my arm": "So
> don't do that, then!" The normal mechanism of exception handling with
> `with-exception-handler` and `guard`, where the identity of the current
> exception handler is temporarily unwound one step but the rest of the
> dynamic environment remains intact, is by far the safer provision: any
> exception raised inside the handler is passed to the dynamically
> enclosing handler. Seizing direct control of the continuation isn't
> something you should undertake casually.
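For Vladimir's benefit, the behaviour John describes can be seen in a tiny
example in plain R7RS (no SRFI 226 needed): an exception raised inside a
guard clause is handled by the dynamically enclosing guard rather than by
the guard whose clause raised it, so there is no danger of a loop.

  (guard (e (#t (list 'outer-caught e)))
    (guard (e (#t (raise (list 'inner-handler-failed e))))
      (raise 'boom)))
  ;; => (outer-caught (inner-handler-failed boom))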
>>> Is it true that if I only ever rely on the default prompt tag, I do
>>> not need to import (srfi 226 control prompts)?
>>>
>>> But if I actually supply the second argument to `call/cc`, I have to
>>> import it?
>>
>> I think so.
>
> That seems wrong to me: the result of importing standard libraries
> should not be order-dependent. An implementation that supports this SRFI
> should always accept one or two arguments to call/cc; either that, or the
> name "call/cc" and its synonym "call-with-current-continuation" should be
> used for the one-argument variety only.

Ah, now I understand the original question. The problem has gone away
because I renamed the two-argument version following Marc's suggestion.
It is now called `call-with-non-composable-continuation`.

>>> Just for clarity, what would (call-with-continuation-barrier values)
>>> do?
>>
>> Deliver zero values to its continuation.
>
> It is worth saying so. What confuses one person may well confuse
> another.

The specific code "(call-with-continuation-barrier values)" is not in
itself very enlightening. Maybe you, Vladimir, can say what exactly
confuses you so that we can cook up a good example together.

>>> Why is `with-continuation-mark` a syntax, not a procedure accepting a
>>> thunk?
>>
>> Partly because of history. This comes from Racket.
>
> I think we should consider changing that. Racket has little respect for
> backward compatibility: why should we, especially if it provokes
> confusion?

Just because Racket seemingly has little respect is no good reason, IMHO.
In fact, w-c-m is much older than present-day Racket, which has been
diverging from Scheme. The idea of continuation marks goes back to the
paper "Modeling an Algebraic Stepper" by Clements/Flatt/Felleisen from
2001, and the name and syntax of w-c-m already appear there.

> In general a procedure provides more flexibility, syntax more
> convenience, but it's easy to layer syntax over a procedure. CL uses the
> `with-*` naming convention for syntax, but in Scheme it's almost always a
> procedure that takes a thunk, though in this case the thunk is often the
> last argument rather than the first. "A foolish consistency is the
> hobgoblin of little minds", but there is also consistency which is not
> foolish.

It is also easy to layer a procedure over syntax, and syntax makes it
easier for compilers (because there is one less indirection, and syntax is
always inlined, which may not happen with procedures imported from
pre-compiled libraries). The prefix `with-` does not seem to be used
consistently in Scheme. For example, there is `with-syntax` in R6RS, which
is a binding construct for pattern variables. My own SRFI 210 has
`with-values`, which can be traced back to a paper by Kent Dybvig (that
paper also makes a point about why syntax is sometimes preferable;
`with-values` is actually the syntactic version of `call-with-values`).
In any case, I don't see a convincing argument for breaking compatibility
with a form that has been around for more than 20 years in exactly the
same form. I agree that there is some irregularity, but this irregularity
pervades the whole Scheme language. Your SRFI 235 shows how a lot of
syntax could have been specified as procedures instead.
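There is also a semantic reason why w-c-m is syntax: its interaction with
tail position. A small illustration, assuming the Racket-style inspection
procedures `current-continuation-marks` and `continuation-mark-set->list`
are available:

  ;; With a tail call, each with-continuation-mark overwrites the mark on
  ;; the same continuation frame, so only the innermost mark survives.
  (define (marks/tail n)
    (with-continuation-mark 'depth n
      (if (zero? n)
          (continuation-mark-set->list (current-continuation-marks) 'depth)
          (marks/tail (- n 1)))))

  (marks/tail 3)  ;; => (0)

  ;; With a non-tail call, every nesting level keeps its own frame and
  ;; therefore its own mark.
  (define (marks/non-tail n)
    (with-continuation-mark 'depth n
      (if (zero? n)
          (continuation-mark-set->list (current-continuation-marks) 'depth)
          (let ((result (marks/non-tail (- n 1))))  ; not a tail call
            result))))

  (marks/non-tail 3)  ;; => (0 1 2 3)

A procedural version taking a thunk would not preserve this distinction,
at least not without further machinery.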
>> The use-case is, for example, implementing promises. In general, it is
>> needed when a thunk is to be evaluated in a continuation without any
>> continuation prompts, except for those that would be present if the
>> thunk were the entry point of a newly started thread.
>>
>> In itself, `call-in-initial-continuation` does not create a new thread
>> (or subprocess). It just replaces - for the evaluation of the thunk -
>> the current continuation by what was the initial continuation of the
>> current thread.
>
> That's confusing, then: I would expect it to use the initial
> continuation of the primordial thread. More emphatic wording may deal
> with this issue, or perhaps a switch to "thread-initial continuation".

The latter would also be confusing, as a promise also has this kind of
initial continuation. I will add a remark explaining this.

>>> ** 5.9 Evaluates to a promise that behaves as follows when forced in a
>>> continuation:
>>>
>>> What does it mean to be "forced in a continuation"?
>>> Everything in Scheme has a continuation, doesn't it?
>>
>> Yes. It is somewhat redundant. I added it because I refer to that
>> continuation later ("forcing continuation").
>
> I suggest "forced in a continuation k" and then just refer to k.

Done.

>> What would you suggest?
>>
>> "current-exception-handler-stack" or "current-exception-handler-list"
>> or just "exception-handler-stack"?
>
> I like "exception-handler-stack" best, or alternatively
> "all-exception-handlers".

Done.

>> "Aborted continuation" refers to the captured and recorded continuation
>> (which is then aborted).
>
> Best to say so explicitly.

Done.

>> I am sorry again for the long time delay.
>
> I am glad (and I am sure Arthur is glad too) to see this sign of further
> progress, as this work is part of the foundation of the Foundation(s).

Thanks. And, yes, I think this touches on one of the most fundamental
parts of Scheme (like the question of implicit promises/forcing that we
are discussing elsewhere).
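P.S. To make your point about building channels on top of SRFI 18 a bit
more concrete: a rough sketch of an unbounded channel, written only in
terms of SRFI 18 mutexes and condition variables, could look like the
following (the names `make-channel`, `channel-put!`, and `channel-get!`
are mine, not part of any SRFI):

  (define-record-type <channel>
    (%make-channel mutex condvar items)
    channel?
    (mutex channel-mutex)
    (condvar channel-condvar)
    (items channel-items set-channel-items!))

  (define (make-channel)
    (%make-channel (make-mutex) (make-condition-variable) '()))

  (define (channel-put! ch item)
    (mutex-lock! (channel-mutex ch))
    (set-channel-items! ch (append (channel-items ch) (list item)))
    (condition-variable-signal! (channel-condvar ch))
    (mutex-unlock! (channel-mutex ch)))

  (define (channel-get! ch)
    (mutex-lock! (channel-mutex ch))
    (let loop ()
      (if (null? (channel-items ch))
          (begin
            ;; Atomically release the mutex and block on the condition
            ;; variable, then re-acquire and re-check (the SRFI 18 idiom).
            (mutex-unlock! (channel-mutex ch) (channel-condvar ch))
            (mutex-lock! (channel-mutex ch))
            (loop))
          (let ((item (car (channel-items ch))))
            (set-channel-items! ch (cdr (channel-items ch)))
            (mutex-unlock! (channel-mutex ch))
            item))))

An unbuffered (rendezvous) channel and Concurrent-ML-style choice take
more bookkeeping, but, as you say, nothing beyond what SRFI 18 already
provides.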
