… I think even our notion of Stack-based Call/Return architecture suffers from 
the same kind of indeterminacy issue as just discussed. 

There is a finite limit to stack depth during execution, but we normally don’t 
run into it.

Just as I don’t normally run into blocking idle states from running out of 
available threads/cores, or fill up all of memory with an unbounded-depth 
FIFO.

> On Dec 28, 2025, at 05:57, David McClain <[email protected]> 
> wrote:
> 
> And the working solution that I have found for the blocking/non-blocking 
> issue is to have multiple cores and multiple Dispatch threads available.
> 
> This is not a provably working solution in all cases - it has the same kinds 
> of defects as unlimited-length FIFO message queues in the face of 
> Transactional Actor behavior. You can easily see that both of these solutions 
> have potentially unbounded worst-case behavior.
> 
> But in both cases the practical day-to-day depth is small, nowhere near the 
> unbounded worst case.
> 
> 
> 
> 
>> On Dec 28, 2025, at 05:51, David McClain <[email protected]> 
>> wrote:
>> 
>> … as for blocking/non-blocking code… 
>> 
>> How do you distinguish, as a caller, blocking from a long-running 
>> computation? And what would you do about it anyway? 
>> 
>> Even if our compilers were smart enough to detect possible blocking behavior 
>> in a called function, that still leaves my networking code prone to errors 
>> resulting from non-blocking but long-running subroutines.
>> 
>> 
>> 
>>> On Dec 28, 2025, at 05:46, David McClain <[email protected]> 
>>> wrote:
>>> 
>>> Ahem…
>>> 
>>> I believe that this “Color of your Code” issue is what drove the invention 
>>> of Async/Await. 
>>> 
>>> Frankly, I find that formalism horrid, as it divorces the actual runtime 
>>> behavior from the code, which itself is written in a linear style.
>>> 
>>> My own solution is to fall back to the simplest possible Async model, which 
>>> is Conventional Hewitt Actors with a single shared communal event FIFO 
>>> queue to hold messages.
>>> 
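>>> A minimal sketch of that model - an actor is just a closure, and the only 
>>> machinery is one shared FIFO (MAKE-QUEUE, ENQUEUE, and DEQUEUE are assumed 
>>> thread-safe helpers, not code from my actual system):
>>> 
>>>   ;; SEND drops an event on the one communal FIFO; a dispatch loop
>>>   ;; pops events and funcalls the target behavior.
>>>   (defvar *events* (make-queue))        ; assumed thread-safe FIFO
>>> 
>>>   (defun send (actor &rest msg)
>>>     (enqueue (cons actor msg) *events*))
>>> 
>>>   (defun dispatch-loop ()
>>>     (loop for (actor . msg) = (dequeue *events*)  ; blocks until an event
>>>           do (apply actor msg)))
>>> 
>>>   ;; Example behavior: local state lives in the closure.
>>>   (defun make-counter ()
>>>     (let ((n 0))
>>>       (lambda (delta) (incf n delta))))
>>> 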
>>> But that does indeed offer a different color from our more typical 
>>> Call/Return architecture. 
>>> 
>>> My solution for the conundrum has been to use Call/Return where it shines - 
>>> the innards of math libraries, for example - and then use Async coding to 
>>> thread together LEGO-block subsystems that need coordination, e.g., CAPI 
>>> GUI code with computation snippets.
>>> 
>>> Maybe I incorrectly find Async/Await a disgusting pretense?
>>> 
>>> - DM
>>> 
>>>> On Dec 28, 2025, at 05:17, [email protected] wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> Yes, mailboxes get you a long way. However, some nuances got a bit lost in 
>>>> this thread (and I apologise that I contributed to this).
>>>> 
>>>> Something that is very relevant to understand in the Go context: Go 
>>>> channels are not based on pthreads, but on Go’s own tasking model (which 
>>>> is of course in turn based on pthreads, but that’s not relevant here). 
>>>> Go’s tasking model is an alternative to earlier async programming models, 
>>>> where async code and sync code had to be written in different programming 
>>>> styles - which made such code very difficult to write, read and refactor. 
>>>> (I believe 
>>>> https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ 
>>>> is the text that made that insight popular.)
>>>> 
>>>> In Go, async code looks exactly the same as sync code, and you don’t even 
>>>> have to think about that distinction anymore. This is achieved by ensuring 
>>>> that all potentially blocking operations are effectively not blocking, but 
>>>> instead play nicely with the work-stealing scheduler that handles Go’s 
>>>> tasking model. So, for example, if a task tries to take a lock on a mutex, 
>>>> and that is currently not possible, the task gets swapped out and replaced 
>>>> by a different task that can continue its execution. This integration 
>>>> exists for all kinds of potentially blocking operations, including 
>>>> channels.
>>>> 
>>>> With pthreads, a lock / mailbox / etc. that blocks can have the 
>>>> corresponding pthread replaced by another one, but that is much more 
>>>> expensive. Go’s tasks are handled completely in user space, not in kernel 
>>>> space. (And work stealing gives a number of very beneficial guarantees as 
>>>> well.)
>>>> 
>>>> This nuance may or may not matter in your application, but it’s worth 
>>>> pointing out nonetheless.
>>>> 
>>>> It would be really nice if Common Lisp had this as well, in place of a 
>>>> pthreads-based model, because it would solve a lot of issues in a very 
>>>> elegant way...
>>>> 
>>>> Pascal 
>>>> 
>>>>> On 27 Dec 2025, at 18:45, David McClain <[email protected]> 
>>>>> wrote:
>>>>> 
>>>>> Interesting about SBCL CAS. 
>>>>> 
>>>>> I do not use CAS directly in my mailboxes, but rely on POSIX for them - 
>>>>> both LW and SBCL.
>>>>> 
>>>>> CAS is used only for mutation of the indirection pointer inside the 
>>>>> 1-slot Actor structs.
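>>>>> 
>>>>> In SBCL terms the idea is roughly this (a sketch only; LispWorks spells 
>>>>> the primitive SYSTEM:COMPARE-AND-SWAP):
>>>>> 
>>>>>   ;; The whole Actor is one slot of indirection to a behavior closure.
>>>>>   (defstruct actor beh)
>>>>> 
>>>>>   ;; BECOME = CAS the slot from the behavior we ran to the new one.
>>>>>   ;; COMPARE-AND-SWAP returns the old value, so EQ means we won.
>>>>>   (defun try-become (actor old-beh new-beh)
>>>>>     (eq old-beh
>>>>>         (sb-ext:compare-and-swap (actor-beh actor) old-beh new-beh)))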
>>>>> 
>>>>> Some implementations allow only one thread inside an Actor behavior at a 
>>>>> time. I impose no such restriction in my implementation, so I gain true 
>>>>> parallel concurrency on multi-core architectures. Parallelism is 
>>>>> automatic, and lock-free, but requires careful purely functional coding.
>>>>> 
>>>>> Mailboxes in my system are of indefinite length. Placing restrictions on 
>>>>> the allowable length of a mailbox queue means that you cannot offer 
>>>>> Transactional behavior - a full mailbox would make the commit of staged 
>>>>> SENDs block or fail partway through. But in practice, I rarely see more 
>>>>> than 4 threads running at once. I use a Dispatch Pool of 8 threads 
>>>>> against my 8 CPU Cores. Of course you could make a Fork-Bomb that 
>>>>> exhausts system resources.
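>>>>> 
>>>>> Standing up such a pool takes only a few lines with Bordeaux threads - a 
>>>>> sketch, assuming a DISPATCH-LOOP function that drains the shared mailbox:
>>>>> 
>>>>>   ;; One Dispatch thread per core, each forever popping the communal
>>>>>   ;; event queue and running actor behaviors.
>>>>>   (defvar *dispatchers*
>>>>>     (loop repeat 8
>>>>>           collect (bt:make-thread #'dispatch-loop :name "dispatcher")))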
>>>>> 
>>>>>> On Dec 27, 2025, at 10:18, Manfred Bergmann <[email protected]> 
>>>>>> wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> Am 27.12.2025 um 18:00 schrieb David McClain 
>>>>>>> <[email protected]>:
>>>>>>> 
>>>>>>>> I've reached the conclusion that if you have first-class functions and 
>>>>>>>> the ability to create FIFO queue classes, you have everything you 
>>>>>>>> need. You don't need Go channels, or operating system threads, etc. 
>>>>>>>> Those are just inefficient, Greenspunian implementations of a simpler 
>>>>>>>> idea. In fact, you can draw diagrams of Software LEGO parts, as 
>>>>>>>> mentioned by dbm, just with draw.io and OhmJS and a fairly flexible 
>>>>>>>> PL. [I'd be happy to elaborate further, but wonder if this would be 
>>>>>>>> appropriate on this mailing list]
>>>>>>> 
>>>>>>> 
>>>>>>> This is essentially what the Transactional Hewitt Actors really are. We 
>>>>>>> use “Dispatch” threads to extract messages (function args and function 
>>>>>>> address) from a communal mailbox queue. The Dispatchers use a CAS 
>>>>>>> protocol among themselves to effect staged BECOME and message SENDS, 
>>>>>>> with automatic retry on losing CAS. 
>>>>>>> 
>>>>>>> Messages and BECOME are staged for commit at successful exit of the 
>>>>>>> functions, or simply tossed if the function errors out - making an 
>>>>>>> unsuccessful call into an effective non-delivery of a message. 
>>>>>>> 
>>>>>>> Message originators are generally unknown to the Actors, unless you use 
>>>>>>> a convention of providing a continuation Actor back to the sender, 
>>>>>>> embedded in the messages.
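>>>>>>> 
>>>>>>> For example (a sketch), the customer Actor simply rides along in the 
>>>>>>> message, and the service knows nothing else about its caller:
>>>>>>> 
>>>>>>>   ;; CUST is a continuation Actor embedded in the message.
>>>>>>>   (defun make-adder ()
>>>>>>>     (lambda (cust a b)
>>>>>>>       (send cust (+ a b))))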
>>>>>>> 
>>>>>>> An Actor is nothing more than an indirection pointer to a functional 
>>>>>>> closure - the closure contains code and local state data. The 
>>>>>>> indirection allows BECOME to mutate the behavior of an Actor without 
>>>>>>> altering its identity to the outside world.
>>>>>>> 
>>>>>>> But it all comes down to FIFO Queues and Functional Closures. The 
>>>>>>> Dispatchers and Transactional behavior are simply an organizing 
>>>>>>> principle.
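>>>>>>> 
>>>>>>> In skeletal form, one dispatch step looks something like this - a 
>>>>>>> sketch only; the lock-free FIFO primitives MAKE-QUEUE and ENQUEUE are 
>>>>>>> assumed, and the names are illustrative rather than my production code:
>>>>>>> 
>>>>>>>   (defstruct actor beh)            ; one-slot indirection to a closure
>>>>>>> 
>>>>>>>   (defvar *event-queue* (make-queue)) ; shared communal FIFO (assumed)
>>>>>>>   (defvar *staged-sends*)          ; sends staged during one event
>>>>>>>   (defvar *staged-become*)         ; pending replacement behavior
>>>>>>> 
>>>>>>>   (defun send (actor &rest msg)    ; staged, not delivered immediately
>>>>>>>     (push (cons actor msg) *staged-sends*))
>>>>>>> 
>>>>>>>   (defun become (new-beh)
>>>>>>>     (setf *staged-become* new-beh))
>>>>>>> 
>>>>>>>   (defun dispatch-event (actor msg)
>>>>>>>     (loop
>>>>>>>       (let ((beh (actor-beh actor))
>>>>>>>             (*staged-sends* '())
>>>>>>>             (*staged-become* nil))
>>>>>>>         (handler-case (apply beh msg)
>>>>>>>           (error () (return)))     ; error => effective non-delivery
>>>>>>>         ;; Commit: CAS the staged BECOME into the one slot.  A lost
>>>>>>>         ;; CAS means another Dispatcher beat us; retry the event
>>>>>>>         ;; against the new behavior.
>>>>>>>         (when (eq beh (sb-ext:compare-and-swap
>>>>>>>                        (actor-beh actor) beh (or *staged-become* beh)))
>>>>>>>           (dolist (evt (nreverse *staged-sends*))
>>>>>>>             (enqueue evt *event-queue*))
>>>>>>>           (return)))))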
>>>>>> 
>>>>>> 
>>>>>> Yeah, that’s exactly what Sento Actors 
>>>>>> (https://github.com/mdbergmann/cl-gserver/) are also about.
>>>>>> Additionally, one may notice that Sento has a nice async API called 
>>>>>> ’Tasks’ that’s designed after the Elixir example 
>>>>>> (https://mdbergmann.github.io/cl-gserver/index.html#SENTO.TASKS:@TASKS%20MGL-PAX:SECTION).
>>>>>> On another note, Sento uses locking with Bordeaux threads (for the 
>>>>>> message box) rather than CAS, because the CAS implementations I tried 
>>>>>> (https://github.com/cosmos72/stmx and a CAS-based mailbox 
>>>>>> implementation in SBCL) were not satisfactory. The SBCL CAS mailbox 
>>>>>> was extremely fast but had high idle CPU usage, so I dropped it.
>>>>>> 
>>>>>> 
>>>>>> Cheers
>>>>> 
>>>> 
>>> 
>> 
> 
