That is interesting, Pascal.  Do you perhaps have a pointer to a simple
description of the Go threading model?
(Asking is much faster than searching or, given the Zeitgeist, trusting
an AI.)

All the best and Happy New Year

MA

On Sun, Dec 28, 2025 at 3:01 PM <[email protected]> wrote:

> Hi,
>
> Yes, mailboxes get you a long way. However, some nuances got a bit lost in
> this thread (and I apologise that I contributed to this).
>
> Something that is very relevant to understand in the Go context: Go
> channels are not based on pthreads, but on Go’s own tasking model (which
> is, of course, in turn built on pthreads, but that’s not relevant here).
> Go’s tasking model is an alternative to previous async programming
> models, where async code and sync code had to be written in different
> programming styles - that made such code very difficult to write, read,
> and refactor. (I believe
> https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ is
> the text that made that insight popular.)
>
> In Go, async code looks exactly the same as sync code, and you don’t even
> have to think about that distinction anymore. This is achieved by ensuring
> that all potentially blocking operations are effectively not blocking, but
> instead play nicely with the work-stealing scheduler that handles Go’s
> tasking model. So, for example, if a task tries to take a lock on a mutex,
> and that is currently not possible, the task gets swapped out and replaced
> by a different task that can continue its execution. This integration
> exists for all kinds of potentially blocking operations, including channels.
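>
> A minimal sketch (my own example, not from any official Go material) of
> what that looks like in practice: every send and receive below can block,
> but only the goroutine is parked - the OS thread stays busy running other
> goroutines:
>
>     package main
>
>     import "fmt"
>
>     func square(in <-chan int, out chan<- int) {
>         // Receiving and sending here "block", but only this goroutine
>         // is parked; the scheduler keeps the OS thread doing other work.
>         for n := range in {
>             out <- n * n
>         }
>         close(out)
>     }
>
>     func main() {
>         in := make(chan int)
>         out := make(chan int)
>         go square(in, out)
>         go func() {
>             for i := 1; i <= 3; i++ {
>                 in <- i
>             }
>             close(in)
>         }()
>         for sq := range out {
>             fmt.Println(sq) // prints 1, 4, 9
>         }
>     }
>
> Note there is no async/await, callback, or promise anywhere - the
> "coloring" problem from the article above simply does not arise.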
>
> With pthreads, a lock / mailbox / etc. that blocks can likewise have the
> corresponding pthread swapped out and replaced by another one, but that
> is much more expensive: Go’s tasks are handled entirely in user space,
> not in kernel space. (And work stealing gives a number of very beneficial
> guarantees as well.)
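>
> To make the cost difference concrete, a throwaway sketch (the numbers are
> only illustrative): spawning 100,000 goroutines is routine, while 100,000
> pthreads would bring most systems to their knees:
>
>     package main
>
>     import (
>         "fmt"
>         "sync"
>     )
>
>     func main() {
>         var wg sync.WaitGroup
>         results := make([]int, 100000)
>         for i := range results {
>             wg.Add(1)
>             go func(i int) { // a goroutine starts with a few KB of user-space stack
>                 defer wg.Done()
>                 results[i] = i * 2
>             }(i)
>         }
>         wg.Wait()
>         fmt.Println(results[99999]) // 199998
>     }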
>
> This nuance may or may not matter in your application, but it’s worth
> pointing out nonetheless.
>
> It would be really nice if Common Lisp had this as well, in place of a
> pthreads-based model, because it would solve a lot of issues in a very
> elegant way...
>
> Pascal
>
> On 27 Dec 2025, at 18:45, David McClain <[email protected]>
> wrote:
>
> Interesting about SBCL CAS.
>
> I do not use CAS directly in my mailboxes, but rely on POSIX for them -
> on both LW and SBCL.
>
> CAS is used only for mutation of the indirection pointer inside the 1-slot
> Actor structs.
>
> Some implementations allow only one thread inside an Actor behavior at a
> time. I have no such restriction in my implementations, so I gain true
> parallel concurrency on multi-core architectures. Parallelism is automatic
> and lock-free, but requires careful, purely functional coding.
>
> Mailboxes in my system are of indefinite length. Placing restrictions on
> the allowable length of a mailbox queue means that you cannot offer
> Transactional behavior. But in practice, I rarely see more than 4 threads
> running at once. I use a Dispatch Pool of 8 threads against my 8 CPU Cores.
> Of course you could make a Fork-Bomb that exhausts system resources.
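>
> In Go terms, the shape is roughly this (a sketch of the idea, not the
> actual Lisp code): an unbounded mailbox is a locked FIFO plus a condition
> variable, drained by a fixed pool of dispatchers:
>
>     package main
>
>     import (
>         "fmt"
>         "sync"
>     )
>
>     // Message pairs a function with its argument.
>     type Message struct {
>         Fn  func(arg any)
>         Arg any
>     }
>
>     // Mailbox is an unbounded FIFO: enqueueing never fails or blocks.
>     type Mailbox struct {
>         mu    sync.Mutex
>         cond  *sync.Cond
>         queue []Message
>     }
>
>     func NewMailbox() *Mailbox {
>         m := &Mailbox{}
>         m.cond = sync.NewCond(&m.mu)
>         return m
>     }
>
>     func (m *Mailbox) Send(msg Message) {
>         m.mu.Lock()
>         m.queue = append(m.queue, msg) // no length limit
>         m.mu.Unlock()
>         m.cond.Signal()
>     }
>
>     func (m *Mailbox) Receive() Message {
>         m.mu.Lock()
>         defer m.mu.Unlock()
>         for len(m.queue) == 0 {
>             m.cond.Wait() // dispatcher blocks until work arrives
>         }
>         msg := m.queue[0]
>         m.queue = m.queue[1:]
>         return msg
>     }
>
>     func main() {
>         mbox := NewMailbox()
>         var wg sync.WaitGroup
>         // A dispatch pool sized to the machine, e.g. 8 workers for 8 cores.
>         for i := 0; i < 8; i++ {
>             go func() {
>                 for {
>                     msg := mbox.Receive()
>                     msg.Fn(msg.Arg)
>                     wg.Done()
>                 }
>             }()
>         }
>         for i := 0; i < 4; i++ {
>             wg.Add(1)
>             mbox.Send(Message{Fn: func(a any) { fmt.Println("got", a) }, Arg: i})
>         }
>         wg.Wait()
>     }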
>
> On Dec 27, 2025, at 10:18, Manfred Bergmann <[email protected]>
> wrote:
>
>
>
> On 27.12.2025 at 18:00, David McClain <[email protected]> wrote:
>
> I've reached the conclusion that if you have first-class functions and the
> ability to create FIFO queue classes, you have everything you need. You
> don't need Go channels, or operating system threads, etc. Those are just
> inefficient, Greenspunian implementations of a simpler idea. In fact, you
> can draw diagrams of Software LEGO parts, as mentioned by dbm, just with
> draw.io and OhmJS and a fairly flexible PL. [I'd be happy to elaborate
> further, but wonder if this would be appropriate on this mailing list]
>
>
>
> This is essentially what the Transactional Hewitt Actors really are. We
> use “Dispatch” threads to extract messages (function args and function
> address) from a community mailbox queue. The Dispatchers use a CAS protocol
> among themselves to effect staged BECOME and message SENDS, with automatic
> retry on losing CAS.
>
> Messages and BECOME are staged for commit at successful exit of the
> functions, or simply tossed if the function errors out - making an
> unsuccessful call into an effective non-delivery of a message.
>
> Message originators are generally unknown to the Actors, unless you use a
> convention of providing a continuation Actor back to the sender, embedded
> in the messages.
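>
> For example, a message type might carry its continuation explicitly (a
> sketch; the plain closure here stands in for a continuation Actor):
>
>     package main
>
>     import "fmt"
>
>     // Ask carries a reply continuation; the receiver learns nothing
>     // about the sender beyond this one slot.
>     type Ask struct {
>         N       int
>         ReplyTo func(result int)
>     }
>
>     func main() {
>         handle := func(m Ask) { m.ReplyTo(m.N * 2) } // the "behavior"
>         done := make(chan int, 1)
>         handle(Ask{N: 21, ReplyTo: func(r int) { done <- r }})
>         fmt.Println(<-done) // 42
>     }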
>
> An Actor is nothing more than an indirection pointer to a functional
> closure - the closure contains code and local state data. The indirection
> allows BECOME to mutate the behavior of an Actor without altering its
> identity to the outside world.
>
> But it all comes down to FIFO Queues and Functional Closures. The
> Dispatchers and Transactional behavior are simply an organizing principle.
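>
> A rough Go transcription of the whole arrangement (hypothetical names,
> with Go closures standing in for the Lisp ones): the Actor is one atomic
> slot, BECOME and SEND are staged in a per-call context, and the commit is
> a CAS with automatic retry:
>
>     package main
>
>     import (
>         "fmt"
>         "sync/atomic"
>     )
>
>     type Behavior func(ctx *Context, msg any)
>
>     // Actor is nothing but an indirection pointer to a closure; BECOME
>     // swaps the pointer without changing the Actor's outward identity.
>     type Actor struct {
>         beh atomic.Pointer[Behavior]
>     }
>
>     // Context stages the effects of one behavior invocation.
>     type Context struct {
>         newBeh *Behavior // staged BECOME, nil if none
>         sends  []func()  // staged SENDs
>     }
>
>     func (c *Context) Become(b Behavior) { c.newBeh = &b }
>     func (c *Context) Send(f func())     { c.sends = append(c.sends, f) }
>
>     // dispatch runs one message transactionally: effects are committed
>     // only if the behavior returns normally and the CAS succeeds; an
>     // error tosses the staged effects, and a lost CAS retries the call.
>     func dispatch(a *Actor, msg any) {
>         for {
>             old := a.beh.Load()
>             ctx := &Context{}
>             ok := func() (ok bool) {
>                 defer func() {
>                     if recover() != nil {
>                         ok = false // errored out: effective non-delivery
>                     }
>                 }()
>                 (*old)(ctx, msg)
>                 return true
>             }()
>             if !ok {
>                 return
>             }
>             next := old
>             if ctx.newBeh != nil {
>                 next = ctx.newBeh
>             }
>             if a.beh.CompareAndSwap(old, next) {
>                 for _, send := range ctx.sends {
>                     send() // commit staged SENDs
>                 }
>                 return
>             }
>             // lost the CAS to another Dispatcher: automatic retry
>         }
>     }
>
>     // makeCounter shows the purely functional style: state lives in the
>     // closure, and BECOME installs a fresh closure instead of mutating.
>     func makeCounter(n int) Behavior {
>         return func(ctx *Context, msg any) {
>             fmt.Println("count =", n)
>             ctx.Become(makeCounter(n + 1))
>         }
>     }
>
>     func main() {
>         a := &Actor{}
>         b := makeCounter(0)
>         a.beh.Store(&b)
>         dispatch(a, "tick") // count = 0
>         dispatch(a, "tick") // count = 1
>     }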
>
>
>
> Yeah, that’s exactly what Sento Actors (
> https://github.com/mdbergmann/cl-gserver/) are also about.
> Additionally, one may notice that Sento has a nice async API called
> ’Tasks’ that is designed after the Elixir example (
> https://mdbergmann.github.io/cl-gserver/index.html#SENTO.TASKS:@TASKS%20MGL-PAX:SECTION
> ).
> On another note, Sento uses locking with Bordeaux threads (for the
> message box) rather than CAS, because the CAS implementations I tried (
> https://github.com/cosmos72/stmx and a CAS-based mailbox implementation
> in SBCL) were not satisfactory. The SBCL CAS mailbox was extremely fast
> but had high idle CPU usage, so I dropped it.
>
>
> Cheers
>
>
>
>

-- 
Marco Antoniotti, Professor, Director         tel. +39 - 02 64 48 79 01
DISCo, University of Milan-Bicocca U14 2043   http://dcb.disco.unimib.it
Viale Sarca 336
I-20126 Milan (MI) ITALY

REGAINS: https://regains.disco.unimib.it/
