Hi,

On Mon, Nov 24, 2025 at 2:23 PM Edmond Dantes <[email protected]> wrote:

> A programmer working with coroutines will not be able to use static or
> global variables to pass state.
>
> They have only two ways to do it:
> 1. Function parameters
> 2. Variables captured in a closure via `use`
>
> This makes it impossible to accidentally shoot yourself in the foot. A
> developer can still do something silly by explicitly passing objects
> between coroutines, but now they are doing it consciously.
>
> Although even here we can go further and create something similar to
> ownership transfer of an object. In other words, an explicit semantics
> that clearly specifies what to do with the object:
>
> 1. Should the object be **moved** between coroutines?
>

This could be quite problematic for internal objects that might depend on
globals that will switch under them.
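
To make that concern concrete, here is a small sketch (all names invented) of
the kind of hidden coupling I mean: the object itself can be handed over, but
it still reaches back into a static that belongs to the context it was created
in.

<?php

// The object can be "moved" to another coroutine or thread, but the static
// catalog it lazily populates stays behind in the original context, so the
// moved object silently observes different global state.
final class Translator
{
    /** @var array<string, string>|null */
    private static ?array $catalog = null;      // effectively context-local

    public function translate(string $key): string
    {
        self::$catalog ??= ['hello' => 'bonjour'];   // loaded for *this* context
        return self::$catalog[$key] ?? $key;
    }
}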


> 2. Should the object be **cloned**?
>

Something like this should be done explicitly through channels; otherwise it
would be semantically quite strange for users.
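
As a rough illustration of what "explicitly through channels" could look like
in userland (the Channel API below is invented, and clone is only a shallow
copy), the point is that the copy happens visibly at the channel boundary:

<?php

// Invented, minimal channel where sending an object hands over a copy.
// A real implementation would need a deep copy or serialization; the
// shallow clone only marks where the explicit copy would happen.
final class CloningChannel
{
    /** @var array<int, mixed> */
    private array $queue = [];

    public function send(mixed $value): void
    {
        // The clone is what travels; the original stays with the sender.
        $this->queue[] = is_object($value) ? clone $value : $value;
    }

    public function receive(): mixed
    {
        return array_shift($this->queue);
    }
}

$payload = new stdClass();
$payload->counter = 1;

$chan = new CloningChannel();
$chan->send($payload);

$payload->counter = 99;                  // mutating the original after sending...
var_dump($chan->receive()->counter);     // ...still prints int(1)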


> 3. Is the object a special **shared** object that can be safely passed
> around?
>
>
I think it should be the only allowed case.


> Such semantics makes coroutines thread-safe.
>
> In other words, by designing a special parameter-passing semantics for
> coroutines, we can create a perfect specification for coroutines that
> can be run both in another thread and in the current one.
> At the same time, the memory model becomes equivalent to Erlang’s
> model, where there are no shared objects except for specific special
> ones.
>
> Such a change requires adding special **Shared** objects to the
> language, which can be safely used across different threads. And
> developers will no longer be able to use reference-based variables,
> except within something like `SharedBox<T>`.
> At the same time, implementing multi-threading support is not required
> immediately. But once this capability is added, the language semantics
> will already be fully prepared for it.
>
> So pros/cons:
>
> 1. Coroutines become safe execution containers that cannot
> accidentally damage shared memory.
> 2. Old code requires no changes.
> 3. New coroutines cannot harm old code. And if parameter-passing
> semantics are introduced, they won’t be able to harm it at all. PHP
> will forbid a programmer from even trying to pass memory to another
> coroutine just like that.
> 4. The language semantics make it possible to describe fully
> thread-safe code, which can be added in the future at any time without
> major changes.
>
> **The cost:**
> 1. A developer must write a bit more code to work with shared objects
> between coroutines.
> 2. In such a memory model, you cannot obtain the result of a
> coroutine’s execution twice.
> 3. And this changes the philosophy of awaiting a coroutine: only one
> coroutine can wait for another. However, this limitation has many
> positive sides, because it greatly simplifies debugging.
>
> Such a memory model is quite modern and yet not new. It is essentially
> supported by Go, Erlang, and other next-generation languages.
> Therefore, if PHP’s strategy is to eventually become a language with
> parallelism while guaranteeing coroutine safety with respect to shared
> memory, then this is the right path.
> But I want to warn once again about the price that must be paid from
> the developer’s point of view. A developer will no longer be able to
> use reference variables or pass objects between coroutines.
>
>
If we could make this work, it could be a much better result, as it could open
the door to true parallelisation. It will add more limitations in terms of
sharing code, but I think it would be worth it.
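
To sketch how the `SharedBox<T>` idea above might look from userland (PHP has
no generics and no agreed shared-memory API, so the names and the locking here
are purely illustrative), the key property is that the boxed value is only
reachable through a critical section and never leaks out as a bare reference:

<?php

// Illustrative only: in a threaded build with() would take a real mutex;
// the flag merely documents the intended contract.
final class SharedBox
{
    private bool $inUse = false;

    public function __construct(private mixed $value) {}

    /** Run $fn with exclusive access; it receives the current value and
     *  returns the new one, so the raw value never escapes by reference. */
    public function with(callable $fn): void
    {
        if ($this->inUse) {
            throw new RuntimeException('SharedBox accessed re-entrantly');
        }
        $this->inUse = true;
        try {
            $this->value = $fn($this->value);
        } finally {
            $this->inUse = false;
        }
    }
}

$counter = new SharedBox(0);
$counter->with(fn (int $v): int => $v + 1);    // the only way to touch the value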

I think the first thing that would be good to look at for that is making the
needed changes to TSRM, which would already be useful for FrankenPHP:
https://github.com/php/frankenphp/discussions/1980 .

The context switches will likely become more expensive, but they should still
be cheaper than long IO. It is obviously not ideal if IO is available and a
switch is not needed, though, so the scheduler might need to be a bit smarter
and try to reduce switches.

I started writing an IO ring library that is a thin wrapper for liburing on
Linux, with a compatibility layer for other platforms using IO threads:
https://github.com/libior/ior . It is still limited and supports only basic
ops, but I plan to add more (including epoll-based ops). It will also need
Windows support. The advantage of the ring buffer is that it can significantly
reduce syscalls and check multiple completions in one go (it already has a
queue for them), so it could more easily reduce the number of switches (e.g. it
can immediately tell whether IO is available by going through all the
completion events). I think that should give the scheduler a bit more
flexibility. Anyway, it's more of a detail at this stage, but maybe something
to think about later.
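
Just to illustrate the batching point, here is a toy model (Fibers stand in for
coroutines and a plain array stands in for the completion ring; none of this is
the actual ior API): draining all available completions in one pass lets a
single scheduler wakeup resume several coroutines instead of paying one switch
per event.

<?php

$waiting = [];                                   // opId => suspended Fiber

$spawn = function (int $opId) use (&$waiting): void {
    $fiber = new Fiber(function () use ($opId): void {
        $result = Fiber::suspend();              // park until the op completes
        echo "op $opId finished: $result\n";
    });
    $fiber->start();
    $waiting[$opId] = $fiber;
};

$spawn(1);
$spawn(2);
$spawn(3);

// Pretend the ring delivered several completions at once.
$completions = [1 => 'ok', 3 => 'ok', 2 => 'timeout'];

// One pass over the batch resumes every ready coroutine,
// instead of waking the scheduler once per event.
foreach ($completions as $opId => $result) {
    $waiting[$opId]->resume($result);
    unset($waiting[$opId]);
}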

Kind regards,

Jakub
