On Mon, Jun 2, 2025, at 3:28 PM, Rowan Tommins [IMSoP] wrote:
> On 02/06/2025 17:57, Larry Garfield wrote:
>> Well, now you're talking about something with a totally separate compile
>> step, which is not what Michael seemed to be describing at all. But it
>> seems like that would be necessary.
>
> There's definitely some crossed wires somewhere. I deliberately left
> the mechanics vague in that last message, and certainly didn't mention
> any specific compiler steps. I'm a bit lost which part you think is
> "not what Michael seemed to be describing".
>
> Picking completely at random, a file in Monolog has these lines in:
>
>     namespace Monolog\Handler;
>     ...
>     use Monolog\Utils;
>     ...
>     class StreamHandler extends AbstractProcessingHandler {
>     ...
>         $this->url = Utils::canonicalizePath($stream);
>
> My understanding is that our goal is to allow two slightly different
> copies of that file to be included at the same time. As far as I know,
> there have been two descriptions of how that would work:
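[Editor's illustration, not part of the original thread: a minimal sketch of the "two copies at once" goal described above. Today this only works if the two copies end up under distinct fully-qualified names, since declaring `Monolog\Handler\StreamHandler` twice is a fatal error; the `V1\`/`V2\` prefixes below are invented for illustration.]

```php
<?php
// Hypothetical: two "versions" of the same class coexisting via
// distinct namespace prefixes. Without some prefixing mechanism,
// including both copies fatals with "Cannot declare class".
namespace V1\Monolog\Handler;

class StreamHandler
{
    public function version(): string { return '1.x'; }
}

namespace V2\Monolog\Handler;

class StreamHandler
{
    public function version(): string { return '2.x'; }
}

namespace Demo;

$a = new \V1\Monolog\Handler\StreamHandler();
$b = new \V2\Monolog\Handler\StreamHandler();
echo $a->version(), ' ', $b->version(), PHP_EOL; // 1.x 2.x
```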
This is what I was getting at. As I understand what Michael's examples have described, it allows pulling a different version of one or more files into some kind of container/special namespace/thingiewhatsit, at runtime.

I fundamentally do not believe pulling arbitrary files into such a structure is wise, possible, or will achieve anything resembling the desired result, because *basically no application or library is single-file anymore*. If you try to side-load an interface out of Monolog, but not a class that implements that interface, now what? I don't even know what to expect to happen, other than "it will not work, at all."

The *only* way I can see for this to work is to do as you described: Yoink *everything* in Monolog, Monolog's dependencies, and their dependencies, etc. into a container-ish thing that gets accessed in a different way than normal. That doesn't force Monolog to change; it just means that the plugin/module/library using Monolog has to do the leg-work to set up that package in some way before getting to runtime.

It can't just say "grab these 15 files but not these 10 and pull them into a container," since it doesn't know which of those 15 files depend on those 10 files, or vice versa, or what other 20 files one of those 15 depends on that's in a totally different package. That's like trying to containerize pdo.so and xml.so, but not PHP-FPM or pdo_mysql.so. Technically Docker will let you; there's just no way it can lead to a working system.

So if it would require globbing together a long chain of dependencies into a thing that you can reliably side-load at once, and expect that set of packages/versions to work together, then something that builds on Phar seems like the natural way to do that. Essentially using Phar files as the "container image." You're right, that may not be how Phar works today.
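[Editor's illustration, not part of the original thread: a sketch of the interface problem described above. If a side-loaded copy of Monolog ended up under its own prefix while the host kept the original, the two interfaces are unrelated types to the engine, so an object from one side fails type checks on the other. The `Host\`/`Container\`/`Plugin\` names are invented.]

```php
<?php
// Hypothetical: the "same" interface loaded twice under different
// prefixes. PHP treats them as two unrelated types.
namespace Host\Monolog;

interface HandlerInterface
{
    public function handle(array $record): bool;
}

namespace Container\Monolog;

interface HandlerInterface
{
    public function handle(array $record): bool;
}

namespace Plugin;

// The plugin's handler implements the *container's* copy...
class MyHandler implements \Container\Monolog\HandlerInterface
{
    public function handle(array $record): bool { return true; }
}

$h = new MyHandler();
var_dump($h instanceof \Container\Monolog\HandlerInterface); // bool(true)
var_dump($h instanceof \Host\Monolog\HandlerInterface);      // bool(false)
// ...so passing $h to host code type-hinted against the host's copy
// of the interface is a TypeError, even though the source is identical.
```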
But building that behavior on top of Phar seems vastly easier, more stable, and more likely to lead to something that vaguely resembles a working system than allowing arbitrary code to arbitrarily containerize arbitrary files at runtime and expecting it to go well.

--Larry Garfield