On 02.12.2024 16:00, ichthyo wrote:
With adding that, what guarantees do we have at that point?

- The LV2 standard provides that we can see the Core plugin

- The sync on corePlugin->isReady ensures that we see the changes made by
the corePlugin *before setting this flag*

On 03.12.24 11:11, Kristian Amlie wrote:
But I don't think the second point is necessary, actually.

The proper handling of thread visibility and sync issues revolves
around what can be guaranteed, or rather, what can be shown or concluded
clearly.

Practically speaking, since much of this stuff is so insidious and
subtle, the more indirect a chain of arguments is, the higher the
risk that there is a flaw in the argument, or that the chain breaks
due to some circumstantial change.


YoshimiLV2PluginUI::init() cannot possibly be called until
YoshimiLV2Plugin::instantiate() has returned, due to the first point.
At least not based on my understanding of the LV2 spec.

This is certainly true, but it is a very indirect argument.
The fact that the UI plugin shall only be instantiated after the core plugin
becomes ready implies, in practical terms, that any conceivably sensible host
implementation will likely issue a barrier of sorts. Unless ... we have
a host implementation which attempts to be especially "clever", or unless
we are on an architecture which gives only the minimum visibility guarantees.
;-)

Let me dwell a bit on the last point.
If you look up the formal specs and memory models, you see a lot of these
"happens-before" and "synchronises-with" relationships. These formal specs
always cover only data elements directly involved in the interaction,
or visible effects directly related to the thread's actions before the barrier.
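
To make this concrete, here is a minimal C++ sketch (all names hypothetical,
not Yoshimi code) of such a "synchronises-with" pairing: the release store on
the flag pairs with the acquire load that observes it, and exactly that pairing
is what makes the write to the payload visible -- because the write is
sequenced before the release store in the producing thread.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                    // plain shared data, published via the flag
    std::atomic<bool> ready{false};     // synchronisation flag

    void producer()
    {
        payload = 42;                                  // (1) sequenced before (2)
        ready.store(true, std::memory_order_release);  // (2) release store
    }

    void consumer()
    {
        while (!ready.load(std::memory_order_acquire)) // (3) acquire load
            ;                                          // spin until published
        assert(payload == 42);  // guaranteed: (1) happens-before this read
    }

    int main()
    {
        std::thread t1{producer}, t2{consumer};
        t1.join();
        t2.join();
    }

Note that the guarantee hinges entirely on the release/acquire pair; had the
flag been accessed with memory_order_relaxed, the write to payload would not
be covered.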

But the practical reality is quite different. Mostly, we have to deal with
x86_64 machines. And those were built to be especially "developer friendly",
for obvious reasons (they wanted to push chip sales by a coolness factor).

That means a typical x86_64 does lots of magic to behave sanely even if
concurrency and visibility issues are blatantly ignored by the coders.
And on top of that, on x86_64, barriers are handled very defensively
and mostly globally. A huge amount of circuitry is built into those
CPUs in order to make that happen.

In theory it would be possible to build much less expensive circuitry
with decent performance by just being nasty and sticking to the letter
of the specs and memory models of modern programming languages.
And I have seen reports that some ARM flavours are much more picky
in this respect.
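
For illustration, a deliberately broken sketch (again hypothetical names,
nothing from Yoshimi) of the kind of code that usually appears to work on
x86_64 but is a data race -- hence undefined behaviour -- by the letter of the
C++ memory model, and which a pickier architecture or a more aggressive
optimiser is allowed to break:

    #include <cstdio>
    #include <thread>

    int  payload = 0;
    bool ready   = false;   // plain bool, NOT std::atomic: a data race per the standard

    void producer()
    {
        payload = 42;
        ready = true;       // no release barrier, no atomicity
    }

    void consumer()
    {
        while (!ready)      // the compiler may hoist the load and spin forever
            ;
        std::printf("%d\n", payload);  // may observe a stale value on weaker hardware
    }

    int main()
    {
        std::thread t1{producer}, t2{consumer};
        t1.join();
        t2.join();
    }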

The point I want to drive home is thus: if you see a "happens-before"
guarantee, this does not imply that you get global data visibility
for unrelated stuff.

For me as a developer this means that -- instead of arguing that something
"should" be thread safe -- it is always better to use the tools at our
disposal and be explicit with each and every access to shared data.


Which btw, I guess you have already done according to the below paragraph?

So in order to solve the *thread visibility* issue, it is sufficient to
slightly adjust the private methods within InstanceManager so as to ensure
that all call paths are protected by the InstanceManager mutex.

Right?

Yes, that was my thinking.
Doing it this way can be considered more future-proof, simply because
it is explicit. All the shared data is directly touched by the acting
thread in a zone between a read barrier and a write barrier (entering
and leaving a mutex-protected zone).
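
As a rough sketch of that pattern (the member names here are made up for
illustration, not the actual InstanceManager interface): every method that
touches shared state takes the same mutex, so each access sits between the
acquire performed on lock() and the release performed on unlock(), and no
indirect reasoning about visibility is needed.

    #include <map>
    #include <mutex>

    class InstanceManager
    {
        std::mutex mtx;                     // guards all shared instance data
        std::map<int, bool> instanceReady;  // hypothetical shared state

    public:
        void markReady(int id)              // hypothetical mutator
        {
            std::lock_guard<std::mutex> guard{mtx};  // write side: release on unlock
            instanceReady[id] = true;
        }

        bool isReady(int id)                // hypothetical reader
        {
            std::lock_guard<std::mutex> guard{mtx};  // read side: acquire on lock
            auto it = instanceReady.find(id);
            return it != instanceReady.end() && it->second;
        }
    };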

Which shows, time and again, that arriving at a clearer solution
often takes several half-broken attempts and incremental improvements.

:-P ;-)



