Hi there,

The semaphore code looks fine. As Andy noted, things get much more
complicated when the code needs to run across multiple concurrent jobs: We
might easily run into deadlocks.
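To illustrate the risk with a minimal, purely hypothetical sketch (the resource
names are invented; only java.util.concurrent.Semaphore and xquery:fork-join are
real): if two threads guard two resources with their own semaphores and acquire
them in opposite order, each one ends up waiting for the lock the other one
holds. Across separate jobs, the same pattern would apply whenever the
semaphores could be shared at all.

declare namespace Semaphore = "java:java.util.concurrent.Semaphore";

let $config-lock := Semaphore:new(1, true())
let $cache-lock := Semaphore:new(1, true())
(: with the sleeps, both threads usually grab their first lock and then hang :)
return xquery:fork-join((
  function() {
    (: thread 1: config first, then cache :)
    Semaphore:acquire($config-lock),
    prof:sleep(10),
    Semaphore:acquire($cache-lock),  (: waits forever if thread 2 holds it :)
    Semaphore:release($cache-lock),
    Semaphore:release($config-lock)
  },
  function() {
    (: thread 2: cache first, then config :)
    Semaphore:acquire($cache-lock),
    prof:sleep(10),
    Semaphore:acquire($config-lock),  (: waits forever if thread 1 holds it :)
    Semaphore:release($config-lock),
    Semaphore:release($cache-lock)
  }
))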

Thus, if concurrency turns out to be a bigger issue, we would probably embed
key/value store updates into our Pending Update List concept, even if that
would make it less flexible to use.

Cheers,
Christian


On Sat, Feb 8, 2025 at 12:07 PM Marco Lettere <m.lett...@gmail.com> wrote:

> For us it can be solved with custom annotations or even constant strings
> because it all comes down to the one workflow engine ...
>
> Il sab 8 feb 2025, 11:55 Andy Bunce <bunce.a...@gmail.com> ha scritto:
>
>> One problem with this might be that:
>> When using the fork-join function it is easy to ensure all the threads
>> have a reference to the *same* semaphore.
>> If these were arbitrary BaseX "jobs" it is not clear how this could be
>> done without explicit support from the BaseX runtime.
>> Perhaps it could be done with a new annotation
>> %basex:semaphore ("my-semaphore")
>> that could be applied to functions.
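>>
>> Purely as a sketch of how that might be used, and clearly hypothetical
>> (neither the annotation nor its semantics exist in BaseX; the function body
>> is made up for illustration):
>>
>> (: hypothetical: all functions annotated with the same semaphore name
>>    would be serialized against each other by the BaseX runtime :)
>> declare %basex:semaphore("my-semaphore")
>> function local:config-update($k as xs:string, $v as item()) {
>>   store:put("config", store:get("config") => map:put($k, $v))
>> };
>>
>> Each job could then call local:config-update without passing a lock around,
>> because the runtime itself would own the semaphore.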
>>
>> /Andy
>>
>> On Fri, 7 Feb 2025 at 18:31, Marco Lettere <m.lett...@gmail.com> wrote:
>>
>>> Oh, wow. Looks great, Andy. Thanks for the suggestion.
>>>
>>> I'd be curious to hear Christian's opinion on this.
>>>
>>> M.
>>> On 07/02/25 16:45, Andy Bunce wrote:
>>>
>>> > so if you call store:get, store:put or store:write in the first
>>> > process, a second process will not wait until the store operations
>>> > are completed.
>>>
>>> In non-XQuery contexts, a semaphore [1] might be used to ensure that
>>> other threads don't get between a get and a put.
>>> In the spirit of blurring the XQuery/Java boundary, I tried [2]. It
>>> seems to work. Is it dangerous?
>>>
>>> [1]
>>> https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Semaphore.html
>>>
>>> [2]
>>> declare namespace Semaphore = "java:java.util.concurrent.Semaphore";
>>>
>>> (: updates the "config" map under the semaphore, so that the
>>>    read-modify-write cycle cannot interleave with other threads :)
>>> declare function local:config-update($k as xs:string, $v as item(), $sem)
>>> {
>>>   Semaphore:acquire($sem),
>>>   try {
>>>     let $u := store:get("config") => map:put($k, $v)
>>>     return store:put("config", $u)
>>>   } catch * {
>>>     trace("Errrr", $err:description)
>>>   },
>>>   Semaphore:release($sem)
>>> };
>>>
>>> (: fair binary semaphore, shared by all forked functions :)
>>> let $sem := Semaphore:new(1, true())
>>>
>>> let $s1 := store:put("config", map {})
>>> let $s2 := xquery:fork-join(
>>>   for $i in (1 to 100)
>>>   return function() {
>>>     let $r := (prof:sleep(10), $i)
>>>     return local:config-update(string($i), $r, $sem) }
>>>   )
>>>
>>> return count(map:keys(store:get("config")))
>>>
>>>
>>> On Tue, 28 Jan 2025 at 14:09, Marco Lettere <m.lett...@gmail.com> wrote:
>>>
>>>> Ok, thanks for the clarification.
>>>>
>>>> M.
>>>> On 28/01/25 15:08, Christian Grün wrote:
>>>>
>>>>> Sorry Christian, do you mean *not* synchronized?
>>>>
>>>> With »synchronized«, I meant to refer to a lower level: You will not
>>>> end up with a corrupt key/value store or with I/O conflicts when accessing
>>>> and updating the store via multiple threads. However, as you have already
>>>> observed, multiple operations are not executed in a well-defined order, so
>>>> if you call store:get, store:put or store:write in the first process, a
>>>> second process will not wait until the store operations are completed.
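>>>>
>>>> For instance, here is a minimal sketch of such a race (the key name,
>>>> the sleep interval and the resulting count are only illustrative):
>>>>
>>>> let $s1 := store:put("config", map {})
>>>> let $s2 := xquery:fork-join(
>>>>   for $i in 1 to 100
>>>>   return function() {
>>>>     (: unguarded read-modify-write: concurrent puts can overwrite each other :)
>>>>     let $u := store:get("config") => map:put(string($i), $i)
>>>>     return (prof:sleep(10), store:put("config", $u))
>>>>   }
>>>> )
>>>> return count(map:keys(store:get("config")))
>>>>
>>>> The final count will often stay well below 100, because each thread
>>>> writes back a map that lacks the keys added by the others in the meantime.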
>>>>
>>>>
