On 23. 05. 20 19:24, Matthew Flatt wrote:
> I'm not sure this is the problem that you're seeing, but I see a
> problem with the example. It boils down to the fact that futures do not
> provide concurrency.
> 
> That may sound like a surprising claim, because the whole point of
> futures is to run multiple things at a time. But futures merely offer
> best-effort parallelism; they do not provide any guarantee of
> concurrency.

It might be surprising as well - but that is exactly the reason I am
using futures for this task. Up until now I have always built a data set
first and then set up a futures tree to process it - possibly in
parallel, but with no assumptions about whether it would actually happen
in parallel.

What I am trying to achieve now is to let the futures scheduler start
speculatively working on the data set while it is still being created.
And yes, I don't want to make any assumptions about the work actually
starting early or even running in parallel. I just want to give the
futures runtime the best possible setup, so it can do its best.
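
Roughly, the ordering I have in mind looks like this - a minimal sketch
with hypothetical names, not my actual mfqueue code; the workers are
started first and the producer runs on the main thread afterwards:

  #lang racket
  (require racket/future)

  ;; One fsemaphore counts the items produced so far.
  (define items (make-fsemaphore 0))

  ;; Start the consumers *before* any data exists, so the futures
  ;; runtime is free to run them speculatively.
  (define workers
    (for/list ([i (in-range (processor-count))])
      (future
       (lambda ()
         ;; Each worker waits for its share of the 10000 items.
         (for ([j (in-range (quotient 10000 (processor-count)))])
           (fsemaphore-wait items))))))

  ;; The main thread "creates the data set" while the futures may
  ;; already be consuming it in parallel.
  (for ([i (in-range 10000)])
    (fsemaphore-post items))

  (for-each touch workers)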

Using OS threads with explicit scheduling is something I want to
investigate as well - but in general I do not see why this approach
shouldn't work as expected (with the great advantage that it can fall
back to program-level single-threaded execution rather than OS threads
interleaved on the same core, transparently to the program).

> 
> As a consequence, trying to treat an fsemaphore as a lock can go wrong.
> If a future manages to take an fsemaphore lock, but the future is not
> demanded by the main thread --- or in a chain of future demands that
> are demanded by the main thread --- then nothing obliges the future to
> continue running; it can hold the lock forever.

Duly noted. I'll look into that possibility. But honestly, I've never
encountered such behavior.
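
Just to make sure I am reading the hazard right, I take it the scenario
is roughly this (a hypothetical minimal sketch, not my actual code):

  #lang racket
  (require racket/future)

  (define lock (make-fsemaphore 1))

  (define f
    (future
     (lambda ()
       (fsemaphore-wait lock)
       ;; Nothing demands this future, so it may stop running right
       ;; here and keep the "lock" forever.
       (fsemaphore-post lock))))

  ;; If f already grabbed the lock, this wait can block indefinitely,
  ;; because the (touch f) that would force f to finish comes only later.
  (fsemaphore-wait lock)
  (fsemaphore-post lock)
  (touch f)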

> 
> (I put the blame on fsemaphores. Adding fsemaphores to the future
> system was something like adding mutation to a purely functional
> language. The addition makes certain things possible, but it also
> breaks local reasoning that the original design was supposed to
> enable.)

I understand and - another surprise - I agree. The example is very
"imperative", but that really is a bare-bones example of the problem.

> 
> In your example program, I see
> 
>  (define workers (do-start-workers))
>  (displayln "started")
>  (for ((i 10000))
>    (mfqueue-enqueue! mfq 1))
> 
> where `do-start-workers` creates a chain of futures, but there's no
> `touch` on the root future while the loop calls `mfqueue-enqueue!`.
> Therefore, the loop can block on an fsemaphore because some future has
> taken the lock but stopped running for whatever reason.
> 
> In this case, adding `(thread (lambda () (touch workers)))` before the
> loop after "started" might fix the example. In other words, you can use
> the `thread` concurrency construct in combination with the `future`
> parallelism construct to ensure progress. I think this will work
> because all futures in the program end up in a linear dependency chain.
> If there were a tree of dependencies, then I think you'd need a
> `thread` for each `future` to make sure that every future has an active
> demand.

The same thing happens if I call (do-start-workers) after filling the
queue with mfqueue-enqueue!.
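
For reference, this is how I read the suggested workaround applied to
the quoted snippet (do-start-workers, mfq and mfqueue-enqueue! are from
my original example):

  (define workers (do-start-workers))
  (displayln "started")
  ;; Keep an ordinary thread demanding the root future, so the chain
  ;; of futures always has an active demand and is obliged to progress.
  (thread (lambda () (touch workers)))
  (for ((i 10000))
    (mfqueue-enqueue! mfq 1))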

> 
> If you're seeing a deadlock at the `(touch workers)`, though, my
> explanation doesn't cover what you're seeing. I haven't managed to
> trigger the deadlock myself.

Should I open an issue on GitHub and provide the gdb backtraces?

Btw, none of this behaviour is seen with Racket CS (so far), and the
observed futures scheduling speculates very aggressively with the CS
variant (which is definitely what I want).

I am running the 3m variant, v7.7.0.6, compiled from sources on Ubuntu
20.04, if that helps.



And (as usual) - thank you very much!

Dominik
