Wouldn't you want to *force* the first thread to wait with
`semaphore-wait/enable-break` in that case? If breaks are disabled, that
thread can't be cooperatively terminated. If you use `semaphore-wait`, it
seems like you completely hand off control over whether breaks are enabled,
which seems like something that use sites should care about one way or the
other. What sort of semaphore-based communication would be truly indifferent
to whether breaking is enabled?
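
To make the concern concrete, here's roughly the situation I'm worried about
(a quick sketch; all of the names are mine):

    (define gate    (make-semaphore 0))
    (define started (make-semaphore 0))
    (define worker
      (thread (λ ()
                (parameterize-break #f
                  (semaphore-post started)   ; signal that breaks are now off
                  (semaphore-wait gate)))))  ; ...then block indefinitely
    (semaphore-wait started)
    (break-thread worker)      ; the break is queued but never delivered
    (sleep 0.1)
    (thread-running? worker)   ; ⇒ #t: no way to cooperatively stop it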

On Sat, Jan 18, 2020 at 1:28 AM Alexis King <lexi.lam...@gmail.com> wrote:

> Actually, I’ve changed my mind: I can trivially think of a case where it’s
> fine: if you’re just using a semaphore as an event. One thread waits with
> `semaphore-wait`, another thread calls `semaphore-post`, and after the
> count is decremented, it’s never re-incremented. It’s just used to gate
> execution, not guard access to a resource. No need to disable breaks here.
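>
> Something like this, say (a quick sketch, with names I just made up):
>
>     ;; use a semaphore purely as a one-shot "ready" event
>     (define ready (make-semaphore 0))
>     (define waiter
>       (thread (λ ()
>                 (semaphore-wait ready)   ; just gates execution
>                 (displayln "event fired, continuing"))))
>     (semaphore-post ready)               ; fire the event exactly once
>     (thread-wait waiter)
>
> If `waiter` is broken while blocked, nothing is left in a bad state, so
> there’s no obvious reason to disable breaks around the wait.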
>
> (Also, an aside: I think your `car`/`cdr` example is different, because
> `car`/`cdr`’s checks on pairs guard against memory corruption in the Racket
> runtime, and Racket is a memory-safe language. A better comparison would be
> that `car`/`cdr` don’t check whether or not their argument is a proper
> list—the higher-level `first`/`rest` do that, instead.)
>
> On Jan 18, 2020, at 03:21, Jack Firth <jackhfi...@gmail.com> wrote:
>
> It isn't clear to me either. I can't think of a use case for it, but I'm
> hoping either somebody else can or somebody can confirm that it's not a
> good API precedent. I'm trying to build some concurrency libraries
> <https://github.com/jackfirth/rebellion/issues/397> and I'd like to be
> sure there isn't some important use case I'm missing.
>
> On Sat, Jan 18, 2020 at 1:14 AM Alexis King <lexi.lam...@gmail.com> wrote:
>
>> Like I said, it isn’t clear to me that *all* uses of `semaphore-wait`
>> when breaks are enabled are incorrect. You could argue that then you should
>> have a `semaphore-wait/trust-me-even-though-breaks-are-enabled`, and sure,
>> I don’t think that would necessarily be bad. I imagine the API just
>> wasn’t originally designed that way for some reason or another, possibly
>> simply because it wasn’t considered at the time. Maybe Matthew can give a
>> more satisfying answer, but I don’t know; I’m just speculating.
>>
>> On Jan 18, 2020, at 03:10, Jack Firth <jackhfi...@gmail.com> wrote:
>>
>> I don't see how it has to do with semaphores being low-level. If waiting
>> on a semaphore while breaks are enabled is almost certainly wrong, checking
>> whether breaks are enabled and raising an error seems like a way more
>> sensible default behavior than just silently doing something that's almost
>> certainly wrong. If car and cdr can check their arguments by default,
>> shouldn't semaphores guard against misuse too?
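>>
>> Concretely, I'm imagining something like this wrapper
>> (`checked-semaphore-wait` is a name I just made up, not an existing
>> function):
>>
>>     (define (checked-semaphore-wait sem)
>>       (when (break-enabled)
>>         (error 'checked-semaphore-wait
>>                "waiting with breaks enabled; did you forget to disable them?"))
>>       (semaphore-wait sem))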
>>
>> On Sat, Jan 18, 2020 at 1:04 AM Alexis King <lexi.lam...@gmail.com>
>> wrote:
>>
>>> It *is* guaranteed to leave the semaphore in a consistent state, from
>>> the perspective of the implementation of semaphores. No matter what you do,
>>> you won’t ever corrupt a semaphore (assuming you’re not using unsafe
>>> operations and assuming the runtime is not buggy).
>>>
>>> But perhaps you mean inconsistent from the point of view of the
>>> application, not from the point of view of the Racket runtime. In that
>>> case, it’s true that when using semaphores as locks, using them in a
>>> context where breaks are enabled is almost certainly wrong. It’s not
>>> immediately clear to me that there aren’t any valid uses of semaphores
>>> where you would want breaks to be enabled, but I admit, I have no idea what
>>> they are.
>>>
>>> Semaphores are low-level primitives, though, so I think it makes some
>>> sense for them to just do the minimal possible thing. Perhaps a library
>>> ought to offer a slightly more specialized “critical section” abstraction a
>>> la Windows (or perhaps something like Haskell’s MVars) that manages
>>> disabling interrupts in the critical section for you. (Why doesn’t this
>>> exist already? My guess is that most Racket programmers don’t worry about
>>> these details, since they don’t call `break-thread` anywhere, and they want
>>> SIGINT to just kill their process, anyway.)
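>>>
>>> As a sketch of the kind of helper I mean (the name is made up, and this
>>> is just one way such an abstraction might look):
>>>
>>>     (define (call-with-critical-section sem thunk)
>>>       (parameterize-break #f
>>>         (semaphore-wait/enable-break sem)   ; still interruptible while blocked
>>>         (dynamic-wind
>>>          void
>>>          thunk                              ; runs with breaks disabled
>>>          (λ () (semaphore-post sem)))))     ; always releases, even on an error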
>>>
>>> On Jan 18, 2020, at 02:54, Jack Firth <jackhfi...@gmail.com> wrote:
>>>
>>> I do understand all of that, and you're right that "kill-safe" isn't
>>> what I meant.
>>>
>>> What I'm confused about is why, if it's inherently not guaranteed to
>>> leave the semaphore in a consistent state, semaphore-wait attempts to work
>>> at all if breaks are enabled. Why not raise some helpful error like "it's
>>> unsafe to wait on a semaphore while breaks are enabled, did you forget to
>>> disable breaks?". What's the actual *use case* for calling
>>> semaphore-wait (and *not* semaphore-wait/enable-break) while breaks are
>>> enabled?
>>>
>>> On Sat, Jan 18, 2020 at 12:47 AM Alexis King <lexi.lam...@gmail.com>
>>> wrote:
>>>
>>>> Killing a thread is different from breaking a thread. Killing a thread
>>>> kills the thread unrecoverably, and no cleanup actions are run. This
>>>> usually isn’t what you want, but there’s always a tension between these
>>>> kinds of things: defensive programmers ask “How do I make myself unkillable
>>>> so I can safely clean up?” but then implementors of a dynamic environment
>>>> (like, say, DrRacket) find themselves asking “How do I kill a runaway
>>>> thread?” Assuming you’re not DrRacket, you usually want `break-thread`, not
>>>> `kill-thread`.
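>>>>
>>>> For example (just a sketch; the names are mine), breaking lets pending
>>>> cleanup run, whereas killing does not:
>>>>
>>>>     (define started (make-semaphore 0))
>>>>     (define worker
>>>>       (thread
>>>>        (λ ()
>>>>          ;; swallow the break once the stack has unwound
>>>>          (with-handlers ([exn:break? void])
>>>>            (dynamic-wind
>>>>             void
>>>>             (λ ()
>>>>               (semaphore-post started)   ; we're inside the protected region
>>>>               (sync never-evt))          ; block "forever"
>>>>             (λ () (displayln "cleaning up")))))))
>>>>     (semaphore-wait started)
>>>>     (break-thread worker)    ; unwinds the stack, so "cleaning up" prints
>>>>     ;; (kill-thread worker)  ; by contrast, this would run no cleanup at all
>>>>     (thread-wait worker)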
>>>>
>>>> But perhaps you know that already, and your question is just about
>>>> breaking, so by “kill-safe” you mean “break-safe.” You ask why
>>>> `semaphore-wait` doesn’t just disable breaking, but that wouldn’t help
>>>> with the problem the documentation alludes to. The problem is that there’s
>>>> fundamentally a race condition in code like this:
>>>>
>>>>     (semaphore-wait sem)
>>>>     ; do something important
>>>>     (semaphore-post sem)
>>>>
>>>> If this code is executed in a context where breaks are enabled, it’s
>>>> not break-safe regardless of whether `semaphore-wait` disables breaks while
>>>> waiting on the semaphore. As soon as `semaphore-wait` returns, the queued
>>>> break would be delivered, the stack would unwind, and the matching
>>>> `semaphore-post` call would never execute, potentially holding a lock
>>>> forever. So the issue isn’t that the semaphore’s internal state somehow
>>>> gets corrupted, but that the state no longer reflects the value you want.
>>>>
>>>> The right way to write that code is to disable breaks in the critical
>>>> section:
>>>>
>>>>     (parameterize-break #f
>>>>       (semaphore-wait sem)
>>>>       ; do something important
>>>>       (semaphore-post sem))
>>>>
>>>> This eliminates the race condition, since a break cannot be delivered
>>>> until the `semaphore-post` executes (and synchronous, non-break exceptions
>>>> can be protected against via `dynamic-wind` or an exception handler). But
>>>> this creates a new problem: if a break is sent while the code is blocked
>>>> waiting on the semaphore, it won’t be delivered until breaks are re-enabled
>>>> after the `semaphore-post`, which may be a very long time. You’d really
>>>> rather just break the thread, since it hasn’t entered the critical section
>>>> yet, anyway.
>>>>
>>>> This is what `semaphore-wait/enable-break` is for. You can think of it
>>>> as a version of `semaphore-wait` that re-enables breaks internally, inside
>>>> its implementation, and it installs an exception handler to ensure that if
>>>> a break is delivered at the worst possible moment (after the count has been
>>>> decremented but before breaks are disabled again), it reverses the change
>>>> and re-raises the break exception. (I have no idea if this is how it’s
>>>> actually implemented, but I think it’s an accurate model of its behavior.)
>>>> This does exactly what we want, since it ensures that if we do enter the
>>>> critical section, breaks are disabled until we exit it, but we can still be
>>>> interrupted if we’re blocked waiting to enter it.
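>>>>
>>>> In code, the model I have in mind is roughly this (emphatically not the
>>>> real implementation; note the comment on why it can’t actually be written
>>>> correctly in user code):
>>>>
>>>>     ;; model only: assumes the caller has already disabled breaks
>>>>     (define (model-semaphore-wait/enable-break sem)
>>>>       (define decremented? #f)
>>>>       (with-handlers ([exn:break? (λ (e)
>>>>                                     (when decremented? (semaphore-post sem))
>>>>                                     (raise e))])
>>>>         (parameterize-break #t
>>>>           (semaphore-wait sem)       ; decrement the count...
>>>>           (set! decremented? #t))))  ; ...and record it (not atomic here,
>>>>                                      ; which is why the real primitive exists)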
>>>>
>>>> So it’s not so much that there’s anything really special going on here,
>>>> but more that break safety is inherently anti-modular where state is
>>>> involved, and you can’t implement `semaphore-wait/enable-break`-like
>>>> constructs if you only have access to the `semaphore-wait`-like sibling.
>>>>
>>>> > On Jan 17, 2020, at 22:37, Jack Firth <jackhfi...@gmail.com> wrote:
>>>> >
>>>> > The docs for semaphores say this:
>>>> >
>>>> > In general, it is impossible using only semaphore-wait to implement
>>>> the guarantee that either the semaphore is decremented or an exception is
>>>> raised, but not both. Racket therefore supplies semaphore-wait/enable-break
>>>> (see Semaphores), which does permit the implementation of such an exclusive
>>>> guarantee.
>>>> >
>>>> > I understand the purpose of semaphore-wait/enable-break, but there's
>>>> something about semaphore-wait that confuses me: why does it allow breaking
>>>> at all? My understanding is that if breaks are enabled, semaphore-wait
>>>> still tries to block and decrement the counter, even though a break at any
>>>> time could destroy the integrity of the semaphore. Does that mean it's not
>>>> kill-safe to use a semaphore as a lock? Wouldn't it be safer if
>>>> semaphore-wait automatically disabled breaks while waiting?
>>>>
>>>>
>>>
>>
>
