Re: [racket-users] Re: rackunit and logging

2020-05-24 Thread David Storrs
On Sat, May 23, 2020 at 9:54 AM Shriram Krishnamurthi 
wrote:

> Thank you all!
>
> *Dave*, the documentation style is fine, it's sometimes easier to read
> the doc right next to the implementation. (-:
>
> However, I'm not quite sure how even your example works. Maybe someone can
> check my logic? For instance, you say you want to write tests like
>
> (unless (is os 'windows) (ok test-that-won't-pass-on-windows))
>
> However, `is` seems to return the same value no matter whether the test
> passed or failed: it returns the first argument, *irrespective* of the
> outcome of the test. So in the above test, the returned value is going to
> be that of `os`, which is presumably some non-false value. That means the
> guarded test will *never* be run, on any OS.
>

You're absolutely right -- that should have been an `ok`, not an `is`.


> [Separately, I'm not sure why one would use a testing utility in that
> conditional, rather than just a standard conditional, but that's a
> different matter.]
>

A standard conditional would say "If we are being run on Windows...", whereas
the `ok` is saying "I expect that this test file is being run on Windows".
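
Concretely, the corrected example differs from a plain conditional like this
(untested sketch, using (system-type) in place of the `os` variable from the
example; test-that-won't-pass-on-windows is still a stand-in):

;; plain conditional: on Windows the guarded test is silently skipped
(unless (eq? (system-type) 'windows)
  (ok test-that-won't-pass-on-windows))

;; `ok` as the guard: the platform expectation itself shows up in the output
(unless (ok (eq? (system-type) 'windows) "running on Windows")
  (ok test-that-won't-pass-on-windows))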


> In general, this seems to be a property of your underlying function,
> `test-more-check`: it returns either the return value sent in through
> #:return or the value in the checked position (#:got). But in either case,
> this is independent of the success of the test. The only difference is in
> the *message*, which is printed as output. I suppose I could parameterize
> where it's printed and capture it — but then I have to parse all the
> information back out. I'm just not seeing how to compositionally use your
> testing primitives?
>

The number of successes and failures is available through (tests-passed) and
(tests-failed), so that's one option.  (current-test-num), (inc-test-num!),
and (next-test-num) let you determine and modify the number of tests that
will be reported, so you can conditionally run a test and then pretend it
didn't happen if you don't like the outcome.  `ok` returns a boolean, so it
can be used to conditionally run groups of tests.  The other functions return
their argument, so you can chain a value through a series of tests.
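
For example (a sketch; assuming the module is required as handy/test-more):

#lang racket
(require handy/test-more)

(define failed-before (tests-failed))
(ok (= 4 (+ 2 2)))   ; a quick sanity check

;; react to the running counts
(when (> (tests-failed) failed-before)
  (printf "sanity check failed: ~a passed, ~a failed so far\n"
          (tests-passed) (tests-failed)))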

I'd be delighted to add more options if I knew that other people were using
the package -- just let me know.


> As an aside, when trying to install the package in a Docker container
> running Ubuntu 18.04 with Racket 7.7 installed, I got this error:
>
> raco setup: docs failure: query-exec: unable to open the database file
>   error code: 14
>   SQL: "ATTACH $1 AS other"
>   database: #
>   mode: 'read-only
>   file permissions: (write read)
>
> which I didn't get on macOS Catalina. The package certainly has a … lot of
> stuff! Even links to EDGAR filings. (-:
>

Yeah, it's an absolute junkpile.  When I was learning Racket I wrote all
these things but didn't have the sense to put them in separate modules.
Now that I'm older and hopefully wiser, in my Copious Free Time I'm working
on splitting it all into stand-alone modules and documenting everything,
but that's going slowly.  If test-more looks like something you might use
then I'll prioritize it so that you aren't stuck with the rest of the
kitchen sink.



> Thanks,
> Shriram
>



[racket-users] local variables are hyperlinked in scribble/manual

2020-05-24 Thread Jos Koot
Hi,
I have:

#lang scribble/manual
@(require (for-label racket) scribble/eval)
@interaction[
(let ((set 1)) (add1 set))]

I prepare an HTML document with DrRacket (on Windows 10).
It works, but the local variable set is hyperlinked to the procedure set
documented in racket/set. I would like this variable to be typeset like any
other local variable. How can I do that without losing the hyperlink where I
do mean the procedure from racket/set?

Thanks, Jos



Re: [racket-users] Hunting a possible fsemaphore-post/wait bug

2020-05-24 Thread Alexis King
I realized a little while after writing my previous message that I was probably 
misinterpreting you. I was envisioning you using box-cas! on a box containing a 
functional queue, so there would be no need to synchronize. You’d just pull the 
queue out of the box, functionally update it, and use box-cas! to store the new 
result back. But your original program is using an imperative queue, so it 
seems plausible you really were using box-cas! to implement a spinlock. Then 
you would indeed be busy-waiting, after all!
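
For concreteness, the first reading, the functional-value-in-a-box one, looks
roughly like this (just a sketch of the pattern, not the code in the gist
below; a real version would keep a proper functional FIFO queue in the box
rather than a bare list):

#lang racket

(define items (box '()))

;; retry with box-cas! until our functional update wins the race
(define (push! v)
  (define old (unbox items))
  (unless (box-cas! items old (cons v old))
    (push! v)))

(define (pop!)
  (define old (unbox items))
  (cond
    [(null? old) #f]                           ; nothing to take
    [(box-cas! items old (cdr old)) (car old)] ; won the race
    [else (pop!)]))                            ; lost the race: retry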

Rereading Matthew’s original email, I have no idea if he was suggesting to use 
a spinlock or to do something closer to what I had in mind. Either way, I 
thought it sounded fun to try to implement a lock-free imperative queue in 
Racket. Then (theoretically) you’d get the best of both worlds: less duplicated 
work and less allocation than the functional approach (and allocation is 
expensive for futures!), but no busy-waiting. Here is my solution: 
https://gist.github.com/lexi-lambda/c54c91867f931b56123e3c595d8e445a

As far as I can tell, it works nicely, and it’s not terribly complicated. 
Inspecting the behavior of my test cases in the futures visualizer, it seems to 
be doing good things. But I found I needed to use Matthew’s threads + futures 
trick, and in the process, I discovered a small subtlety needed to make it 
actually work. Here’s a self-contained program that reproduces the issue I ran 
into:

#lang racket
(require racket/logging)

(define (slow-sum)
  ;; the iteration count here is illustrative; any suitably large value works
  (for/sum ([j (in-range 10000000)]) 1))

(define (go)
  (define workers
    (for/list ([i (in-range 8)])
      (thread (λ () (touch (future slow-sum))))))
  (for-each thread-wait workers))

(with-logging-to-port
 #:logger (current-logger)
 (current-output-port)
 (thunk (go) (newline) (newline) (newline) (go))
 'debug
 'future)

This runs the `go` procedure twice back-to-back, printing future trace messages 
to stdout. The first execution is good; futures start up in parallel:

future: id -1, process 0: created; time: 1590334415654.675049
future: id 1, process 1: started work; time: 1590334415654.749023
future: id 1, process 0: paused for touch; time: 1590334415654.894043
future: id -1, process 0: created; time: 1590334415654.912109
future: id 2, process 2: started work; time: 1590334415654.934082
future: id 2, process 0: paused for touch; time: 1590334415655.214111
future: id 1, process 1: completed; time: 1590334415878.041992
future: id 1, process 1: ended work; time: 1590334415878.049072
future: id 1, process 0: resumed for touch; time: 1590334415878.070068
future: id 2, process 2: completed; time: 1590334415878.167969
future: id 2, process 2: ended work; time: 1590334415878.173096
future: id 2, process 0: resumed for touch; time: 1590334415878.217041

(I reduced the number of futures here from 8 to 2 for this run to keep the log 
messages from being uselessly verbose for an email.)

However, on the second execution, I get no parallelism at all:

future: id -1, process 0: created; time: 1590334415878.292969
future: id 3, process 0: started work; time: 1590334415878.298096
future: id -1, process 0: created; time: 1590334415903.156006
future: id 4, process 0: started work; time: 1590334415903.163086
future: id 3, process 0: completed; time: 1590334416300.748047
future: id 3, process 0: ended work; time: 1590334416300.749023
future: id 4, process 0: completed; time: 1590334416322.684082
future: id 4, process 0: ended work; time: 1590334416322.684082

What’s going on? The problem is that `touch` is getting called before the 
futures have a chance to get started (possibly because the VM is now warmed up 
and things run faster?). Normally, that would only happen if the futures were 
really, really short: the first `touch` would run the first future on the main 
thread, and the other futures would start up in parallel while that first one 
is running. But in this program, each future is started on its own (green) 
thread, so I essentially created a race between the thread scheduler and the 
future scheduler.

It seems splitting the future creation from the thread creation is enough to 
make this issue go away:

(define (go)
  (define futures (for/list ([i (in-range 8)]) (future slow-sum)))
  (define workers (for/list ([f (in-list futures)])
                    (thread (λ () (touch f)))))
  (for-each thread-wait workers))

Now the futures always start up in parallel. This seems like it’s probably 
pretty reliable, so it isn’t really a problem, but I found the behavior 
surprising. It suggests the thread and future schedulers are blissfully unaware 
of one another: the thread scheduler is perfectly happy to run dozens of 
futures concurrently on the main OS thread rather than kick some of them onto 
another core. Normally this is something that work-stealing could fix, but it 
doesn’t seem like the futures scheduler ever steals work from futures executing 
on the main thread.

Re: [racket-users] Hunting a possible fsemaphore-post/wait bug

2020-05-24 Thread Alexis King
> On May 24, 2020, at 02:10, Dominik Pantůček  
> wrote:
> 
> At first I was surprised that you are basically suggesting using
> spinlocks (busy-wait loops) instead of futex-backed (at least on Linux)
> fsemaphores. That is a waste of CPU time.

Performing CAS operations in a loop isn’t really directly comparable to 
spinlocking. With a spinlock, the blocked thread is spending cycles doing 
absolutely nothing for an arbitrarily long time. The blocked thread is 
completely at the mercy of the thread that holds the lock, which may not 
release it for a lengthy duration even if the blocked thread only needs to 
acquire it briefly.
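
(For reference, the kind of spinlock being discussed looks roughly like this,
using box-cas! as the atomic primitive; a sketch, not code from the thread:)

(define lock (box #f))   ; #f = unlocked, #t = held

(define (acquire!)
  (unless (box-cas! lock #f #t)
    (acquire!)))         ; spin: burn cycles until the holder lets go

(define (release!)
  (set-box! lock #f))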

In contrast, lock-free concurrency approaches that use atomic operations never 
spend CPU time busy-waiting. All threads are always doing useful work; the 
worst case scenario is that work is duplicated. The burden here is reversed: if 
one thread needs to use the shared resource for a short period of time while a 
long-running computation is in progress on another thread, it doesn’t have to 
wait, it just wins and continues with its business. It’s the long-running 
computation that loses here, since it has to throw away the work it did and 
retry.

This means lock-free approaches are bad if you have (a) lots of contention over 
a shared resource, plus (b) long-running computations using the resource, which 
would be prohibitively expensive to duplicate. But if contention is low and/or 
the operation is sufficiently cheap, the overhead of work duplication will be 
irrelevant. (This is why lock-free concurrency is sometimes called “optimistic 
concurrency”—you’re optimistically hoping nobody else needs the resource while 
you’re using it.) Furthermore, note that acquiring a mutex must ultimately use 
atomic operations internally, so if there is no contention, the lock-free 
approach will fundamentally be faster (or at least no slower) than the locking 
approach.

Now consider what blocking on a mutex must do in addition to the atomic 
operation: it must return to the thread scheduler, since a blocked thread must 
be added to a wakeup list and unscheduled. This incurs all the overhead of 
context-switching, which is small, but it isn’t zero. Compare that to the cost 
of a single enqueue or dequeue operation, which involve just a couple reads and 
a single write. Soon you end up in a situation where the cost of 
context-switching outweighs any work you might duplicate even if there is 
thread contention, and now you’re always beating the mutex under all usage 
patterns!

tl;dr: Even though lock-free approaches and spinlocks are superficially similar 
in that they involve “spinning in a loop,” it isn’t fair to conflate them! 
Optimistic concurrency is a bad idea if you have operations that need access to 
the shared resource for a long time, since you’ll end up duplicating tons of 
work, but if the operation is cheap, you lose nothing over a mutex-based 
approach and potentially gain a little extra performance.

Alexis



Re: [racket-users] Hunting a possible fsemaphore-post/wait bug

2020-05-24 Thread Dominik Pantůček


On 24. 05. 20 3:38, Matthew Flatt wrote:
> At Sat, 23 May 2020 18:51:23 +0200, Dominik Pantůček wrote:
>> But that is just where the issue is showing up. The real question is how
>> the counter gets decremented twice (given that fsemaphores should be
>> futures-safe).
> 
> I found a configuration that triggered the bug often enough on my
> machine. Yes, it was a dumb mistake in the implementation of
> fsemaphores. In your example, it wasn't so much that the counter
> fsemaphore was decremented twice, but that the lock fsemaphore could go
> negative --- so it didn't actually lock.
> 
> I've pushed a repair.

Thank you for the repair (and good idea with the test case).

> 
> 
> FWIW, after looking at this example more, I see why it's still not
> enough in principle to make a thread that waits on the result of
> `do-start-workers` or to run `do-start-workers` after the enqueue
> loops. The `start-workers` function can run a dequeue loop after
> starting a future to do the same, and before touching that future. So,
> the dependencies aren't as linear as I thought before.
> 
> If your real code looks anything like this, consider using a CAS
> operation, like `box-cas!`, instead of an fsemaphore as lock for the
> queue. The queue's use of an fsemaphore for counting and signaling
> seems fine, though.
> 

Yes, the real code is a binary tree of futures. However, each future
performs a lot of fixnum/flonum operations.

At first I was surprised that you are basically suggesting using
spinlocks (busy-wait loops) instead of futex-backed (at least on Linux)
fsemaphores. That is a waste of CPU time... But yes, CAS-based locking
outperforms fsemaphore-based locking and distributes the work much more
evenly. Now that I see it in action and look at the source, it kind of
makes sense, as the spinlock does not need to wait long and does not
cause any re-scheduling (which is why I get such uneven counts with
fsemaphore-based locking).

> In any case, the example worked well to expose the fsemaphore bug.
> 

You can probably expect to see more in the future :)



Dominik
