re: Perceivable time differences [was Re: PSA: Clock drift and pkgin]

2023-12-31 Thread matthew green
Johnny Billquist writes:
> Ok. I oversimplified.
>
> If I remember right, the point was that something sub-200ms is perceived 
> by the brain as an "instantaneous" response. It doesn't mean that one 
> cannot discern shorter times, just that from an action-reaction point of 
> view, anything below 200ms is "good enough".
>
> My point was merely that I don't believe you need to have something down 
> to ms resolution when it comes to human interaction, which was the claim 
> I reacted to.

mouse's example is actually not the limit.  humans can discern
audio timing down to about 1ms or less in the best cases.

reaction time has nothing to do with expected time when you're
playing music.  you can't react to the beat, you have to be
ready to play at the same moment, *MUCH* closer than 200ms, so
that all the musicians in a band stay in sync.

what one needs from their computer is different for each of us,
and while most tasks are fine with our current tick-based setup,
there are plenty of cases where we could do better.

note that tickless operation and hi-res timers are not really the
same thing, even if we can achieve one by implementing the other.
we *could* introduce hi-res timers today on machines that support
them, but it would be easier with a tickless framework in place.


.mrg.


Re: Perceivable time differences [was Re: PSA: Clock drift and pkgin]

2023-12-30 Thread David Holland
On Sun, Dec 31, 2023 at 02:54:50AM +0100, Johnny Billquist wrote:
 > Ok. I oversimplified.
 > 
 > If I remember right, the point was that something sub-200ms is perceived by
 > the brain as an "instantaneous" response. It doesn't mean that one cannot
 > discern shorter times, just that from an action-reaction point of view,
 > anything below 200ms is "good enough".

The usual figure cited is 100 ms, not 200, but yeah.

It is instructive to look at the stopwatch function on a digital
watch; you can easily see the tenths counting but not the hundredths.

-- 
David A. Holland
dholl...@netbsd.org


Re: Perceivable time differences [was Re: PSA: Clock drift and pkgin]

2023-12-30 Thread Johnny Billquist

Ok. I oversimplified.

If I remember right, the point was that something sub-200ms is perceived 
by the brain as an "instantaneous" response. It doesn't mean that one 
cannot discern shorter times, just that from an action-reaction point of 
view, anything below 200ms is "good enough".


My point was merely that I don't believe you need to have something down 
to ms resolution when it comes to human interaction, which was the claim 
I reacted to.


  Johnny

On 2023-12-31 02:47, Mouse wrote:

> If I remember right, anything less than 200ms is immediate response
> for a human brain.


"Response"?  For some purposes, it is.  But under the right conditions
humans can easily discern time deltas in the sub-200ms range.

I just did a little psychoacoustics experiment on myself.

First, I generated (44.1kHz) soundfiles containing two single-sample
ticks separated by N samples for N being 1, 101, 201, 401, 801, and
going up by 800 from there to 6401, with a second of silence before and
after (see notes below for the commands used):

for d in 0 100 200 400 800 1600 2400 3200 4000 4800 5600 6400
do
  ( count from 0 to 44100 | sed -e "s/.*/0 0 0 0/"  # ~1s of stereo silence
    echo 0 128 0 128                      # tick: 0x8000 in both channels
    count from 0 to $d | sed -e "s/.*/0 0 0 0/"     # d+1 frames of silence
    echo 0 128 0 128                      # second tick
    count from 0 to 44100 | sed -e "s/.*/0 0 0 0/"  # ~1s of stereo silence
  ) | code-to-char > zz.$d
done

I don't know stock NetBSD analogs for count and code-to-char.  count,
as used here, just counts as the command line indicates; given what
count's output is piped into, the details don't matter much.
code-to-char converts numbers 0..255 into single bytes with the same
values, with non-digits ignored except that they serve to separate
numbers.  (The time delta between the beginnings of the two ticks is of
course one more than the number of samples between the two ticks.)
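
For anyone replaying this on stock NetBSD, here is a minimal sketch of
workalikes, going only by the behaviour described above; the interfaces
are guesses, not mouse's actual programs.  Save the second one as a
script named code-to-char if you want the loop above to run unmodified,
since "-" isn't portable in shell function names (seq(1), tr(1), and
printf(1) are all in the base system):

count() {
    # count from A to B: print the integers A..B, one per line;
    # seq(1) does the actual counting.
    [ "$1" = from ] && [ "$3" = to ] && seq "$2" "$4"
}

code_to_char() {
    # turn non-digit-separated decimal numbers 0..255 into single
    # bytes: squeeze non-digits to newlines, then let printf(1)
    # expand a \OOO octal escape to the raw byte.  slow, but it
    # matches the description above (NUL output may vary across
    # printf implementations).
    tr -cs '0-9' '\n' | while read n
    do
        [ -n "$n" ] && printf "\\$(printf '%03o' "$n")"
    done
}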

After listening to them, I picked the 800 and 1600 files and did the
test.  I grabbed 128 bits from /dev/urandom and used them to play,
randomly, either one file or the other, letting me guess which one it
was in each case:

dd if=/dev/urandom bs=1 count=16 |
   char-to-code |
   cvtbase -m8 d b |
   sed -e 's/./& /g' -e 's/ $//' -e 's/0/800/g' -e 's/1/1600/g' |
   tr \  \\n |
   ( # fd 3: log of files played; fd 4: my guesses; fd 5: keyboard
     exec 3>zz.list 4>zz.guess 5</dev/tty
     while read n
     do
        echo $n 1>&3
        audioplay -f -c 2 -e slinear_le -P 16 -s 44100 < zz.$n
        skipcat 0 1 0<&5 1>&4
     done
   )

char-to-code is the inverse of code-to-char: for each byte of input, it
produces one line of output containing the ASCII decimal for that
byte's value, 0..255.  cvtbase -m8 d b converts decimal to binary,
generating a minimum of 8 "digits" (bits) of output for each input
number.  skipcat, as used here, has the I/O behaviour of "dd bs=1
count=1" but without the blather on stderr: it skips no bytes and
copies one byte, then exits.  (The use of /dev/urandom is to ensure
that I have no a priori hint which file is being played which time.)
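
In the same spirit, guessed stand-ins for the remaining non-stock
tools, built only from the descriptions just given, not from the real
programs (od(1), awk(1), and dd(1) are standard):

char_to_code() {
    # one line of unsigned decimal output per input byte.
    od -An -v -tu1 | tr -s ' ' '\n' | sed '/^$/d'
}

dec_to_bin() {
    # like "cvtbase -m8 d b": decimal in, binary out, minimum 8 digits.
    awk '{ s = ""; n = $1
           while (n > 0 || length(s) < 8) { s = (n % 2) s; n = int(n / 2) }
           print s }'
}

skip_cat() {
    # like "skipcat 0 1": skip no bytes, copy one byte, quiet stderr.
    dd bs=1 count=1 2>/dev/null
}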

I then typed "s" when I thought it was a short-gap file and "l" when I
thought it was a long-gap file.  I got tired of it after 83 data
samples and killed it.  I then postprocessed zz.guess and compared it
to zz.list:

< zz.guess sed -e 's/s/800 /g' -e 's/l/1600 /g' | tr \  \\n | diff -u zz.list -
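
For a quick score instead of eyeballing the diff, something along
these lines works too (a sketch assuming the same zz.guess/zz.list
layout; paste(1) and awk(1) are standard):

< zz.guess sed -e 's/s/800 /g' -e 's/l/1600 /g' | tr \  \\n |
   paste - zz.list |
   awk 'NF == 2 { n++; if ($1 == $2) ok++ }
        END { printf "%d/%d correct\n", ok, n }'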

I got exactly two wrong out of 83 (and the stats are about evenly
balanced, 39 short files played and 44 long).  So I think it's fair to
say that, in the right context (an important caveat!), a time
difference as short as (1602-802)/44.1=18.14+ milliseconds is clearly
discernible to me.

This is, of course, a situation designed to make a very small
difference perceptible.  I'm sure there are plenty of contexts in
which I would fail to notice even 200ms of delay.

/~\ The ASCII Mouse
\ / Ribbon Campaign
  X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


--
Johnny Billquist  || "I'm on a bus
  ||  on a psychedelic trip
email: b...@softjar.se ||  Reading murder books
pdp is alive! ||  tryin' to stay hip" - B. Idol