On Wed, Mar 24, 2021 at 12:37 PM Grr <gebbe...@gmail.com> wrote:
> Looking for the right way to create a _very_ short delay (10-100 ns), I
> found clock_nanosleep, whose description says:
>
> "The suspension time caused by this function may be longer than requested
> because the argument value is rounded up to an integer multiple of the
> sleep resolution"
>
> What is the sleep resolution and where/how is it defined?

I'm a little late to the party, but...

In practice, the shortest delay you'd get is likely about 20 milliseconds, which is far longer than you want.

Here's how I get that 20 millisecond value:

If you do not configure CONFIG_USEC_PER_TICK to a custom value, it is
10,000 microseconds (10 milliseconds) per "tick" by default.

The way the logic in clock_nanosleep() is written, the minimum delay
ends up being two such ticks. I don't remember exactly why and I can't
find it in the code right now, but I looked into it recently and that's
how it works.
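
To make that concrete, here's a rough sketch (illustrative only, not
copied from NuttX): with the default 10 ms tick, even a 100 ns request
gets rounded up to whole ticks, so the call won't return for roughly
20 ms.

  #include <time.h>

  struct timespec req =
  {
    .tv_sec  = 0,
    .tv_nsec = 100   /* ask for 100 ns */
  };

  /* With CONFIG_USEC_PER_TICK=10000, expect ~20 ms before this returns */
  clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);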

Note that all the sleep functions (sleep(), usleep(), nanosleep(),
etc.) promise to delay for AT LEAST the length of time you specify.
There is no upper limit on the length of the delay, since it's subject
to scheduling, the resolution of the clock, and whatever else is going
on in the system.

It does not make sense to change the tick interval to a higher
resolution (shorter tick period), because the OS would then spend
significantly more time servicing timer interrupts that do no useful
work.
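
For reference, the tick period is the CONFIG_USEC_PER_TICK setting in
the board configuration. A hypothetical defconfig fragment shortening
it to 1 ms, which would make the timer interrupt fire ten times as
often, would just be:

  CONFIG_USEC_PER_TICK=1000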

Also, it does not make sense to use these functions for such short
delays in the nanosecond range. Just the processing overhead of
calling one of those functions is much more than 10 to 100 ns.

When I need such a short delay (e.g., a delay between asserting the
Chip Select of a SPI peripheral and the actual start of communication),
I measure how long a NOP instruction takes on the microcontroller in
question and insert that many NOPs. If the delay would require a lot of
NOPs, you can use a for-loop with a volatile loop counter so that the
compiler doesn't just optimize the loop away. Note that if a task
switch or interrupt occurs during that time, the delay will likely be
much longer than you intend. If the timing is critical, a simple hack
is to wrap the delay in a critical section, but I would treat that as a
last resort; first, I would look into doing whatever needs that fine a
delay (e.g., waveform shaping) in hardware instead.
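
Here's a minimal sketch of that approach in C. The NOPS_FOR_DELAY value
is purely hypothetical and has to be calibrated (e.g., with an
oscilloscope or a hardware timer) for your particular MCU and clock
speed:

  #define NOPS_FOR_DELAY 12  /* hypothetical: calibrate for your MCU and clock */

  static inline void short_delay(void)
  {
    /* volatile keeps the compiler from optimizing the loop away */
    volatile int i;
    for (i = 0; i < NOPS_FOR_DELAY; i++)
      {
        __asm__ __volatile__("nop");
      }
  }

Keep in mind this is a busy-wait: it burns CPU for the whole delay and
is still subject to interrupts unless you wrap it in a critical section
as mentioned above.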

These solutions are obviously very closely tied to the specific
microcontroller and its clock speed, so they're very non-portable. If
someone has a better suggestion, I'd love to learn about it!

Nathan
