In this context, the term "real time" is somewhat misleading. It refers to the 
perception of a real-time response from the machine when a user provides input. 
Whenever you provide input (mouse, keyboard, etc.), it triggers an interrupt 
that puts whatever is currently being processed on hold so the system can 
respond to the user. 

This is where "PREEMPT" comes into play: user activity interrupts whatever is 
currently being worked on. The PREEMPT model determines exactly how this 
preemption occurs and what impact it has on other processes on the system. All 
recent kernel versions are preemptible to some extent. Windows is also 
aggressively preemptive, and not in a good way ;)
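
If you're curious which model your own kernel was built with, the build string 
returned by uname(2) usually says so (it's the same text "uname -v" prints). 
Here's a minimal, untested C sketch, assuming the string mentions PREEMPT the 
way most distro kernels do:

/* Sketch: report which preemption model the running kernel advertises.
 * Assumption: the build string from uname(2) mentions PREEMPT /
 * PREEMPT_DYNAMIC / PREEMPT_RT, which is typical but not guaranteed. */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    printf("kernel : %s %s\n", u.sysname, u.release);
    printf("build  : %s\n", u.version);

    /* Check the most specific names first, since they all contain "PREEMPT". */
    if (strstr(u.version, "PREEMPT_RT"))
        puts("model  : fully preemptible (PREEMPT_RT)");
    else if (strstr(u.version, "PREEMPT_DYNAMIC"))
        puts("model  : preemption selectable at boot (PREEMPT_DYNAMIC)");
    else if (strstr(u.version, "PREEMPT"))
        puts("model  : preemptible kernel (PREEMPT)");
    else
        puts("model  : not advertised in the version string");
    return 0;
}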


You might want to take a look at the CONFIG_HZ option, as it sets the 
frequency of the kernel's periodic timer interrupt (the scheduler tick). On a 
desktop system it is typically 1000 Hz; for servers a lower value such as 
100 Hz is recommended. Quoting the help text from make menuconfig:
"
  │ Allows the configuration of the timer frequency. It is customary            
                                │
  │ to have the timer interrupt run at 1000 Hz but 100 Hz may be more           
                                │
  │ beneficial for servers and NUMA systems that do not need to have            
                                │
  │ a fast response for user interaction and that may experience bus            
                                │
  │ contention and cacheline bounces as a result of timer interrupts.           
                                │
  │ Note that the timer interrupt occurs on each processor in an SMP            
                                │
  │ environment leading to NR_CPUS * HZ number of timer interrupts              
                                │
  │ per second.
"

The speed at which data is received on an input, processed, and sent to an 
output is a different metric entirely, and it will vary depending on which 
subsystems are involved. 
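
If you want actual latency numbers rather than hand-waving, cyclictest from 
the rt-tests package is the standard tool. Just to show the kind of thing it 
measures, here is a very rough, untested sketch: sleep until an absolute 1 ms 
deadline with clock_nanosleep() and record how late each wakeup actually was. 
The 1 ms period, priority 80 and loop count are numbers I made up for the 
example, not a benchmark methodology.

/* Sketch of a wakeup-latency measurement along the lines of cyclictest.
 * Period, priority and loop count are arbitrary illustrative values. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000LL
#define INTERVAL_NS  1000000LL   /* 1 ms period */
#define LOOPS        1000

static long long ts_ns(const struct timespec *t)
{
    return (long long)t->tv_sec * NSEC_PER_SEC + t->tv_nsec;
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec next, now;
    long long worst = 0, sum = 0;
    int i;

    /* Real-time priority needs root or CAP_SYS_NICE; ignore failure and
     * fall back to normal scheduling. */
    sched_setscheduler(0, SCHED_FIFO, &sp);

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (i = 0; i < LOOPS; i++) {
        /* Advance the absolute deadline by one period. */
        next.tv_nsec += INTERVAL_NS;
        while (next.tv_nsec >= NSEC_PER_SEC) {
            next.tv_nsec -= NSEC_PER_SEC;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        long long late = ts_ns(&now) - ts_ns(&next);  /* wakeup latency */
        if (late > worst)
            worst = late;
        sum += late;
    }

    printf("avg wakeup latency: %lld ns, worst: %lld ns\n",
           sum / LOOPS, worst);
    return 0;
}
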
-Ben

On Tuesday, April 30th, 2024 at 9:01 PM, Keith Lofstrom <kei...@keithl.com> 
wrote:

> "Regular" Linux is designed for a useful user experience.
> There are two (or more?) "Real Time Linux" distros that
> are designed to control hardware: RTLinux, and Real-Time
> Linux with PREEMPT_RT patches added to the kernel.
> I have not found details of benchmarks for these.
> How "real" is real? Millisecond response? Microsecond?
> Less?
> 
> I can imagine designing a PCIe card for network test;
> full throttle rapid packet speed test, but also (with
> some "simple" analog circuitry controlled by "real time"
> software, cable testing with nanosecond TDR (time delay
> reflectometry), using a technique resembling "count the
> falling dominos by the loudness of the clatter they make".
> That would require microsecond response in a tight realtime
> loop, controlling analog circuitry that converts
> microsecond intervals into brief nanosecond intervals.
> 
> ( Also using an analog circuit technique called
> "dual slope", which we need not dwell on here )
> 
> A Cat 6 cable is 5 nanoseconds per meter, 10 ns down and
> back. Some degradations (such as RJ45 connectors) are
> smaller, but can add up and limit bandwidth. The right
> hardware and software might help network engineers observe
> and understand these fast phenomena, and cure some rather
> subtle gigabit-rate signalling and cabling problems.
> 
> Keith L.
> --
> Keith Lofstrom kei...@keithl.com
