HOWTO: Build an RT-application

<p>This document describes how to write hard real-time Linux programs using the real-time Preemption Patch.
It also describes the pitfalls that destroy real-time responsiveness. It focuses on x86 and ARM, although the concepts are also valid on other architectures, as long as Glibc is used. (Some fundamental parts are lacking in uClibc, for example PI-mutex support and control over malloc/new behaviour, so uClibc is not recommended.)</p>
 
==Latencies==

===Hardware causes of ISR latency===
  
Good real-time behaviour of a system depends a lot on low-latency interrupt handling.
A look at the x86 platform shows that it is not optimised for RT usage. Several mechanisms cause ISR latencies that can run into the tens or hundreds of microseconds. Knowing them will enable you to make the best design choices on this platform and work around their negative impact.
; System Management Interrupt (SMI) on Intel x86 ICH chipsets: System Management Interrupts are generated by the power management hardware on the board. SMIs are evil if real-time is required. First, they can last for hundreds of microseconds, which for many RT applications causes unacceptable jitter. Second, they are the highest-priority interrupt in the system (even higher than the NMI). Third, you can't intercept the SMI, because it doesn't have a vector in the CPU. Instead, when the CPU gets an SMI it goes into a special mode and jumps to a hard-wired location in a special SMM address space (which is probably in BIOS ROM). Essentially, SMI interrupts are "invisible" to the Operating System. Although an SMI is handled by one processor at a time, it even affects real-time responsiveness on dual-core/SMP systems: if the processor handling the SMI has locked a mutex or spinlock that is needed by some other core, that other core has to wait until the SMI handler has completed and the mutex/spinlock has been released. This problem also exists on RTAI and other OSes; for more info see [http://cvs.gna.org/cvsweb/magma/base/arch/i386/calibration/README.SMI?cvsroot=rtai;rev=1.1]
; DMA bus mastering: Bus mastering events can cause long-latency CPU stalls of many microseconds. They can be generated by every device that uses DMA, such as SATA/PATA/SCSI devices and even network adapters. Video cards that insert wait cycles on the bus in response to a CPU access can also cause this kind of latency. Sometimes the behaviour of such peripherals can be controlled from the driver, trading off throughput for lower latency. The negative impact of bus mastering is independent of the chosen OS, so this is not a problem unique to Linux-RT; other RTOSes experience this type of latency as well!
; On-demand CPU scaling: Creates long-latency events when the CPU is put in a low-power-consumption state after a period of inactivity. Such problems are usually quite easy to detect. (e.g. on Fedora the 'cpuspeed' tool should be disabled, as this tool loads the on-demand scaling_governor driver)
; VGA Console: While the system is fulfilling its RT requirements the VGA text console must be left untouched. Nothing is allowed to be written to that console; even printk's are not allowed. The VGA text console causes very large latencies, up to hundreds of microseconds. It is better to use a serial console and have no login shell on the VGA text console; SSH or Telnet sessions can also be used. The 'quiet' option on the kernel command line can also be useful to prevent any printk from reaching the console. Notice that using a graphical UI of X has no RT impact; it is just the VGA text console that causes latencies.
  
 
====Hints for getting rid of SMI interrupts on x86====
 
    1) Use PS/2 mouse and keyboard,
    2) Disable USB mouse and keyboard in BIOS,
    3) Compile an ACPI-enabled Kernel,
    4) Disable TCO timer generation of SMIs (TCO_EN bit in the SMI_EN register).

The latency should drop to ~10us permanently, at the expense of not being able to use the i8xx_tco watchdog.
<BR>One user of RTAI reported: in all cases, do not boot the computer with a USB flash stick plugged in. The latency will rise to 500us if you do so. Connecting and using the USB stick later does no harm, however.
  
 
{{WARN|Do not ever disable the SMI interrupts globally. Disabling SMI may cause serious harm to your computer. On P4 systems you can '''burn your CPU to death''' when SMI is disabled. SMIs are also used to fix up chip bugs, so certain components may not work as expected when SMI is disabled. So, be very sure you '''know what you are doing''' before disabling any SMI interrupt.}}
  
 
===Latencies caused by Page-faults===
Whenever the RT process runs into a page-fault, the kernel freezes the entire process (with all its threads) until it has handled the page-fault. There are two types of page-faults: major and minor. Minor page-faults are handled without IO accesses; major page-faults require IO activity. The Linux page-swapping mechanism can swap code pages of an application to disk, and it takes a long time to swap those pages back into RAM. If such a page belongs to the realtime process, latencies increase hugely. Page-faults are therefore dangerous for RT applications and need to be prevented.

If no Swap space is used and no other applications stress the memory boundaries, there is probably enough free RAM for the RT application. In this case the RT-application will likely only run into minor page-faults, which cause relatively small latencies. But if the RT application is just one of many applications on the system, and Swap space is in use, then special actions have to be taken to protect the memory of the RT-application. If memory has to be retrieved from disk, or pushed towards the disk, to handle a page-fault, the RT-application will experience very large latencies, sometimes more than a second! Notice that page-faults of one application cannot interfere with the RT-behaviour of another application.

During startup a RT-application will always experience a lot of page-faults. These cannot be prevented. In fact, this startup period must be used to claim and lock enough memory for the RT-process in RAM, in such a way that page-faults no longer occur by the time the application needs to expose its RT capabilities.
  
 
This can be done by taking care of the following during the initial startup phase:
 
* Call mlockall() directly from the main() entry point.
* Create all threads at startup time of the application, and touch each page of the entire stack of each thread, OR call mlockall() ''after'' all threads have been created and verified running. Never start threads dynamically during RT show time; this will ruin RT behaviour.
* Reserve a pool of memory for new/delete or malloc/free, if you require dynamic memory allocation.
* Never use system calls that are known to generate page-faults, such as fopen() or calls that allocate memory inside the kernel. (Opening a file does an mmap() system call, which generates a page-fault.)
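
The following is a minimal sketch of such a startup phase for a single-threaded application (the prefault size is an assumption; derive it from your application's real worst-case stack usage):

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <sys/mman.h>
 
 #define PREFAULT_STACK_SIZE (512 * 1024)  /* assumed worst-case stack usage */
 
 static void prefault_stack(void)
 {
     unsigned char dummy[PREFAULT_STACK_SIZE];
 
     /* Touch every page of the stack once, so it is mapped now and,
      * thanks to mlockall(), stays locked in RAM from here on. */
     memset(dummy, 0, sizeof(dummy));
 }
 
 int main(void)
 {
     /* Lock all current and future memory of this process into RAM. */
     if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
         perror("mlockall");
         return EXIT_FAILURE;
     }
     prefault_stack();
 
     /* ... create all RT threads and memory pools here, then enter RT operation ... */
     return EXIT_SUCCESS;
 }
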
There are several examples that show the different aspects of preventing page-faults. It depends on your requirements which one suits your purpose best:
* [[Simple memory locking example]]: Single-threaded application doing a malloc() and making it safe to use.
* [[Dynamic memory allocation example]]: Same as the [[Simple memory locking example]], except that it creates a pool of memory to be used for dynamic memory allocation.
* [[Threaded RT-application with memory locking and stack handling example]]: Same as the [[Dynamic memory allocation example]], but with thread support.

====Global variables and arrays====
Global variables and arrays are not part of the binary, but are allocated by the OS at process startup. The virtual memory pages associated with this data are not immediately mapped to physical pages of RAM, meaning that page-faults occur on access. It turns out that the mlockall() call forces all global variables and arrays into RAM, so subsequent access to this memory does not result in page-faults. As such, using global variables and arrays does not introduce any additional problems for real-time applications. You can verify this behaviour using the following program (run as 'root' to allow the mlockall() operation):
<BR>
[[Verifying the absence of page faults in global arrays proof]]
 
====How to use dynamic memory allocation====
In the [[Simple memory locking example]] it is explained that all memory must be allocated and locked in RAM at startup time, for the entire lifetime of the RT-application, before the RT-application starts to fulfil its RT requirements. If memory is allocated later on, this will normally result in page-faults and thus ruin the RT behaviour of the application.
<BR>
Q: So, we cannot run C++ applications with dynamic memory allocation?
<BR>
A: Wrong! Dynamic memory allocation is possible, if:
* allocated memory, once committed and locked in RAM, is '''never''' given back to the kernel.
<BR>
Q: How can this be achieved?
<BR>
A: All memory allocation routines are implemented inside Glibc. Glibc translates each memory allocation request into a call to either:
 
* mmap(): maps a certain amount of memory into the virtual memory space of the process; Glibc uses it for larger allocations (see M_MMAP_THRESHOLD below), or
 
* sbrk(): sbrk increases (or decreases) the memory block assigned to the process by a given size.
 
Glibc offers interfaces that can be used to configure its behaviour related to these calls.
<BR>
Glibc can be configured as to how much memory must be released before it calls sbrk() to give memory back to the kernel, and as to when mmap() is used instead of sbrk().<BR>
What we need to do is get rid of the mmap() calls, and configure Glibc to never give memory back to the kernel until the process terminates (of course).
<BR>
We use the (badly documented) call int mallopt(int param, int value) for this; it is defined in malloc.h.
 
When calling mallopt, the param argument specifies the parameter to be set, and value the new value to be set. Possible choices for param, as defined in malloc.h, are:
 
* M_TRIM_THRESHOLD: This is the minimum size (in bytes) of the top-most, releasable chunk that will cause sbrk() to be called with a negative argument in order to return memory to the system.
 
* M_TOP_PAD: This parameter determines the amount of extra memory to obtain from the system when a call to sbrk() is required. It also specifies the number of bytes to retain when shrinking the heap by calling sbrk() with a negative argument. This provides the necessary hysteresis in heap size such that excessive amounts of system calls can be avoided.
 
* M_MMAP_THRESHOLD: All chunks larger than this value are allocated outside the normal heap, using the mmap system call. This way it is guaranteed that the memory for these chunks can be returned to the system on free.
 
* M_MMAP_MAX: The maximum number of chunks to allocate with mmap. Setting this to zero disables all use of mmap.
 
<BR>
 
More background information on how to use this mallopt() call can be found in this paper:<BR>
 
http://www.usenix.org/publications/library/proceedings/als01/full_papers/ezolt/ezolt.ps
 
<BR>
 
<BR>
 
The [[Advanced memory locking example]] shows how we can create a pool of memory during startup and lock it into RAM.

At startup a block of memory is allocated through the malloc() call. Prior to this, Glibc is configured such that it uses the sbrk() call to fulfil this allocation. After locking the block, we can free() it, knowing that it is not released to the kernel and is still assigned to our RT-process.<BR>

We have now created a pool of memory that Glibc will use for dynamic memory allocation. We can new/delete as much as we want without running into any page-fault! Even if the system is fully stressed and swapping is continuously active, the RT-application will never run into any page-fault...
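
A minimal sketch of this configure-allocate-lock-free sequence (the pool size is an assumption; size it to your application's worst-case dynamic memory usage):

 #include <malloc.h>
 #include <stdlib.h>
 #include <string.h>
 
 #define POOL_SIZE (8 * 1024 * 1024)  /* assumed worst-case dynamic memory usage */
 
 static void setup_rt_memory_pool(void)
 {
     char *pool;
 
     /* Never give heap memory back to the kernel via sbrk() with a negative argument... */
     mallopt(M_TRIM_THRESHOLD, -1);
     /* ...and never allocate through mmap(), so all allocations stay on the heap. */
     mallopt(M_MMAP_MAX, 0);
 
     /* Grow the heap once and touch every page so it is committed
      * (mlockall(MCL_CURRENT | MCL_FUTURE) should already be in effect). */
     pool = malloc(POOL_SIZE);
     memset(pool, 0, POOL_SIZE);
 
     /* Glibc keeps the freed block assigned to the process, so later
      * malloc/new requests are served from this locked pool without page-faults. */
     free(pool);
 }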
 
 
Another possibility is to use a separate malloc implementation like the [[O(1) Memory Allocator]], together with a preallocated and locked buffer (as in the [[Simple memory locking example]]) that is used as the memory pool for the custom allocator. In that case all new, delete, malloc and free operations have to be redirected to this custom allocator.
 
 
====How to deal with threads====
 
When a new thread is created, new memory is allocated for its stack and for the thread administration.
 
These allocations will result in new page faults. Therefore all threads need to be created at startup time, ''before'' RT show time.
 
<BR>
 
After a thread is created, all stack pages of that thread need to be forced into RAM to prevent page-faults when they are accessed for the first time.
 
The entire stack of every thread inside the application is forced into RAM when mlockall(MCL_CURRENT) is called. Threads started after a call to mlockall(MCL_CURRENT | MCL_FUTURE) generate their page-faults immediately at creation time, since the new stack is immediately forced into RAM (due to the MCL_FUTURE flag). See [[Verifying mlockall() effects on stack memory proof]] for a piece of code that verifies this behaviour.
 
<BR>
 
Threads are created with a default stack size of 8MB. Forcing 8MB into RAM per thread is overkill for most applications, and if we leave the stack size at the default 8MB we will probably run out of memory in no time. So, we need to figure out the maximum stack space a certain thread uses, and then create that thread with the amount of stack space it requires. You may add a little bit more, but surely nothing less.
 
<BR><BR>
 
Here is an example that shows how this can be done: [[Threaded RT-application with memory locking and stack handling example]]
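
In addition to the linked example, here is a minimal sketch of creating a thread with a bounded stack (the stack size is an assumption; use your measured worst case plus a margin):

 #include <limits.h>
 #include <pthread.h>
 
 #define MY_STACK_SIZE (64 * 1024)  /* assumed worst-case stack usage plus margin */
 
 static void *rt_thread_fn(void *arg)
 {
     /* ... periodic RT work ... */
     return NULL;
 }
 
 static int start_rt_thread(pthread_t *tid)
 {
     pthread_attr_t attr;
     int ret;
 
     pthread_attr_init(&attr);
     /* Bound the stack, so forcing it into RAM stays affordable. */
     pthread_attr_setstacksize(&attr, PTHREAD_STACK_MIN + MY_STACK_SIZE);
     ret = pthread_create(tid, &attr, rt_thread_fn, NULL);
     pthread_attr_destroy(&attr);
     return ret;
 }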
 
 
====File handling====
 
File handling is known to generate disastrous page-faults. So, if there is a need for file access from the context of the RT-application, this is best done by splitting the application into an RT part and a file-handling part. Both parts can communicate through sockets; I have never seen a page-fault caused by socket traffic.

Note: while accessing files, the low-level fopen() call does an mmap() to allocate new memory for the process, resulting in a new page-fault.
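
A minimal sketch of wiring the two parts together with a socketpair (the fork()-based split is illustrative):

 #include <sys/socket.h>
 #include <sys/types.h>
 #include <unistd.h>
 
 /* Connect the RT part and the file-handling part of the application.
  * fds[0] is used by the RT threads, fds[1] by the non-RT file handler. */
 static int make_rt_channel(int fds[2])
 {
     if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
         return -1;
 
     if (fork() == 0) {
         /* Non-RT child: does all fopen()/read()/write() on files and
          * exchanges the data with the RT parent over fds[1]. */
         close(fds[0]);
         /* ... file-handling loop ... */
         _exit(0);
     }
     close(fds[1]);
     return 0; /* RT parent continues, using fds[0] only */
 }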
 
  
===[[Priority Inheritance]] Mutex support===
A real-time system '''cannot''' be real-time if there is no solution for [[priority inversion]]; it will cause undesired latencies and even deadlocks. (see [http://en.wikipedia.org/wiki/Priority_inversion])
<BR>On Linux, luckily, there is a solution for it in user-land since kernel version 2.6.18 together with Glibc 2.5 (PTHREAD_PRIO_INHERIT).
<BR>So, if user-land real-time is important, I highly encourage you to use a recent kernel and Glibc. Other C libraries like uClibc do not support PI-futexes at this moment, and are therefore less suitable for realtime!
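
A minimal sketch of initialising such a priority-inheritance mutex with the Pthreads API:

 #include <pthread.h>
 
 static pthread_mutex_t rt_lock;
 
 /* Initialise a mutex with the priority-inheritance protocol
  * (requires at least kernel 2.6.18 and Glibc 2.5, as noted above). */
 static int init_pi_mutex(void)
 {
     pthread_mutexattr_t attr;
     int ret;
 
     pthread_mutexattr_init(&attr);
     pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
     ret = pthread_mutex_init(&rt_lock, &attr);
     pthread_mutexattr_destroy(&attr);
     return ret;
 }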
  
Errata for ARM:
<BR>On ARM the slow-path for PI-futexes was first integrated in the RT-patch 2.6.23-rc4-rt1. The patch is, however, easily back-portable to older kernels (>= 2.6.18) without breaking things. (Just check the file 'include/asm/futex.h' in the kernel code.)
The futex slowpath on ARM requires the memory locking scheme described above: the futex administration is never allowed to be paged out to disk, because the futex-administration memory is accessed with interrupts disabled. This is necessary because the ARM9 v4 and v5 cores do not have the test-and-set atomic instructions required to do it more elegantly.
This erratum is not relevant to x86, because x86 supports the required atomic instructions to do it properly without interrupt locking.

===The impact of the [[Big Kernel Lock]]===
The Big Kernel Lock (BKL) is preemptible on Preempt-RT: in -rt it is backed by a mutex (rtmutex) instead of a regular spinlock. The BKL is a special-case lock that is released at schedule() and reacquired when the thread is woken up. It is a coarse-grained lock used to protect the kernel in places that are not thread safe. It has special rules regarding its use, and was designed to handle the cases where an IO call blocks on a wait queue, versus blocking as a result of contention on a sleepable semaphore.

Significant parts of the kernel still use the BKL, notably the POSIX flock code, as well as other places. If an RT-thread uses a system call that takes the BKL, it can experience unbounded latencies while the BKL is held by another thread. Any call into the kernel from a real-time capable thread (SCHED_FIFO) must take this into account, otherwise priority inversions can take place. Just about every system call in the Linux kernel acquires a lock of some sort and can result in difficult-to-predict latencies, especially because of the wide use of the BKL in non-thread-safe places in the kernel. So, one must know which system calls use the BKL, and prevent an RT-thread from using these calls, to minimise the latencies.

One problematic place is the ioctl() handler in the device driver layer. It normally acquires the BKL on syscall entry and releases it when returning to userspace. However, there is a non-BKL-acquiring variant of this handler that can be used instead, provided that the handler function is MP/thread safe:
  
 
     static struct file_operations my_fops = {
         .ioctl          = my_ioctl, /* This line makes my ioctl() the BKL-locked variant. */
         .unlocked_ioctl = my_ioctl, /* This version does not use the BKL. (Notice that it requires a slightly different ioctl() argument list.) */
     };
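
For reference, the two handler variants have different signatures in 2.6-era kernels; a sketch with illustrative names:

 #include <linux/fs.h>
 
 /* BKL-locked variant (.ioctl): called with the BKL already held. */
 static int my_ioctl(struct inode *inode, struct file *filp,
                     unsigned int cmd, unsigned long arg)
 {
     return 0;
 }
 
 /* BKL-free variant (.unlocked_ioctl): note the missing inode argument and
  * the long return type; the handler must take care of its own locking. */
 static long my_unlocked_ioctl(struct file *filp,
                               unsigned int cmd, unsigned long arg)
 {
     return 0;
 }
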
==Building Device Drivers==

===Interrupt Handling===
The RT-kernel runs all interrupt handlers in thread context. However, the real hardware interrupt context is still available. This context can be recognised by the IRQF_NODELAY flag that is assigned to a certain interrupt handler during request_irq() or setup_irq(). Within this context only a much more limited kernel API is allowed to be used.

====Things you should not do in IRQF_NODELAY context====
* Calling any kernel API that uses normal spinlocks. Spinlocks are converted to mutexes on RT, and mutexes can sleep by nature. (Note: the atomic_spinlock_t types behave the same as on a non-RT kernel.) Some kernel APIs that can block on a spinlock/RT-mutex (a registration sketch follows below this list):
** wake_up() shall not be used; use wake_up_process() instead.
** up() shall not be used in this context; this is valid for all semaphore types, thus both ''struct compat_semaphore'' and ''struct semaphore''. (Of course the same is valid for down()...)
** complete(): also uses a normal spinlock, which is defined in 'struct __wait_queue_head' in wait.h, thus not safe.
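
A minimal sketch of registering such a handler, assuming a Preempt-RT kernel that provides the IRQF_NODELAY flag (device name and handler are illustrative):

 #include <linux/interrupt.h>
 
 /* Runs in real hardware interrupt context on Preempt-RT, so only the
  * restricted, non-sleeping API may be used inside. */
 static irqreturn_t my_handler(int irq, void *dev_id)
 {
     /* ... acknowledge the hardware, defer the real work elsewhere ... */
     return IRQ_HANDLED;
 }
 
 static int install_nodelay_handler(unsigned int irq, void *dev)
 {
     return request_irq(irq, my_handler, IRQF_NODELAY, "my_device", dev);
 }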
  
 
==Author/Maintainer==
Remy Bohmer

==Revision==
{| border="1" width="100%" summary="Revision history" class="prettytable"
! align="left" valign="top" colspan="2" | <b>Revision History</b>
|-
| align="left" | Revision 8
| align="left" | 2009-11-15
|}
