NAME

pi_stress(8) - a stress test for POSIX Priority Inheritance mutexes

SYNOPSIS

pi_stress [-i|--inversions inversions] [-t|--duration seconds] [-g|--groups groups] [-d|--debug] [-v|--verbose] [-s|--signal] [-r|--rr] [-p|--prompt] [-m|--mlockall] [-u|--uniprocessor]
pi_stress -h|--help

DESCRIPTION

pi_stress is a program used to stress the priority-inheritance code paths for POSIX mutexes, in both the Linux kernel and the C library. It runs as a realtime-priority task and launches inversion machine thread groups. Each inversion group causes a priority inversion condition that will deadlock if priority inheritance doesn't work.
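
The mutex protocol being exercised is PTHREAD_PRIO_INHERIT. As a rough sketch (not code taken from pi_stress itself, and with error checking omitted), a priority-inheritance mutex is requested through the standard POSIX API as follows:

    #include <pthread.h>

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        /* Request priority inheritance: a low-priority thread holding the
           mutex is boosted to the priority of the highest-priority waiter. */
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }

Build with a pthread-enabled toolchain, for example: cc -o pi_demo pi_demo.c -lpthread (pi_demo is a hypothetical file name).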

OPTIONS

-i n|--inversions=n
Run n inversion conditions. This is the total number of inversions across all inversion groups. The default is -1, which means run indefinitely.
-t n|--duration=n
Run the test for n seconds and then terminate.
-g n|--groups=n
The number of inversion groups to run. Defaults to 10.
-d|--debug
Run in debug mode, which produces many extra diagnostic messages.
-v|--verbose
Run with verbose messages.
-s|--signal
Terminate on receipt of SIGTERM or SIGINT (Ctrl-C). The default is to terminate on any keypress.
-r|--rr
Run inversion group threads as SCHED_RR (round-robin). The default is to run the inversion threads as SCHED_FIFO.
-p|--prompt
Prompt before actually starting the stress test.
-u|--uniprocessor
Run all threads on one processor. The default is to run all inversion group threads on one processor and the admin threads (reporting thread, keyboard reader, etc.) on a different processor.
-m|--mlockall
Call mlockall(2) to lock current and future memory allocations and prevent them from being paged out.
-h|--help
Display a short help message and options.
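
EXAMPLES

For example, the following invocation runs 20 inversion groups for 600 seconds, locks memory with mlockall, and schedules the inversion threads as SCHED_RR:

pi_stress --duration=600 --groups=20 --rr --mlockall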

CAVEATS

The pi_stress test threads run with the SCHED_FIFO or SCHED_RR scheduling policy, which means they can starve critical system threads. Prior to running pi_stress, it is advisable to change the scheduling policy of critical system threads to SCHED_FIFO with a priority of 10 or higher, so that those threads are not starved by the stress test.
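
For example, the chrt(1) utility can change the policy and priority of an already-running thread; here <pid> is a placeholder for the ID of the critical system thread:

chrt -f -p 10 <pid>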

BUGS

No documented bugs.

AUTHOR

Clark Williams <[email protected]>