Description of the different events generated by the ZFS stack.
The events generated by ZFS have never been publicly documented, so most of them still lack a description. What is here is intended as a starting point toward documenting all possible events.
To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run
zpool events
to get a short list, and
zpool events -v
to get full details of the events and what information is available about them.
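For example, the event log can also be followed or cleared with the standard zpool events flags:

# Follow new events as they are generated, similar to tail -f:
zpool events -f

# Discard all events recorded so far:
zpool events -c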
This man page lists the different subclasses that are issued in the case of an event. The full event name would be ereport.fs.zfs.SUBCLASS, but only the last part is listed here; for example, the checksum subclass corresponds to the full event name ereport.fs.zfs.checksum.
- checksum: Issued when a checksum error has been detected.
- io: Issued when there is an I/O error in a vdev in the pool.
- data: Issued when there have been data errors in the pool.
- delay: Issued when an I/O was slow to complete, as defined by the zio_delay_max module option.
- config.sync: Issued every time a vdev change has been done to the pool.
- zpool: Issued when a pool cannot be imported.
- zpool.destroy: Issued when a pool is destroyed.
- zpool.export: Issued when a pool is exported.
- zpool.import: Issued when a pool is imported.
- zpool.reguid: Issued when a REGUID (a new unique identifier for the pool) has been generated.
- vdev.unknown: Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
- vdev.open_failed: Issued when a vdev could not be opened (because it didn't exist, for example).
- vdev.corrupt_data: Issued when corrupt data has been detected on a vdev.
- vdev.no_replicas: Issued when there are no more replicas to sustain the pool. This would lead to the pool being DEGRADED.
- vdev.bad_guid_sum: Issued when a missing device in the pool has been detected.
- vdev.too_small: Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
- vdev.bad_label: Issued when the label is OK but invalid.
- vdev.bad_ashift: Issued when the ashift alignment requirement has increased.
- vdev.remove: Issued when a vdev is detached from a mirror (or when a spare is detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
- vdev.clear: Issued when clearing device errors in a pool, such as running zpool clear on a device in the pool.
- vdev.check: Issued when a check to see whether a given vdev could be opened is started.
- vdev.spare: Issued when a spare has kicked in to replace a failed device.
- vdev.autoexpand: Issued when a vdev can be automatically expanded.
- io_failure: Issued when there is an I/O failure in a vdev in the pool.
- probe_failure: Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as the kernel having removed the device).
- log_replay: Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
- resilver.start: Issued when a resilver is started.
- resilver.finish: Issued when the running resilver has finished.
- scrub.start: Issued when a scrub is started on a pool.
- scrub.finish: Issued when a pool has finished scrubbing.
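As an illustrative sketch, a shell pipeline can watch the event stream for one of these subclasses; note that the class strings actually printed by zpool events may use underscores rather than the dotted names above (e.g. sysevent.fs.zfs.scrub_finish):

# Follow events in scripted mode (-H) and report finished scrubs.
zpool events -H -f | awk '/scrub_finish/ { print "scrub finished:", $0 }'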
This is the payload (data, information) that accompanies an event.
For zed(8), these entries are set to uppercase and prefixed with ZEVENT_; a minimal zedlet sketch follows this list.
- pool: Pool name.
- pool_failmode: Failmode - wait, continue, or panic. See the failmode property in zpool(8) for more information.
- pool_guid: The GUID of the pool.
- pool_context: The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
- vdev_guid: The GUID of the vdev in question (the vdev failing or operated upon with zpool clear, etc.).
- vdev_type: Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpool(8) for more information on possible values.
- vdev_path: Full path of the vdev, including any -partX.
- vdev_devid: ID of the vdev (if any).
- vdev_fru: Physical FRU location.
- vdev_state: State of the vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
- vdev_ashift: The ashift value of the vdev.
- vdev_complete_ts: The time the last I/O completed for the specified vdev.
- vdev_delta_ts: The time since the last I/O completed for the specified vdev.
- vdev_spare_paths: List of spares, including full path and any -partX.
- vdev_spare_guids: GUID(s) of spares.
- vdev_read_errors: How many read errors have been detected on the vdev.
- vdev_write_errors: How many write errors have been detected on the vdev.
- vdev_cksum_errors: How many checksum errors have been detected on the vdev.
- parent_guid: GUID of the vdev parent.
- parent_type: Type of the parent. See vdev_type.
- parent_path: Path of the vdev parent (if any).
- parent_devid: ID of the vdev parent (if any).
- zio_objset: The object set number for a given I/O.
- zio_object: The object number for a given I/O.
- zio_level: The block level for a given I/O.
- zio_blkid: The block ID for a given I/O.
- zio_err: The errno for a failure when handling a given I/O.
- zio_offset: The offset in bytes of where to write the I/O for the specified vdev.
- zio_size: The size in bytes of the I/O.
- zio_flags: The current flags describing how the I/O should be handled. See the I/O FLAGS section for the full list of I/O flags.
- zio_stage: The current stage of the I/O in the pipeline. See the I/O STAGES section for a full list of all the I/O stages.
- zio_pipeline: The valid pipeline stages for the I/O. See the I/O STAGES section for a full list of all the I/O stages.
- zio_delay: The time in ticks (HZ) required for the block layer to service the I/O. Unlike zio_delta, this does not include any vdev queuing time and is therefore solely a measure of block layer performance. On most modern Linux systems HZ is defined as 1000, making a tick equivalent to 1 millisecond; a zio_delay of 500 would then mean the block layer took roughly half a second to service the I/O.
- zio_timestamp: The time when a given I/O was submitted.
- zio_delta: The time required to service a given I/O.
- prev_state: The previous state of the vdev.
- cksum_expected: The expected checksum value.
- cksum_actual: The actual/current checksum value.
- cksum_algorithm: The checksum algorithm used. See zfs(8) for more information on the available checksum algorithms.
- cksum_byteswap: Whether the checksum value is byte-swapped.
- bad_ranges: Checksum bad offset ranges.
- bad_ranges_min_gap: Checksum allowed minimum gap.
- bad_range_sets: For each checksum range, the number of bits set.
- bad_range_clears: For each checksum range, the number of bits cleared.
- bad_set_bits: Checksum array of bits set.
- bad_cleared_bits: Checksum array of bits cleared.
- bad_set_histogram: Checksum histogram of set bits by bit number in a 64-bit word.
- bad_cleared_histogram: Checksum histogram of cleared bits by bit number in a 64-bit word.
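To make the zed(8) naming rule concrete, here is a minimal, hypothetical zedlet sketch. The file name and log tag are assumptions for illustration; the variables follow directly from the payload names above plus the subclass of the event:

#!/bin/sh
# Hypothetical zedlet, installed e.g. as /etc/zfs/zed.d/checksum-log.sh.
# zed(8) exports each payload entry uppercased and prefixed with ZEVENT_,
# so pool, vdev_path and vdev_guid become the variables used below.
logger -t zed-example \
    "subclass=${ZEVENT_SUBCLASS} pool=${ZEVENT_POOL:-none}" \
    "vdev=${ZEVENT_VDEV_PATH:-none} guid=${ZEVENT_VDEV_GUID:-none}"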
The ZFS I/O pipeline is composed of various stages, which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.
Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.
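As a sketch of how to inspect these fields, the stage, pipeline, and flag entries can be filtered out of the verbose event listing (the exact layout of zpool events -v output may vary between versions):

# Show only the pipeline-related payload entries of recorded events.
zpool events -v | grep -E 'zio_(stage|pipeline|flags)'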