pymvpa2-mkevds(1) - extract (multi-sample) events from a dataset

SYNOPSIS

pymvpa2 mkevds [--version] [-h] -i DATASET [DATASET ...] [--event-attrs ATTR [ATTR ...] | --onsets [TIME [TIME ...]] | --csv-events FILENAME | --fsl-ev3 FILENAME [FILENAME ...]] [--time-attr ATTR] [--onset-column ATTR] [--offset VALUE] [--duration VALUE] [--match-strategy {prev,next,closest}] [--event-compression {mean,median,min,max}] [--add-sa VALUE [VALUE ...]] [--add-fa VALUE [VALUE ...]] [--add-sa-txt VALUE [VALUE ...]] [--add-fa-txt VALUE [VALUE ...]] [--add-sa-attr FILENAME] [--add-sa-npy VALUE [VALUE ...]] [--add-fa-npy VALUE [VALUE ...]] -o OUTPUT [--hdf5-compression TYPE]

DESCRIPTION

Extract (multi-sample) events from a dataset

An arbitrary number of input datasets is loaded from HDF5 storage. All loaded datasets are concatenated along the samples axis. Based on information about the onset and duration of a sequence of events, the corresponding samples are extracted from the input datasets and converted into event samples. A single event sample may consist of multiple input samples (i.e. a temporal window).

Events are defined by an onset sample ID and the number of consecutive samples that comprise an event. Alternatively, events can be defined by temporal onsets and durations, which will be translated into sample IDs using time stamp information in the input datasets.
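
For example (the filename and the 'time_coords' attribute name are hypothetical), time stamps stored in a sample attribute can be used to translate temporal onsets into sample IDs:
$ pymvpa2 mkevds --onsets 12.5 27.5 --duration 6 --time-attr time_coords -o evds.hdf5 -i bold.hdf5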

Analogous to the 'mkds' command, the event-related dataset can be extended with arbitrary feature and sample attributes (one value per event for the latter).

The finished event-related dataset is written to an HDF5 file.

OPTIONS

--version
show program's version and license information and exit
-h, --help, --help-np
show this help message and exit. --help-np forcefully disables the use of a pager for displaying the help.
-i DATASET [DATASET ...], --input DATASET [DATASET ...]
path(s) to one or more PyMVPA dataset files. All datasets will be merged into a single dataset (vstack'ed) in order of specification. In some cases this option may need to be specified more than once if multiple, but separate, input datasets are required.
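
For example (filenames are hypothetical), several per-run datasets could be concatenated prior to event extraction:
$ pymvpa2 mkevds -i run1.hdf5 run2.hdf5 run3.hdf5 --onsets 2 10 --duration 4 -o evds.hdf5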

Options for defining events (choose one):

--event-attrs ATTR [ATTR ...]
define events as unique combinations of values from a set of sample attributes. Going through all samples in the order in which they appear in the input dataset, event onsets are determined by changes in the combination of attribute values. The length of an event is determined by the number of identical consecutive value combinations.
--onsets [TIME [TIME ...]]
read a list of event onsets (float) from the command line (space-separated). If this option is given but no arguments are provided, onsets will be read from STDIN (one per line). If --time-attr is also given, onsets will be interpreted as time stamps; otherwise they are treated as integer sample IDs.
--csv-events FILENAME
read event information from a CSV table. A variety of dialects are supported. The CSV file must contain a header line with field names as its first row. The table must include an 'onset' column, and can optionally include an arbitrary number of additional columns (e.g. duration, target). All values are passed on to the event-related samples. If '-' is given as the value, the CSV table is read from STDIN.
--fsl-ev3 FILENAME [FILENAME ...]
read event information from a text file in FSL's EV3 format (one event per line, three columns: onset, duration, intensity). One or more filenames can be given.
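
For example (filenames are hypothetical), events could be defined in a CSV table that contains at least an 'onset' column, with a fixed duration given on the command line:
$ pymvpa2 mkevds --csv-events events.csv --duration 4 -o evds.hdf5 -i mydata.hdf5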

Options for modifying or converting events:

--time-attr ATTR
dataset attribute with time stamps for input samples. Onsets and durations of all events will be converted using this information. All values are assumed to be in the same unit.
--onset-column ATTR
name of the column in the CSV event table that indicates event onsets
--offset VALUE
fixed uniform event offset for all events. If no --time-attr option is given, this value indicates the number of input samples by which all event onsets shall be shifted. If --time-attr is given, it is treated as a temporal offset that needs to be given in the same unit as the time stamp attribute (see --time-attr).
--duration VALUE
fixed uniform duration for all events. If no --time-attr option is given, this value indicates the number of consecutive input samples following an onset that belong to an event. If --time-attr is given, it is treated as a temporal duration that needs to be given in the same unit as the time stamp attribute (see --time-attr).
--match-strategy {prev,next,closest}
strategy used to match time-based onsets to sample indices. 'prev' chooses the closest preceding sample, 'next' the closest following sample, and 'closest' the absolute closest sample. Default: 'prev'
--event-compression {mean,median,min,max}
specify whether and how events spanning multiple input samples shall be compressed. A number of methods can be chosen. Selecting, for example, 'mean' will yield the mean of all relevant input samples for an event. By default (when this option is not given) an event will comprise all of its input samples, concatenated.
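
For example (filenames and the 'time_coords' attribute name are hypothetical), time-based events spanning several samples could be averaged into single event samples:
$ pymvpa2 mkevds --onsets 0 12 24 --duration 6 --time-attr time_coords --match-strategy closest --event-compression mean -o evds.hdf5 -i mydata.hdf5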

Options for attributes from the command line:

--add-sa VALUE [VALUE ...]
compose a sample attribute from the command line input. The first value is the desired attribute name, the second value is a comma-separated list (appropriately quoted) of actual attribute values. An optional third value can be given to specify a data type. Additional information on defining dataset attributes on the command line is given in the section "Compose attributes on the command line".
--add-fa VALUE [VALUE ...]
compose a feature attribute from the command line input. The first value is the desired attribute name, the second value is a comma-separated list (appropriately quoted) of actual attribute values. An optional third value can be given to specify a data type. Additional information on defining dataset attributes on the command line is given in the section "Compose attributes on the command line".
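
For example (attribute values are purely illustrative), a 'targets' sample attribute with one value per event could be attached directly on the command line:
$ pymvpa2 mkevds --onsets 3 9 --duration 4 --add-sa targets 'face,house' -o evds.hdf5 -i mydata.hdf5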

Options for attributes from text files:

--add-sa-txt VALUE [VALUE ...]
load sample attribute from a text file. The first value is the desired attribute name, the second value is the filename the attribute will be loaded from. Additional values modifying the way the data is loaded are described in the section "Load data from text files".
--add-fa-txt VALUE [VALUE ...]
load feature attribute from a text file. The first value is the desired attribute name, the second value is the filename the attribute will be loaded from. Additional values modifying the way the data is loaded are described in the section "Load data from text files".
--add-sa-attr FILENAME
load sample attribute values from a legacy 'attributes file'. Column data is read as "literal". Only two-column files ('targets' + 'chunks') without headers are supported. This option allows for reading attributes files from early PyMVPA versions.

Options for attributes from stored Numpy arrays:

--add-sa-npy VALUE [VALUE ...]
load sample attribute from a Numpy .npy file. Compressed files (i.e. .npy.gz) are supported as well. The first value is the desired attribute name, the second value is the filename the data will be loaded from. Additional values modifying the way the data is loaded are described in the section "Load data from Numpy NPY files".
--add-fa-npy VALUE [VALUE ...]
load feature attribute from a Numpy .npy file. Compressed files (i.e. .npy.gz) are supported as well. The first value is the desired attribute name, the second value is the filename the data will be loaded from. Additional values modifying the way the data is loaded are described in the section "Load data from Numpy NPY files".
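
For example (filenames are hypothetical), a per-event sample attribute could be loaded from a text file and a per-feature attribute from a Numpy .npy file:
$ pymvpa2 mkevds --onsets 3 9 --duration 4 --add-sa-txt rt reactiontimes.txt --add-fa-npy coords coords.npy -o evds.hdf5 -i mydata.hdf5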

Output options:

-o OUTPUT, --output OUTPUT
output filename ('.hdf5' extension is added automatically if necessary). NOTE: The output format is suitable for data exchange between PyMVPA commands, but is not recommended for long-term storage or exchange as its specific content may vary depending on the actual software environment. For long-term storage consider conversion into other data formats (see 'dump' command).
--hdf5-compression TYPE
compression type for HDF5 storage. Available values depend on the specific HDF5 installation. Typical values are: 'gzip', 'lzf', 'szip', or integers from 1 to 9 indicating gzip compression levels.
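
For example (filenames are hypothetical), gzip compression could be enabled for the output file:
$ pymvpa2 mkevds --onsets 3 9 --duration 4 --hdf5-compression gzip -o evds.hdf5 -i mydata.hdf5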

EXAMPLES

Extract two events, each comprising four consecutive samples, from a dataset.
$ pymvpa2 mkevds --onsets 3 9 --duration 4 -o evds.hdf5 -i 'mydata*.hdf5'
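
Extract events defined by changes in the combination of two sample attributes and average the samples of each event (the attribute names 'targets' and 'chunks' are assumed to exist in the input dataset).
$ pymvpa2 mkevds --event-attrs targets chunks --event-compression mean -o evds.hdf5 -i 'mydata*.hdf5'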

AUTHOR

Written by Michael Hanke & Yaroslav Halchenko, and numerous other contributors.

COPYRIGHT

Copyright © 2006-2016 PyMVPA developers

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.