DESCRIPTION
Crawl an online resource to create or update a dataset.
Examples:
$ datalad crawl # within a dataset having .datalad/crawl/crawl.cfg
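A further illustrative invocation (the configuration path is hypothetical):
$ datalad crawl code/crawl.cfg # using an explicitly named configuration file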
OPTIONS
file configuration file (or pipeline file, if --is-pipeline is
given) defining the crawling, or a directory of a dataset on
which to perform crawling using its standard crawling
specification (.datalad/crawl/crawl.cfg). Constraints: value
must be a string [Default: None]
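For example, to crawl a subdataset by pointing at its directory
(the path is illustrative; the subdataset is assumed to carry its
own .datalad/crawl/crawl.cfg):
$ datalad crawl subds/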
--version show the program's version and license information and
exit
-h, --help, --help-np
show this help message and exit. --help-np forcibly
disables the use of a pager when displaying the help
message
-l {critical,error,warning,info,debug,1,2,3,4,5,6,7,8,9}, --log-level {critical,error,warning,info,debug,1,2,3,4,5,6,7,8,9}
level of verbosity. Integers provide even more
debugging information
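For instance, to request debug-level output:
$ datalad crawl -l debug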
-p {condor}, --pbs-runner {condor}
execute the command by scheduling it via the available PBS
(portable batch system); settings will be consulted from
the config file
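For example, to schedule the crawl via Condor (assuming Condor and
the relevant config settings are in place):
$ datalad crawl -p condor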
-n, --dry-run
flag to perform a dry run: file manipulations (e.g., adding
to git/annex) are not invoked, and the commands are only
printed to stdout. [Default: False]
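For example, to preview the would-be commands without modifying
git/annex:
$ datalad crawl -n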
--is-pipeline flag if the provided file is a Python script which
defines a pipeline() function. [Default: False]
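For example, assuming a hypothetical script my_pipeline.py that
defines a pipeline() function:
$ datalad crawl --is-pipeline my_pipeline.py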
-t, --is-template
flag if the provided value is the name of a crawling
template to use. [Default: False]
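For example, assuming a crawling template named simple_with_archives
is available (the template name is illustrative):
$ datalad crawl -t simple_with_archives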
-C CHDIR, --chdir CHDIR
directory to chdir to for crawling. Constraints: value
must be a string [Default: None]
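For example, to crawl a dataset located elsewhere (the path is
illustrative):
$ datalad crawl -C /path/to/dataset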