planetsplitter(1) split OSM XML data into routino database

SYNOPSIS

planetsplitter [--help] [--dir=dirname] [--prefix=name] [--sort-ram-size=size] [--sort-threads=number] [--tmpdir=dirname] [--tagging=filename] [--loggable] [--logtime] [--logmemory] [--errorlog[=name]] [--parse-only | --process-only] [--append] [--keep] [--changes] [--max-iterations=number] [--prune-none] [--prune-isolated=len] [--prune-short=len] [--prune-straight=len] [filename.osm ... | filename.osc ... | filename.pbf ... | filename.o5m ... | filename.o5c ... | filename.(osm|osc|o5m|o5c).bz2 ... | filename.(osm|osc|o5m|o5c).gz ... | filename.(osm|osc|o5m|o5c).xz ...]

DESCRIPTION

planetsplitter reads in OSM format XML files and splits them up to create the database that Routino uses for routing.

OPTIONS

--help
Prints usage information.
--dir=dirname
Sets the directory name in which to save the results. Defaults to the current directory.
--prefix=name
Sets the filename prefix for the files that are created. Defaults to no prefix.
--sort-ram-size=size
Specifies the amount of RAM (in MB) to use for sorting the data. If not specified then 64 MB will be used in slim mode or 256 MB otherwise.
--sort-threads=number
The number of threads to use for data sorting (the sorting memory is shared between the threads - too many threads and not enough memory will reduce the performance).
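Because the sorting memory is shared, the effective RAM per thread is the total divided by the thread count. A minimal illustration of this arithmetic (not planetsplitter's exact internal accounting):

```shell
# Illustration only: estimate the RAM each sorting thread gets when the
# --sort-ram-size total is shared between --sort-threads threads.
sort_ram_mb=256    # value passed as --sort-ram-size
sort_threads=4     # value passed as --sort-threads
per_thread_mb=$((sort_ram_mb / sort_threads))
echo "approx ${per_thread_mb} MB per sorting thread"   # approx 64 MB per sorting thread
```

Raising --sort-threads without also raising --sort-ram-size shrinks each thread's share, which is why too many threads with too little memory hurts performance.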
--tmpdir=dirname
Specifies the name of the directory to store the temporary disk files. If not specified then it defaults to either the value of the --dir option or the current directory.
--tagging=filename
Sets the filename containing the list of tagging rules in XML format for parsing the input files. If the file doesn't exist then dirname, prefix and "tagging.xml" will be combined and used; if that doesn't exist then the file /usr/share/routino/tagging.xml will be used.
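The lookup order described above can be sketched as a shell function. This is a hypothetical helper for illustration only; in particular, joining the prefix and "tagging.xml" with "-" is an assumption based on the output-file naming shown in the examples (e.g. data/gb-nodes.mem), not a documented guarantee.

```shell
# Hypothetical sketch of the tagging-file lookup order described above.
# Assumption: dirname, prefix and "tagging.xml" combine as dir/prefix-tagging.xml.
find_tagging() {
  tagging=$1 dir=$2 prefix=$3
  if [ -f "$tagging" ]; then
    echo "$tagging"                       # 1. the file named by --tagging
  elif [ -f "$dir/$prefix-tagging.xml" ]; then
    echo "$dir/$prefix-tagging.xml"       # 2. combined from --dir and --prefix
  else
    echo /usr/share/routino/tagging.xml   # 3. system-wide default
  fi
}
```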
--loggable
Print progress messages that are suitable for logging to a file; normally an incrementing counter is printed which is more suitable for real-time display than logging.
--logtime
Print the elapsed time for each processing step (minutes, seconds and milliseconds).
--logmemory
Print the maximum allocated and mapped memory for each processing step (MBytes).
--errorlog[=name]
Log OSM parsing and processing errors to error.log or the specified file name (the --dir and --prefix options are applied). If the --append option is used then the existing log file will be appended to, otherwise a new one will be created. If the --keep option is also used, a geographically searchable database of error logs is created for use in the visualiser.
--parse-only
Parse the input files and store the data in intermediate files but don't process the data into a routing database. When parsing multiple files, the --append option must be used for all except the first file.
--process-only
Don't read in any files but process the existing intermediate files created by using the --parse-only option.
--append
Parse the input file and append the result to the existing intermediate files; the appended file can be either an OSM file or an OSC change file.
--keep
Store a set of intermediate files after parsing the OSM files, sorting and removing duplicates; this allows appending an OSC file and re-processing later.
--changes
This option indicates that the data being processed contains one or more OSC (OSM changes) files; if more than one is used they must be applied in time sequence. This option implies --append when parsing data files and --keep when processing data.
--max-iterations=number
The maximum number of iterations to use when generating super-nodes and super-segments. Defaults to 5, which is normally enough.
--prune-none
Disable the prune options below; they can be re-enabled by adding them to the command line after this option.
--prune-isolated=length
Remove the access permissions for a transport type from small disconnected groups of segments and remove the segments if they end up with no access permission (defaults to removing groups under 500m).
--prune-short=length
Remove short segments (defaults to removing segments up to a maximum length of 5m).
--prune-straight=length
Remove nodes in almost straight highways (defaults to removing nodes up to 3m offset from a straight line).
filename.osm, filename.osc, filename.pbf, filename.o5m, filename.o5c
Specifies the filename(s) to read data from. Filenames ending in '.pbf' will be read as PBF, filenames ending in '.o5m' or '.o5c' will be read as O5M/O5C, and all others will be read as XML. Filenames ending in '.bz2' will be uncompressed with bzip2 (if bzip2 support is compiled in), filenames ending in '.gz' will be uncompressed with gzip (if gzip support is compiled in), and filenames ending in '.xz' will be uncompressed with xz (if xz support is compiled in).

Note: In version 2.5 of Routino the ability to read data from the standard input has been removed. This is because there is now the ability to read compressed files (bzip2, gzip, xz) and PBF files directly. Also using standard input the file type cannot be auto-detected from the filename.
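The filename-based auto-detection described above can be sketched as follows. This mirrors the documented rules only and is not taken from planetsplitter's source; it strips any compression suffix first, then inspects the remaining extension.

```shell
# Illustration of the documented filename auto-detection: strip a
# compression suffix (.bz2/.gz/.xz), then classify by the extension left.
detect_format() {
  name=$1
  case $name in
    *.bz2) name=${name%.bz2} ;;
    *.gz)  name=${name%.gz}  ;;
    *.xz)  name=${name%.xz}  ;;
  esac
  case $name in
    *.pbf)       echo PBF ;;
    *.o5m|*.o5c) echo O5M/O5C ;;
    *)           echo XML ;;    # .osm, .osc and anything else: XML
  esac
}
```

For example, detect_format great_britain.o5c.bz2 reports O5M/O5C, while great_britain.osm.gz reports XML.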

EXAMPLES

Example usage 1:


planetsplitter --dir=data --prefix=gb great_britain.osm

This will generate the output files data/gb-nodes.mem, data/gb-segments.mem and data/gb-ways.mem. Multiple filenames can be specified on the command line and they will all be read in, combined and processed together.

Example usage 2:


planetsplitter --dir=data --prefix=gb --parse-only          great_britain_part1.osm
planetsplitter --dir=data --prefix=gb --parse-only --append great_britain_part2.osm
planetsplitter --dir=data --prefix=gb --parse-only --append ...
planetsplitter --dir=data --prefix=gb --process-only

This will generate the same output files as the first example but parsing the input files is performed separately from the data processing. The first file read in must not use the --append option but the later ones must.
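The "--append for all except the first file" rule from example 2 is easy to script. A sketch, assuming planetsplitter is on the PATH; the loop only prints the commands here so they can be reviewed before running:

```shell
# Print the planetsplitter commands for a multi-part parse, adding
# --append to every file after the first, then the final processing step.
append=""
for f in great_britain_part1.osm great_britain_part2.osm great_britain_part3.osm; do
  echo planetsplitter --dir=data --prefix=gb --parse-only $append "$f"
  append="--append"
done
echo planetsplitter --dir=data --prefix=gb --process-only
```

Removing the leading "echo" on each line would execute the commands instead of printing them.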

Example usage 3:


planetsplitter --dir=data --prefix=gb --keep    great_britain.osm

planetsplitter --dir=data --prefix=gb --changes great_britain.osc

This will generate the same output files as the first example. The first command will process the complete file and keep some intermediate data for later. The second command will apply a set of changes to the stored intermediate data and keep the updated intermediate files for repeating this step later with more change data.

The parsing and processing can be split into multiple commands as in example 2, with the --keep option used with --process-only for the initial OSM file(s) and the --changes option used with --parse-only or --process-only for each OSC file.
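Putting example 3 together, a repeatable update run can be sketched as a small script. This assumes the change files carry date-stamped names so that lexical glob order matches time order (a requirement of --changes), and that planetsplitter is on the PATH; commands are printed rather than executed so the run can be reviewed first.

```shell
# Apply a directory of date-stamped OSC change files in time sequence,
# as --changes requires.  Shell globs expand in lexical order, so names
# like changes/2024-01-01.osc sort chronologically.  Remove the leading
# "echo" to execute the commands instead of printing them.
for osc in changes/*.osc; do
  [ -e "$osc" ] || continue     # no change files present: nothing to do
  echo planetsplitter --dir=data --prefix=gb --changes "$osc"
done
```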