queue_mover(1) - PgQ consumer that copies data from one queue to another.


queue_mover.py [switches] config.ini


queue_mover is a PgQ consumer that transports events from a source queue into a target queue. One use case is when events are produced in several databases and queue_mover is used to consolidate them into a single queue that can then be processed by the consumers that need to handle these events. For example, with partitioned databases it is convenient to move events from each partition into one central queue database and process them there. That way the configuration and dependencies of the partition databases stay simpler and more robust. Another use case is to move events from an OLTP database to a batch processing server.

Transactionality: events will be inserted as one transaction on the target side. That means only the batch_id needs to be tracked on the target side.
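To illustrate, here is a minimal Python sketch of one such work cycle, assuming psycopg2 and the standard pgq/pgq_ext SQL functions. The consumer name "mover", the queue names, and the connection strings are examples only; this shows the idea behind queue_mover, not its actual implementation.

    # Minimal sketch of one transactional move cycle. Assumes the
    # consumer "mover" was already registered on the source queue with
    # pgq.register_consumer(); all names here are examples.
    import psycopg2

    src = psycopg2.connect("dbname=sourcedb")
    dst = psycopg2.connect("dbname=targetdb")
    s_cur, d_cur = src.cursor(), dst.cursor()

    # Ask the source for the next unprocessed batch (None if no work).
    s_cur.execute("select pgq.next_batch(%s, %s)", ("eventlog", "mover"))
    batch_id = s_cur.fetchone()[0]

    if batch_id is not None:
        # Crash recovery: skip if the target already recorded this batch.
        d_cur.execute("select pgq_ext.is_batch_done(%s, %s)",
                      ("mover", batch_id))
        if not d_cur.fetchone()[0]:
            s_cur.execute("select ev_type, ev_data"
                          " from pgq.get_batch_events(%s)", (batch_id,))
            for ev_type, ev_data in s_cur.fetchall():
                d_cur.execute("select pgq.insert_event(%s, %s, %s)",
                              ("copy_of_eventlog", ev_type, ev_data))
            # Record completion in the same target-side transaction as
            # the inserts, so only batch_id needs tracking there.
            d_cur.execute("select pgq_ext.set_batch_done(%s, %s)",
                          ("mover", batch_id))
            dst.commit()
        # Only now release the batch on the source side.
        s_cur.execute("select pgq.finish_batch(%s)", (batch_id,))
        src.commit()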


Basic PgQ setup and usage can be summarized by the following steps:

1. PgQ must be installed in both the source and target databases. See the pgqadm man page for details.

2. The target database must also have the pgq_ext schema installed. It is used to keep the two databases in sync.

3. Create a queue_mover configuration file, say qmover_sourceq_to_targetdb.ini

4. Create the source and target queues:

$ pgqadm.py sourcedb_ticker.ini create <srcqueue>
$ pgqadm.py targetdb_ticker.ini create <dstqueue>

5. Launch queue_mover in daemon mode:

$ queue_mover.py -d qmover_sourceq_to_targetdb.ini

6. Start producing and consuming events (see the sketch below).
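To illustrate step 6, a hedged Python sketch that produces one event on the source side and, after queue_mover has copied the batch, reads it back from the target queue. The queue names follow the example config below; the consumer name "reader" is hypothetical.

    import psycopg2

    # Produce an event on the source side.
    src = psycopg2.connect("dbname=sourcedb")
    cur = src.cursor()
    cur.execute("select pgq.insert_event(%s, %s, %s)",
                ("eventlog", "user.created", "id=42"))
    src.commit()

    # Consume from the target side once queue_mover has moved the batch.
    dst = psycopg2.connect("dbname=targetdb")
    cur = dst.cursor()
    cur.execute("select pgq.register_consumer(%s, %s)",
                ("copy_of_eventlog", "reader"))
    # next_batch returns None until the ticker has produced a tick.
    cur.execute("select pgq.next_batch(%s, %s)",
                ("copy_of_eventlog", "reader"))
    batch_id = cur.fetchone()[0]
    if batch_id is not None:
        cur.execute("select ev_id, ev_type, ev_data"
                    " from pgq.get_batch_events(%s)", (batch_id,))
        for ev in cur.fetchall():
            print(ev)
        cur.execute("select pgq.finish_batch(%s)", (batch_id,))
    dst.commit()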


Common configuration parameters


job_name
    Name for the particular job the script does. The script will log
    under this name to logdb/logserver. The name is also used as the
    default PgQ consumer name. It should be unique.

pidfile
    Location for the pid file. If not given, the script is not allowed
    to daemonize.

logfile
    Location for the log file.

loop_delay
    For a continuously running process, how long to sleep after each
    work loop, in seconds. Default: 1.

connection_lifetime
    Close and reconnect older database connections.

log_count
    Number of log files to keep. Default: 3

log_size
    Max size for one log file. The file is rotated when the max size
    is reached. Default: 10485760 (10M)

use_skylog
    If set, search for [./skylog.ini, ~/.skylog.ini, /etc/skylog.ini].
    If found, the file is used as the config file for Python's logging
    module. This allows a fully customizable logging setup.
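For reference, a minimal sketch of what such a skylog.ini could look like, using the format accepted by Python's logging.config.fileConfig; the handler and formatter names here are arbitrary:

    [loggers]
    keys=root

    [handlers]
    keys=console

    [formatters]
    keys=plain

    [logger_root]
    level=INFO
    handlers=console

    [handler_console]
    class=StreamHandler
    args=(sys.stderr,)
    formatter=plain

    [formatter_plain]
    format=%(asctime)s %(name)s %(levelname)s %(message)s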

Common PgQ consumer parameters


pgq_queue_name
    Queue name to attach to. No default.

pgq_consumer_id
    Consumer ID to use when registering. Default: %(job_name)s

queue_mover parameters


src_db
    Source database.

dst_db
    Target database.

dst_queue_name
    Target queue name.

Example config file

[queue_mover]
job_name = eventlog_to_target_mover
src_db = dbname=sourcedb
dst_db = dbname=targetdb
pgq_queue_name = eventlog
dst_queue_name = copy_of_eventlog
pidfile = pid/%(job_name)s.pid
logfile = log/%(job_name)s.log


The following switches are common to all skytools.DBScript-based Python programs.

-h, --help

show help message and exit

-q, --quiet

make program silent

-v, --verbose

make program more verbose

-d, --daemon

make program go background

The following switches are used to control an already running process. The pidfile is read from the config file, then the signal is sent to the process id specified there.

-r, --reload

reload config (send SIGHUP)

-s, --stop

stop program safely (send SIGINT)

-k, --kill

kill program immediately (send SIGTERM)


Event ID is not kept on the target side. If it needs to be kept, the event_id sequence on the target side must be increased by hand to inform the ticker about the new events.
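A hedged sketch of such a manual sequence bump follows; it assumes the classic pgq catalog, where pgq.queue.queue_event_seq stores the per-queue sequence name, and the value 100000 stands in for the highest copied event id:

    import psycopg2

    dst = psycopg2.connect("dbname=targetdb")
    cur = dst.cursor()
    # Look up the target queue's event-id sequence name.
    cur.execute("select queue_event_seq from pgq.queue"
                " where queue_name = %s", ("copy_of_eventlog",))
    seq_name = cur.fetchone()[0]
    # Advance it past the highest copied event id so the ticker
    # notices the new events.
    cur.execute("select setval(%s, %s)", (seq_name, 100000))
    dst.commit()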