mfsmaster.cfg(5) - main configuration file for mfsmaster


The file mfsmaster.cfg contains the configuration of the LizardFS metadata server process.


Syntax is:

OPTION = VALUE


Lines starting with the # character are ignored.
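For example, a minimal fragment in this syntax (the value shown is illustrative only):

```
# comment lines like this one are ignored
PERSONALITY = master
```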


Configuration options:


Current personality of this instance of the metadata server. Valid values are master, shadow and ha-cluster-managed. If the installation is managed by an HA cluster, the only valid value is ha-cluster-managed; otherwise the only valid values are master and shadow, and exactly one metadata server in a LizardFS installation shall have the master personality.

PERSONALITY = master means that this instance of the metadata server acts as the main metadata server, governing all file system metadata modifications.

PERSONALITY = shadow means that this instance of the metadata server acts as a backup metadata server, ready for immediate deployment as the new master in case of the current master's failure.

Metadata server personality can be changed at any moment, as long as the change is from shadow to master; changing personality the other way around is forbidden.

PERSONALITY = ha-cluster-managed means that this instance is managed by an HA cluster; the server runs in shadow mode until it is remotely promoted to master.
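For illustration, a shadow instance could be configured as follows (the MASTER_HOST option name and the address are assumptions here, corresponding to the mandatory shadow connection options described at the end of this page):

```
PERSONALITY = shadow
# address of the metadata server currently acting as master (example value)
MASTER_HOST = 192.168.1.10
```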


where to store metadata files and lock file


user to run daemon as


group to run daemon as (optional - if empty then default user group will be used)


name of process to place in syslog messages (default is mfsmaster)


whether to perform mlockall() to avoid swapping out mfsmaster process (default is 0, i.e. no)


nice level to run daemon with (default is -19 if possible; note: process must be started as root to increase priority)


alternative name of mfsexports.cfg file


alternative name of mfstopology.cfg file


alternative name of mfsgoals.cfg file


number of metadata change log files (default is 50)


number of previous metadata files to be kept (default is 1)


when this option is set (equals 1), the master will try to recover metadata from the changelog when it is started after a crash; otherwise it will refuse to start and mfsmetarestore should be used to recover the metadata (default is 0)


initial delay in seconds before starting replications (default is 300)


replication delay in seconds after chunkserver disconnection (default is 3600)


IP address to listen on for metalogger connections (* means any)


port to listen on for metalogger connections (default is 9419)


how many seconds of change logs have to be preserved in memory (default is 600; note: logs are stored in blocks of 5k lines, so the real number of seconds may be a little bigger; zero disables extra log storage)


IP address to listen on for chunkserver connections (* means any)


port to listen on for chunkserver connections (default is 9420)


IP address to listen on for client (mount) connections (* means any)


port to listen on for client (mount) connections (default is 9421)


IP address to listen on for tapeserver connections (* means any)


Port to listen on for tapeserver connections (default is 9424)


Chunks loop shouldn't check more chunks per second than the given number (default is 100000)


Chunks loop will check all chunks within the specified time (default is 300) unless CHUNKS_LOOP_MAX_CPS forces slower execution.


Time in milliseconds between chunks loop execution (default is 1000).


Hard limit on CPU usage by chunks loop (percentage value, default is 60).


Soft maximum number of chunks to delete on one chunkserver (default is 10)


Hard maximum number of chunks to delete on one chunkserver (default is 25)


Maximum number of chunks to replicate to one chunkserver (default is 2)


Maximum number of chunks to replicate from one chunkserver (default is 10)


Percentage of endangered chunks that should be replicated with high priority. Example: when set to 0.2, up to 20% of chunks served in one turn will be taken from the endangered priority queue. When set to 1 (max), no other chunks will be processed as long as there are any endangered chunks in the queue (not advised). (default is 0, i.e. there is no overhead for prioritizing endangered chunks)


Maximum capacity of the endangered chunks queue. This value can limit the memory usage of the master server if there are lots of endangered chunks in the system. It is ignored if ENDANGERED_CHUNKS_PRIORITY is set to 0. (default is 1Mi, i.e. no more than 1Mi chunks will be kept in the queue)
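For example, to let endangered chunks occupy up to a fifth of each replication turn (0.2 is an illustrative value, not a recommendation):

```
# up to 20% of chunks served in one turn come from the endangered queue
ENDANGERED_CHUNKS_PRIORITY = 0.2
```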


The maximum difference in disk usage between chunkservers that does not trigger chunk rebalancing (default is 0.1, i.e. 10%).


When balancing disk usage, allow moving chunks between servers with different labels (default is 0, i.e. chunks will be moved only between servers with the same label).


Interval of removing files from trash in seconds (default is 300)


Interval of removing reserved files in seconds (default is 60)


Reject mfsmounts older than 1.6.0 (0 or 1, default is 0). Note that mfsexports access control is NOT used for those old clients.


Configuration of global I/O limits (default is no I/O limiting)


How often mountpoints will request bandwidth allocations under constant, predictable load (default is 0.1)


After a period of inactivity, a mountpoint may transfer without waiting the amount of data equivalent to its normal data flow over that many milliseconds (default is 250)


how often the metadata checksum shall be sent to backup servers (default is: every 50 metadata updates)


how fast metadata should be recalculated in the background (default is 100 objects per function call)


whether checksum verification should be disabled while applying the changelog


when this option is set to 1, inode access time is not updated on every access; when set to 0, it is updated (default is 0)


minimal time in seconds between metadata dumps caused by requests from shadow masters (default is 1800)


Time in seconds for which client session data (e.g. list of open files) should be sustained in the master server after connection with the client was lost. Values between 60 and 604800 (one week) are accepted. (default is 86400)

Options below are mandatory for all Shadow instances:


address of the host running the LizardFS metadata server that currently acts as master


port number on which the LizardFS metadata server currently running as master listens for connections from shadows and metaloggers (default is 9419)


delay in seconds before trying to reconnect to metadata server after disconnection (default is 1)


timeout (in seconds) for metadata server connections (default is 60)


Chunks in the master are tested in a loop. The speed (or frequency) is regulated by two options, CHUNKS_LOOP_MIN_TIME and CHUNKS_LOOP_MAX_CPS. The first defines the minimal time of the loop, the second the maximal number of chunk tests per second. Typically at the beginning, when the number of chunks is small, the loop time is constant, regulated by CHUNKS_LOOP_MIN_TIME, but as the number of chunks grows, the loop time can increase according to CHUNKS_LOOP_MAX_CPS.
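A rough worked example of this interaction, using illustrative values for the two options named above:

```
CHUNKS_LOOP_MIN_TIME = 300
CHUNKS_LOOP_MAX_CPS = 100000
# 300 s * 100000 chunks/s = 30,000,000 chunks fit in a minimal-length loop.
# Up to 30M chunks, each loop still takes the minimal 300 s; with, say,
# 60M chunks, a loop stretches to 60,000,000 / 100,000 = 600 s.
```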

Deletion limits are defined as a soft and a hard limit. When the number of chunks to delete increases from loop to loop, the current limit can be temporarily raised above the soft limit, but never above the hard limit.
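With the defaults of 10 (soft) and 25 (hard) given above, a growing deletion backlog can push the per-chunkserver limit from 10 up to, but never past, 25. The option names in this sketch are an assumption following the usual mfsmaster.cfg naming convention, since they are not spelled out above:

```
# assumed option names; values are the documented defaults
CHUNKS_SOFT_DEL_LIMIT = 10   # normal per-chunkserver deletion limit
CHUNKS_HARD_DEL_LIMIT = 25   # backlog may raise the limit, but never above this
```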


Copyright 2008-2009 Gemius SA, 2013-2015 Skytechnology sp. z o.o.

LizardFS is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3.

LizardFS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with LizardFS. If not, see <http://www.gnu.org/licenses/>.