ppmd [e|d] [switches] filename...|wildcard...
It is written mainly for embedding in user programs and is not intended for immediate end-user use. I was interested only in speed and performance improvements of the abstract PPM model [1-6], without tuning it to particular data types; the compressor therefore works well enough on texts, but not as well on nonhomogeneous files (executables) or on noisy analog data (sounds, pictures, etc.). The program is very memory-consuming; you can choose the balance between execution speed and memory economy, on one hand, and compression performance, on the other, with the help of the model order selection option (-o).
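As a rough illustration of what "model order" means here (a toy sketch, not PPMd's actual code): an order-k model predicts each byte from the k bytes preceding it, and the number of distinct contexts that must be stored, and hence the memory used, grows with k.

```python
# Toy sketch of an order-k context model (illustrative only; PPMd's real
# data structure is a suffix tree of contexts, not a flat dictionary).
# Higher order -> sharper predictions on text, but more distinct contexts
# to store -- the speed/memory vs. compression tradeoff behind -o.
from collections import defaultdict

def build_model(data: bytes, order: int) -> dict:
    """Map each length-`order` context to a frequency table of next bytes."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(data)):
        context = data[i - order:i]
        model[context][data[i]] += 1
    return model

text = b"abracadabra abracadabra"
m1 = build_model(text, 1)
m3 = build_model(text, 3)
# More distinct contexts at the higher order -> more memory:
print(len(m1), len(m3))  # prints: 6 10
```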
-d - Delete file[s] after processing, default: disabled.
-s - Silent mode.
-fName - Set output file name to Name.
-mN - Use N MB of memory - [1,256], default: 10. The PPMII algorithm can need a lot of memory, especially on large files and/or at high model orders. If ppmd needs more memory than you give it, compression gets worse; the exact effect depends on the -r option.
-oN - Set model order to N - [2,16], default: 4. A bigger model order almost always results in better compression, and certainly in more memory and CPU usage.
Methods of restoring model correctness when memory is exhausted (-rN):
-r0 - Restart model from scratch. This method is not optimal for any particular type of data source, but it works fast and is efficient on average, so it is the recommended method (default).
-r1 - Cut off model. This method is optimal for quasistationary sources where the period of stationarity is much larger than the period between cutoffs. As a rule it gives better results, but it is slower than the other methods and is unstable against fragmentation of the memory heap at high model orders and low memory.
-r2 - Freeze model. This method is optimal for stationary sources (show me such a source when you find one ;-)). It is fast and efficient for such sources.
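The three strategies can be sketched with a deliberately tiny toy model. This is illustrative only: PPMd's real model is a tree of contexts, and -r1's cutoff is far more sophisticated than evicting rare entries; the `learn` function, its order-1 counter, and the eviction rule below are all assumptions made up for the sketch.

```python
# Toy order-1 model with a hard budget on the number of stored contexts,
# showing what each -r policy does when the budget is hit mid-stream.
from collections import Counter

def learn(data: bytes, max_contexts: int, policy: str) -> Counter:
    """Count order-1 contexts, applying `policy` when the budget is hit."""
    counts = Counter()
    for prev, cur in zip(data, data[1:]):
        if prev not in counts and len(counts) >= max_contexts:
            if policy == "restart":      # -r0: throw the model away, relearn
                counts.clear()
            elif policy == "freeze":     # -r2: keep the model, stop learning
                break
            elif policy == "cutoff":     # -r1, very loosely: evict one
                del counts[min(counts, key=counts.get)]  # rarest context
        counts[prev] += 1
    return counts

data = b"aaabbbcccdddeee"
for policy in ("restart", "freeze", "cutoff"):
    model = learn(data, max_contexts=3, policy=policy)
    print(policy, bytes(sorted(model)).decode())
# prints:
#   restart de
#   freeze abc
#   cutoff cde
```

Note how "restart" remembers only what came after the last reset, while "freeze" remembers only what came before the budget was hit.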
To run the program the standard way, type:
ppmd e /tmp/myfile
Alternatively, you can run it with the maximum model order:
ppmd e -o16 /tmp/myfile
PPMd was written by Dmitry Shkarin <[email protected]> and Dmitry Subbotin.