# Large Text Compression Benchmark

Matt Mahoney
Last update: Mar. 9, 2020.

This competition ranks lossless data compression programs by the compressed size (including the size of the decompression program) of the first 10^9 bytes of the XML text dump of the English version of Wikipedia on Mar. 3, 2006. About the test data.

The goal of this benchmark is not to find the best overall compression program, but to encourage research in artificial intelligence and natural language processing (NLP). A fundamental problem in both NLP and text compression is modeling: the ability to distinguish between high probability strings like recognize speech and low probability strings like reckon eyes peach. Rationale.
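As a toy illustration of that modeling problem (the corpus, smoothing, and alphabet size below are illustrative choices, not anything used by the benchmark entries), an order-1 character model trained on text containing the first phrase assigns it a much shorter ideal code length than the second:

```python
import math
from collections import defaultdict

def train_bigrams(text):
    # count character bigrams: counts[a][b] = number of times b followed a
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def code_length(text, counts, alphabet_size=27):
    # ideal code length in bits under the order-1 model with add-one
    # smoothing; an arithmetic coder approaches -sum(log2 p) bits
    bits = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(counts[a].values())
        p = (counts[a][b] + 1) / (total + alphabet_size)
        bits += -math.log2(p)
    return bits

model = train_bigrams("recognize speech " * 3)
print(code_length("recognize speech", model))    # cheap: all bigrams seen
print(code_length("reckon eyes peach", model))   # expensive: many unseen bigrams
```

A better model assigns higher probability (hence fewer bits) to likely English, which is exactly what ties compression to language modeling.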

This is an open benchmark. Anyone may contribute results. Please read the rules first.

Open source compression improvements to this benchmark with certain hardware restrictions may be eligible for the Hutter Prize.

## Benchmark Results

Compressors are ranked by the compressed size of enwik9 (10^9 bytes) plus the size of a zip archive containing the decompresser. Options are selected for maximum compression at the cost of speed and memory. Other data in the table does not affect rankings. This benchmark is for informational purposes only. There is no prize money for a top ranking. Notes about the table:

• Program: The version believed to give the best compression. A | denotes a combination of 2 programs.

• Compression options: selected for what I believe gives the best compression.

• enwik8: compressed size of the first 10^8 bytes of enwik9. This data is used for the Hutter Prize, and is also ranked here but has no effect on this ranking.

• enwik9: compressed size of the first 10^9 bytes of enwiki-20060303-pages-articles.xml.

• decompresser size:

size of a zip archive containing the decompression program (source code or executable) and all associated files needed to run it (e.g. dictionaries). A letter following the size has the following meaning:

• x = executable size.

• s = source code size (if available and smaller).

• d = size of a separate decompression program (separate from compression). For self extracting archives (SFX), the size is 0 because the decompresser and compressed data are combined into one file.

For testing, if no zip file is supplied I create archives using InfoZIP 2.32 -9. (Prior to July 1, 2008 I used 7zip 4.32 -tzip -mx=9).

• Total size: total size of compressed enwik9 + decompresser size, ranked smallest to largest.

• Comp: compression time in nanoseconds per byte on the largest file tested (for enwik9, which is 10^9 bytes, ns/byte equals total seconds). Speed is approximate and has no effect on ranking. A ~ means “very approximate”. Not all tests are done on the same computer. Times reported are the smaller of process time (summed over processors if multi-threaded) or real time as measured with timer. If there is no note then the program was tested on a Compaq Presario 5440, 2.188 GHz, Athlon-64 3500+ in 32 bit Windows XP. An underlined time means that no better compressor is faster.

• Decomp: decompression time as above. If blank, decompression was not tested yet and ranking is pending verification that the output is identical. An underlined time means that no better compressor is faster.

• Mem: approximate memory used for compression in MB. Decompression uses the same or possibly less. There is some ambiguity whether a megabyte means 10^6 bytes or 2^20 bytes. The approximation is coarse enough that it doesn’t matter. I use peak memory as measured with Windows Task Manager during compression (so if you really want to know, 1 MB = 1,024,000 bytes :) Memory does not include swap or temporary files. An underlined value means that no better compressor uses less memory.

• Alg:

compression algorithm, referring to the method of parsing the input into symbols (strings, bytes, or bits) and estimating their probabilities (modeling) for choosing code lengths. Symbols may be arithmetic coded (fractional bit length for best compression), Huffman coded (bit aligned for speed), or byte aligned as a preprocessing step.

• Dict (Dictionary). Symbols are words, coded as 1 or 2 bytes, usually as a preprocessing step.

• LZ (Lempel Ziv). Symbols are strings.

• LZ77: repeated strings are coded by offset and length of previous occurrence.
• LZW (LZ Welch): repeats are coded as indexes into a dynamically built dictionary.
• ROLZ (Reduced Offset LZ): LZW with multiple small dictionaries selected by context.
• LZP (LZ predictive): ROLZ with a dictionary size of 1.
• on (Order-n, e.g. o0, o1, o2…): symbols are bytes, modeled by frequency distribution in context of last n bytes.

• PPM (Prediction by Partial Match): order-n, modeled in longest context matched, but dropping to lower orders for byte counts of 0.

• SR (Symbol Ranking): order-n, modeled by time since last seen.

• BWT (Burrows Wheeler Transform): bytes are sorted by context, then modeled by order-0 SR.

• ST (Sort Transform): BWT using stable sort with truncated string comparison.

• DMC (Dynamic Markov Coding): bits modeled by PPM.

• CM (Context Mixing): bits, modeled by combining predictions of independent models.

Some compressors combine multiple steps such as Dict+PPM or LZP+DMC. I indicate the last stage before coding.

• Notes: Brief notes. See program descriptions for details. Usually this means the result was reported by somebody else on a different computer.
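As a minimal sketch of the order-n idea in the list above (a toy model, not any entry in the table), the following adaptive order-2 byte model computes the ideal arithmetic-coded size of its own predictions; repetitive input costs far fewer bits than its raw size:

```python
import math
from collections import defaultdict

def ideal_code_length(data, order=2):
    # Adaptive order-n byte model with add-one smoothing over the 256-byte
    # alphabet. Returns the ideal arithmetic-coded size, -sum(log2 p), in bits.
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for i, sym in enumerate(data):
        ctx = data[max(0, i - order):i]     # last n bytes form the context
        total = sum(counts[ctx].values())
        p = (counts[ctx][sym] + 1) / (total + 256)
        bits += -math.log2(p)
        counts[ctx][sym] += 1               # update the model after coding
    return bits

data = b"the quick brown fox " * 50
print(round(ideal_code_length(data)), "bits, vs", 8 * len(data), "bits raw")
```

Real order-n, PPM, and CM compressors differ enormously in how they store counts and mix contexts, but all reduce to the same loop: predict a probability for the next symbol, charge -log2(p) bits, then update the model.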


### Fails on enwik9

Programs that properly decompress enwik8 and don’t use external dictionaries are still eligible for the Hutter Prize.

### Testing not yet completed

Pareto frontier: compressed size vs. compression time as of Aug. 18, 2008 from the main table (options for maximum compression).

Pareto frontier: compressed size vs. memory as of Aug. 18, 2008 (options for maximum compression).

### Notes about compressors

I only test the latest supported version of a program. I attempt to find the options that give the best compression, but will not generally do an exhaustive search. If an option advertises maximum compression or memory, I don’t try the alternatives. If you know of a better combination, please let me know. I will select the maximum memory setting that does not cause disk thrashing, usually about 1800 MB. If the compressor is not downloadable as a zip file then I will compress the source or executable (whichever archive is smaller) plus any other needed files (dictionaries) into a single zip archive using 7zip 4.32 -tzip -mx=9. If no executable is available I will attempt to compile in C or C++ (MinGW 3.4.2, Borland 5.5 or Digital Mars), Java 1.5.0, MASM, NASM, or gas.

\1. Reported by Guillermo Gabrielli, May 16, 2006. Timed on a Celeron D325, 2.53 GHz, Windows XP SP2, 256 MB RAM.
\2. Decompression size and time for pkzip 2.0.4. kzip only compresses.
\3. Reported by Ilia Muraviev (author of PX, TC, pimple), June 10-July 18, 2006. Timed on a P4 3.0 GHz, 1GB RAM, WinXP SP2.
\4. enwik9 reported by Johan de Bock, May 19, 2006. Timed on Intel Pentium-4 2.8 GHz 512KB L2-cache, 1024MB DDR-SDRAM.
\5. Compressed with paq8h (VC++ compile) and decompressed with paq-8h (Intel compile of same source code). Normally compression and decompression are the same speed.
\6. ocamyd 1.65.final and LTCB 1.0 reported by Mauro Vezzosi, May 30-June 20, 2006. Timed on a 1.91 GHz AMD Athlon XP 2600+, 512 MB, WinXP Pro 2002 SP2 using timer 3.01. ocamyd 1.66.final reported Feb. 3, 2007. Times are process times.
\7. Under development by Mauro Vezzosi, May 24, 2006.
\8. Reported by Denis Kyznetsov (author of qazar), June 2, 2006.
\9. Reported by sportman, May 24, 2006. Timed on an Intel Pentium D 830 dual core 3.0 GHz, 2 x 512MB DDR2-SDRAM PC4300 533 MHz memory timing 4-4-4-12 (833.000KB free), Windows XP Home SP2. CPU was at 52% so apparently only one of 2 cores was used. Decompression verified on enwik8 only (not timed, about 2.5 hours). WinRK compression options: Model size 800MB, Audio model order: 255, Bit-stream model order: 27, Use text dictionary: Enabled, Fast analyses: Disabled, Fast executable code compression: Disabled
\10. Reported by Malcolm Taylor (author of WinRK), May 24, 2006. Timed on an Athlon X2 4400+ with 2GB, running WinXP 64. Decompression not tested. decompresser size is based on SFX stub size reported by Artyom (A.A.Z.), Sept. 2, 2007, although it was not tested this way.
\11. Reported by sportman, May 25, 2006. CPU as in note 9.
\12. Reported by sportman, May 30, 2006. CPU as in 9 (50% utilized).
\13. xwrt 3.2 options are -2 -b255 -m250 -s -f64. ppmonstr J options are -o10 -m1650.
\14. Reported by Michael A Maniscalco, June 15, 2006.
\15. Reported by Jeremiah Gilbert on the Hutter group, Aug. 18, 2006. Tested under Linux on a dual Xeon 1.6 GHz(lv) (overclocked to 2.13 GHz) with 2 GB memory. Time is user+sys (real = 196500 ns/byte).
\16. Reported by Anthony Williams, Aug. 19-22, 2006. Timed on a 2.53 GHz Pentium 4 with 512 MB under WinXP Home SP2.
\17. Tested Aug. 20, 2006 under Ubuntu Linux 2.6.15 on a 2.2 GHz Athlon-64 with 2 GB memory. Time is approximate wall time due to disk thrashing. User+sys time is 153600 ns/byte compress, 148650 decompress.
\18. Reported by Dmitry Shkarin (author of durilca4linux), Aug. 22-23, 2006 for durilca4linux_1; and Oct. 16-18, 2006 for durilca4linux_2. 3 GB memory usage is RAM + swap. Tested on AMD Athlon X2 4400+, 2.22 GHz, 2 GB memory under SuSE Linux AMD64 v10.0. durilca4linux_3 reported Feb. 21, 2008 using 4 GB RAM + 1 GB swap. v2 reported Apr. 22, 2008. v3 reported May 22, 2008.
\19. enwik8 confirmed by sportman, Sept. 20, 2006. Compression time 61480 ns/byte timed on a 2 x dual core (only one core active) Intel Woodcrest 2GHz with 1333MHz fsb and 4GB 667MHz CL5 memory under SiSoftware Sandra Lite 2007.SP1 (10.105). Drystone ALU 37,014 MIPS, Whetstone iSSE3 25,393 MFLOPS, Integer x8 iSSE4 220,008 it/s, Floating-point x4 iSSE2 119,227 it/s.
\20. Reported by Giorgio Tani (author of PeaZip) on Nov. 10, 2006. Tested on a MacBook Pro, Intel T2500 Core Duo CPU (one core used), with 512 MB memory under WinXP SP2. Time is combined compression and decompression.
\21. enwik9 -8 reported by sportman, Dec. 12-13, 2006. Hardware as note 19. enwik9 decompression not verified. paq8hp7 -8 enwik8 compression was reported as 16,417,650 (4 bytes longer; the size depends on the length of the input filename, which was enwik8.txt rather than enwik8). I verified enwik8 -7 and -8 decompression.
\22. paq8hp8 -8 enwik9 reported by sportman, Jan. 18, 2007. paq8hp10 -8 enwik9 on Apr. 2, 2007. paq8hp11 -8 enwik9 on May 10, 2007. paq8hp12 -8 enwik8/9 on May 20, 2007. Hardware as in note 19. Decompression verified for enwik8 only.
\23. 7zip 4.46a options were -m0=PPMd:mem=1630m:o=10 -sfx7xCon.sfx
\24. paq8o8-intel (intel compile of paq8o8) -1, paq8o8z-jun7 (DOS port of paq8o8) -1 reported by Rugxulo on Jun 10, 2008. Timed on a AMD64x2 TK-53 Tyler 1.7 GHz laptop with Vista Home Premium SP1.
\25. paq8o8z -1 enwik8 (DJGPP compile) reported by Rugxulo on Jun 17, 2008. Tested on a 2.52 Ghz P4 Northwood, no HTT, WinXP Home SP2.
\26. Tested on a Gateway M-7301U laptop with 2.0 GHz dual core Pentium T3200 (1MB L2 cache), 3 GB RAM, Vista SP1, 32 bit. Run times are similar to my older computer.
\27. enwik9 size reported by Eugene Shelwien, Mar. 5, 2009. enwik8 size and all speeds are tested as in note 26.
\28. Reported by Eugene Shelwien on a Q6600, 3.3 GHz, WinXP SP3, ramdrive: bcm 0.06 on Mar. 15, 2009, bcm 0.08 on June 1, 2009.
\29. Reported by kaitz (KZ): paq8p3 on Apr. 19, 2009, v2 on Apr. 21, 2009, paq8pxd on Jan. 21, 2012, v2 on Feb. 11, 2012, v3 on Feb. 23, 2012, v4 on Apr. 23, 2012. 2012 tests on a Core2Duo T8300 2.4 GHz, 2 GB.
\30. Reported by Sami Runsas (author of bwmonstr), July 14, 2009. Tested on an Athlon XP 2200 (Win32).
\31. Reported by Dmitry Shkarin, July 21, 2009, Nov. 12, 2009. Tested on a 3.8 GHz Q9650 with 16 GB memory under Windows XP 64bit Pro SP2. Requires msvcr90.dll.
\32. Reported by Mike Russell, Sept. 11, 2009. Tested on an 2.93 GHz Intel Q6800 with 3.5 GB memory.
\33. Reported by Con Kolivas (author of lrzip) on Nov. 27, 2009 (lrzip 0.40), Nov. 30, 2009 (lrzip 0.42), Mar. 17, 2012 (lrzip 0.612). Tested on a 3 GHz quad core Q9650, 8 GB, 64 bit debian linux.
\34. Reported by sportman, Nov. 29, 2009 (durilca’kingsize), Nov. 30, 2009 (durilca’kingsize4), Apr. 8, 2010 (bsc 1.0.0). Test hardware: 2 x 2.4 GHz (overclocked at 2.53 GHz) quad core Xeon Nehalem, 24 GB DDR3 1066 MHz, 8 x 2TB RAID5, Windows 2008 Server R2 64bit
\35. Reported by zody on Dec. 12, 2009. Tested in Windows 7, x64, 3.6 GHz e8200, 4 GB 1066 MHz RAM.
\36. Reported by Ilia Muraviev on Dec. 16, 2009. Tested on a 2.40 GHz Core 2 Duo, DDR2-800 4GB RAM, Windows7 x64.
\37. Reported by Sami Runsas, Mar. 3, 2010. Tested under Win64 on a Q6600 at 3.0 GHz.
\38. Reported by Ilya Grebnov, Apr. 7, 2010. Tested on an Intel Core 2 Duo E8500, 8 GB memory, Windows 7.
\39. Reported by Ilya Grebnov, Apr. 8, 2010. Tested on an Intel Core 2 Quad Q9400, 8 GB memory, Windows 7. bsc 2.00 on May 3, 2010. bsc 2.2.0 on June 15, 2010.
\40. Reported by Sami Runsas, May 10, 2010. Tested on an overclocked Intel Core i7 860. nanozip 0.08a tested June 6, 2010. nanozip 0.09a on Nov. 5, 2011.
\41. lpaq9m reported by Alexander Rhatushnyak on June 9, 2010. Tested on an Intel Core i7 CPU 930 (8 core), 2.8 GHz, 2.99 GB RAM. paq8hp12any tested June 28, 2010.
\42. Reported by Michal Hajicek, June 4, 2010 on an AMD Phenom II 965, 64 bit Windows. WinRK, ppmonstr on June 14.
\43. Reported by Ilia Muraviev, June 26, 2010. Tested on a Core 2 Quad Q9300, 2.50 GHz, 4 GB DDR2, Windows 7.
\44. Timed on a Dell Latitude E6510 laptop Core I7 M620, 2.66 GHz, 4 GB, Windows 7 32-bit.
\45. Reported by Richard Geldreich (lzham author) on Aug. 30, 2010. Tested on a 2.6 GHz Core i7 (quad core + HT), 6 GB, Win7 x64.
\46. Reported by Stefan Gedo (ST author) on Oct. 14, 2010. Tested on Athlon II X4 635 2.9 GHz, 4 GB memory, Windows 7.
\47. Reported by David A. Scott on Dec. 15, 2010. Tested on a I3-370 with 6 GB DDR3 1033 MHz memory.
\48. Timed on a Dell Latitude E6510 laptop Core I7 M620, 2.66 GHz, 4 GB, Ubuntu Linux 64-bit.
\49. Tested by the author on a Q9450, 3.52 GHz = 440x8, ramdrive.
\50. Tested by the author on an Intel Core i7-2600, 3.4 GHz, Kingston 8 GB DDR3, WD VelociRaptor 10000 RPM 600 GB SATA3, Windows 7 Ultimate SP1.
\51. Tested by Bulat Ziganshin on i7-2600, 4.6 GHz with 1600 MHz RAM (8-8-8-21-1T) and NVIDIA GeForce 560Ti at 900/2000 MHz.
\52. Tested by Michael Maniscalco on an 8 core Intel Xeon E5620, 2.40 GHz, 12 GB memory running Windows 7 Enterprise SP1, 64 bit.
\53. Tested by the author on a Core i7-2600K @ 4.6GHz, 8GB DDR3 @ 1866MHz, 240GB Corsair Force GT SSD.
\54. Tested by Piotr Tarsa on a Core 2 Duo E8400, 8 GiB RAM, Ubuntu 11.10 64-bit, OpenJDK 7.
\55. Tested by David Catt on a 64 bit Windows 7 laptop, 2.33 GHz, 4 GB, 4 cores.
\56. Reported by the author on a Athlon II X4 635 2.9 GHz, 4GB, Windows 8 Enterprise.
\57. Reported by the author on a x86_64 Athlon 64 X2 5200+ with 8 GiB of RAM running GNU/Linux 2.6.38.6-libre.
\58. Reported by the author on a 4 GHz i7-930 from ramdrive.
\59. Reported by the author on a I7-2600, 4.6 GHz, 16 GB RAM, Ubuntu 13.04.
\60. Tested by Ilia Muravyov on an Intel Core i7-3770K, 4.8 GHz, 16 GB Corsair Vengeance LP 1800 MHz CL9, Corsair Force GS 240 GB SSD, Windows 7 SP1.
\61. Tested by Matt Mahoney on a dual Xeon E-2620, 2.0 GHz, 12+12 hyperthreads, 64 GB RAM (20 GB usable), Fedora Linux.
\62. Tested by Valéry Croizier on a 2.5 GHz Core i5-2520M, 4 GB memory, Windows 7 64 bit.
\63. Tested by Ilia Muravyov on an Intel i7-3770, 4.7 GHz, Corsair Vengeance LP 1600 MHz CL9 16 GB RAM, Samsung 840 Pro 512 GB SSD, Windows 7 SP1.
\64. Tested by Kennon Conrad on a 3.2 GHz AMD A8-5500.
\65. Tested by sportman on an Intel Core i7 4960X 3.6 GHz OC at 4.5 GHz - 6 core (12 threads) 22nm Ivy Bridge-E, Kingston 8 x 4GB (32GB) DDR3 2400 MHz 11-14-14 underclocked at 2000 MHz 10-11-11. Windows 8.1 Pro 64-bit, SoftPerfect RAM Disk 3.4.5 64-bit.
\66. Tested by Byron Knoll on a Intel Core i7-3770, 31.4 GB memory, Linux Mint 14.
\67. Tested by Kennon Conrad on a 4.0 GHz i7-4790K, 16 GB at 1866 MHz, 128 GB SSD, Windows 8.1.
\68. Tested by Ilia Muraviev on an Intel Core i7-3770K @ 4.8GHz, 8GB 2133 MHz CL11 DDR3, 512GB Samsung 840 Pro SSD, Windows 7 Ultimate SP1.
\69. Tested by Nania Francesco Antonio on an Intel Core i7 920, 2.67 GHz, 6 GB RAM.
\70. Tested by Richard Geldreich on a Core i7 Gulftown 3.3 Ghz, Win64.
\71. Tested by Christoph Diegelmann on a Core i7-4770K, 8 GB DDR3, Samsung 840Pro 128 GB, Fedora 21 64 bit, gcc 4.9.2.
\72. Tested by Skymmer on a i7-2770K, WinXP x64 SP2.
\73. Tested by Andreas M. Nilsson on a 1.7 GHz Intel Core i7, 8 GB 1600 MHz DDR3, Mac OS X 10.10.3 (14D136).
\74. Tested by Michael Crogan on a Core i7-3930K, 3.20 GHz, 6+HT, 64 MB, Linux64.
\75. Tested by Mauro Vezzosi on a Core i7-4710HQ 2.50-3.50 GHz, 8 GB DDR3, Windows 8.1 64 bit.
\76. Tested by Yann Collet on Core i7-3930K, 4.5 GHz, Linux 64, gcc 5.2.0-5.3.1.
\77. Tested by Darek on a Core i7 4900 MQ, 2.8 GHz overclocked to 3.7 GHz, 16 GB, Win7Pro 64.
\78. Tested by mpais on a Core i7 5820K 4.4 GHz, Windows 10.
\79. Tested by Sportman on 2 x Intel Xeon E5-2643 v3 6 cores (12 threads), 3.4 GHz, 3.7 GHz turbo, 20 MB L3 cache, 8 x 32GB DDR4 2133 MHz CAS 15, SoftPerfect RAM Disk 3.4.7, Windows Server 2012 R2 64-bit.
\80. Tested by kaitz on an Intel Celeron G1820 DDR3 8GB PC3-12800 (800 MHz).
\81. Tested by Darek on Core i7 4900MQ 2.8 GHz overclocked to 3.8 GHz, 32 GB, Win7Pro 64.
\82. Tested by Ilia Muraviev on an Intel Core i7-4790K @ 4.6GHz, 32GB @ 1866MHz DDR3 RAM, RAMDisk.
\83. Tested by Byron Knoll on an Intel Core i7-7700K, 32 GB DDR4, Ubuntu 16.04-18.04.
\84. Tested by Fabrice Bellard on 2 x Xeon E5-2640 v3 @ 2.6 GHz, 196 GB RAM, Linux.
\85. Tested by Georgi Marinov on a Windows 10 Laptop: Lenovo Ideapad 310; i5-7200u @2.5GHz; 8GB DDR4 @1066MHz (2133MHz) CL15 CR2T; L2 cache: 2x256KB; L3 cache: 3MB; SSD: Crucial MX500 500GB

I have not verified results submitted by others. Timing information, when available, may vary widely depending on the test machine used.

## About the Compressors

The numbers in the headings are the compression ratios on enwik9.

### .1159 cmix

cmix v1 is a free, open source (GPL) file compressor by Byron Knoll, Apr. 16, 2014. It is a context mixing compressor with dictionary preprocessing based on code from paq8hp12any and paq8l but increasing the number of context models and mixer layers. It takes no compression options.

cmix v2 was released May 29, 2014.

cmix v3 was released June 27, 2014.

cmix v4 was released July 22, 2014. It uses 28,976,428 KiB memory (29.7 GB).

cmix v5 was released Aug. 13, 2014. The decompressor size is a zip archive containing the source code, makefile, and a dictionary compressed with cmix from 465211 to 90065 bytes.

cmix v6 was released Sept. 3, 2014. The decompressor size includes the dictionary compressed with cmix from 465211 to 90207 bytes.

cmix v7 was released Feb. 4, 2015.

cmix v8 was released Nov. 10, 2015.

cmix v9 was released Apr. 8, 2016.

cmix v10 was released June 17, 2016.

cmix v11 was released July 3, 2016. It incorporates a modification originally developed by Eugene Shelwien in which PPMd is included as a model.

cmix v12 was released Nov. 7, 2016. It includes a LSTM model.

cmix v13 was released Apr. 24, 2017.

cmix v14 was released Nov. 22, 2017.

cmix v15 was released May 19, 2018.

cmix v16 was released Oct 6, 2018.

cmix v17 was released Mar. 24, 2019.

cmix v18 was released Aug. 2, 2019.

### .1165 phda9

phda9 1.0 (discussion) is the public version of a winning Hutter Prize submission dated Dec. 15, 2017, by Alexander Rhatushnyak. There are Windows and Linux executables, no source.

The original prize winning version is a 64 bit Linux decompressor (no source) and compressed enwik8 as a RAR archive, awarded Nov. 4, 2017, posted Aug. 12, 2019. Archive plus decompressor size is 15,284,944 bytes. It uses 1 GB memory and a 176 MB scratch file. There is a version that uses only RAM.

phda9 1.2 (discussion) was released Mar. 13, 2018.

phda9 1.3 was released Apr. 21, 2018. The decompressor size for enwik8 is different (557050 bytes) because the dictionary is loosely compressed in the decompressor instead of in the compressed file.

phda9 1.4 was released May 20, 2018. This is mainly a bug fix version.

phda9 1.5 was released Aug. 1, 2018. enwik8 uses a separate decompressor with a size of 557415 bytes.

phda9 1.6 was released Oct. 20, 2018. enwik8 uses a separate decompressor with a size of 564616 bytes.

phda9 1.7 was released Feb. 18, 2019. enwik8 uses a separate decompressor with a size of 565,352 bytes.

phda9 1.8 was released July 4, 2019. enwik8 uses a separate decompressor with a size of 558,298 bytes.

### .1194 nncp

nncp is a free, experimental file compressor by Fabrice Bellard, released May 8, 2019. It uses a neural network model with dictionary preprocessing described in the paper Lossless Data Compression with Neural Networks. Compression of enwik9 uses the options:

### .1263 paq8pxd_v48_bwt1

paq8pxd_v47 is one of the latest versions in the following PAQ series of open source (GPL) context mixing archivers.

p5, p6, and p12 (Matt Mahoney, May 13, 2000) use a neural network with 256K or 4M inputs, no hidden layer and a single output to predict the next bit of input, given hashes of various contexts to select active inputs. The output is arithmetic coded. p5 uses 1 MB memory and context orders 0 to 3. p6 uses 16 MB and orders 0-5. p12 uses 16 MB, orders 1-4 and word-level orders 0-1 as an optimization for text. The programs take no options. The algorithm is described in M. Mahoney, Fast Text Compression with Neural Networks, Proc. AAAI FLAIRS, Orlando, 2000 (C) 2000, AAAI.
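A drastically simplified sketch of that scheme (the feature hashing, table size, and learning rate here are illustrative, not the actual p5/p6/p12 parameters): a single-layer network over hashed context features predicts each bit, the ideal arithmetic-code cost is -log2 of the probability assigned, and the weights are updated online:

```python
import math

class BitPredictor:
    """Toy single-layer network bit predictor. Hashes of recent bit
    contexts select a few active inputs; their summed weights pass
    through a logistic squash to give p(next bit = 1)."""
    def __init__(self, n_inputs=1 << 16, lr=0.1):
        self.w = [0.0] * n_inputs
        self.lr = lr

    def features(self, history):
        # hashes of the last 1..8 bits select the active inputs
        h = tuple(history[-8:])
        return [hash((k, h[-k:])) % len(self.w) for k in range(1, len(h) + 1)]

    def predict_and_update(self, history, bit):
        active = self.features(history)
        x = sum(self.w[i] for i in active)
        p = 1.0 / (1.0 + math.exp(-x))          # probability next bit is 1
        cost = -math.log2(p if bit else 1 - p)  # ideal arithmetic-code cost
        for i in active:                        # online gradient step
            self.w[i] += self.lr * (bit - p)
        return cost

bits = [0, 1, 1, 0] * 250
model = BitPredictor()
history, total = [], 0.0
for b in bits:
    total += model.predict_and_update(history, b)
    history.append(b)
print(round(total, 1), "bits to code", len(bits), "input bits")
```

On this periodic input the network quickly learns the pattern, so the total cost falls well below one bit per input bit; on random input it would stay near one bit per bit.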

paq1 (Matt Mahoney, Jan. 6, 2001) replaces the neural network in p5, p6, p12 with a fixed weighted averaging of model outputs. Described in an unpublished report, M. Mahoney, The PAQ1 Data Compression Program, 2002.

paq6 (Matt Mahoney and Serge Osnach, Dec. 30, 2003) evolved as a series of improvements to paq1. It is described in M. Mahoney, Adaptive Weighing of Context Models for Lossless Data Compression, Florida Tech. Technical Report CS-2005-16, 2005. The most significant improvements are the replacement of fixed model weights with adaptive linear mixing (Matt Mahoney), SSE (secondary symbol estimation) postprocessing of the output probability, and modeling of sparse contexts (Serge Osnach). Other models were added for x86 executable code, and automatic detection of fixed length records in binary data. Intermediate versions can be found here.

paqar 4.5 (Alexander Rhatushnyak, Feb. 13, 2006) is the last of a long series of improvements to paq6 by Alexander Rhatushnyak (paqar: multimixer model, .exe preprocessor, other model improvements), Przemyslaw Skibinski (WRT text preprocessing), Berto Destasio (model tuning), Fabio Buffoni (speed optimizations), David A. Scott (arithmetic coder optimizations), Jason Schmidt (model improvements), and Johan de Bock (compiler optimizations). For text, the biggest improvement came from WRT (Word Reducing Transform), which replaces words with shorter codes from an external English dictionary; it was first added in PAsQDa 1.0 on Jan. 18, 2005. WRT is described in P. Skibiński, Sz. Grabowski, and S. Deorowicz, Revisiting dictionary-based compression, Software - Practice & Experience, 35 (15), pp. 1455-1476, December 2005. There were a great number of versions by many contributors, mostly in 2004 when the PAQ series moved to the top of most compression benchmarks and attracted interest. Prior to PAQ, the top ranked programs were generally closed source.
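The core of a word-replacing transform like WRT can be sketched in a few lines; this toy version (single-byte codes 0x80-0xFF, space-separated tokens only — the real WRT handles capitalization, punctuation, and multi-byte codes) substitutes dictionary words before the main compression stage:

```python
def dict_encode(text, dictionary):
    # Replace known words with 1-byte codes in the range 0x80-0xFF;
    # everything else passes through as plain UTF-8 bytes.
    codes = {w: bytes([0x80 + i]) for i, w in enumerate(dictionary)}
    out = bytearray()
    for token in text.split(" "):
        out += codes.get(token, token.encode())
        out += b" "
    return bytes(out[:-1])

def dict_decode(data, dictionary):
    # Inverse transform: map code bytes back to their dictionary words.
    words = {0x80 + i: w for i, w in enumerate(dictionary)}
    out = []
    for token in data.split(b" "):
        if len(token) == 1 and token[0] in words:
            out.append(words[token[0]])
        else:
            out.append(token.decode())
    return " ".join(out)

dic = ["the", "compression", "benchmark"]
enc = dict_encode("the large text compression benchmark", dic)
assert dict_decode(enc, dic) == "the large text compression benchmark"
print(len(enc), "bytes after transform, vs", len("the large text compression benchmark"))
```

The transform itself saves little by modern standards; its value is that the shortened, more uniform token stream is easier for the back-end model (PPM, CM, ...) to predict.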

paq8f (Matt Mahoney, Feb. 28, 2006) evolved from paq7 (Dec. 24, 2005) as a complete rewrite of paq6/paqar. The important improvements were replacing the adaptive linear mixing of models with a neural network (coded in MMX assembler), a more memory-efficient mapping of contexts to bit histories using a cache-aligned hash table, adaptive mapping of bit histories to probabilities, and models for bmp, tiff, and jpeg images. It models text using whole-word contexts and case folding, like all versions back to p12, but lacks WRT text preprocessing. It served as a baseline for the Hutter prize. Details are in the source code comments.

paq8g (Przemyslaw Skibinski, Mar. 3, 2006) adds back WRT text preprocessing.

paq8h (Alexander Rhatushnyak, Mar. 24, 2006) added additional contexts to the neural network mixer. It was top ranked on enwik9 (but not enwik8) when the Hutter prize was launched on Aug. 6, 2006. This is the 78th version since p5.

raq8g by Rudi Cilibrasi, released 0721Z Aug. 16, 2006, is a modification of paq8f. It adds a NestModel to model nesting of parentheses and brackets. The test below for -7 is based on a Windows compile, raq8g.exe. The test for -8 was under Linux. The unzipped Linux executable is 27,660 bytes.

paq8j by Bill Pettis, Nov. 13, 2006, is based on paq8f (no dictionary) with model improvements taken from paq8hp5. It is a general purpose compressor like paq8f, not specialized for text.

paq8ja.zip by Serge Osnach, Nov. 16, 2006, is an improvement of paq8j, using additional contexts based on character classifications.

paq8jb.zip by Serge Osnach, Nov. 22, 2006, adds contexts using the distance to an anchor byte (x00, space, newline, xff) combined with previous characters. The -8 test caused some minor disk thrashing at 2 GB memory under WinXP Home (82% CPU usage). Time reported is wall time.

paq8jc.zip by Serge Osnach, Nov. 28, 2006, improves the record model for better compression of some binary files, although it is slightly worse for text. Time for -8 is wall time at 72% CPU usage.

paq8jd by Bill Pettis, Dec. 30, 2006, improves on paq8j with additional SSE (APM) stages. enwik8 -8 caused some disk thrashing at 2 GB memory.

paq8k is by Bill Pettis, Feb. 13, 2007.

paq8l by Matt Mahoney, Mar. 8, 2007, is based on paq8jd. It adds a DMC model and minor improvements.

paq8fthis2 by Jan Ondrus, Aug. 12, 2007, is paq8f with an improved model for compressing JPEG images. It is otherwise archive compatible with paq8f for data without JPEG images (such as enwik8 and enwik9).

paq8n by Matt Mahoney, Aug. 18, 2007, combines paq8l with the JPEG model from paq8fthis2.

paq8o and paq8osse by Andreas Morphis, Aug. 22, 2007, are paq8n with an improved model for .bmp images. There are two executables that produce identical archives. paq8o.exe is for Pentium MMX or higher. paq8osse.exe is for newer processors that support SSE2 instructions like the Pentium 4. It is about 8% faster, but uses more memory. Both use the same C++ source but use different (but equivalent) assembler code to implement the neural network mixer. paq8osse.exe was compiled with Intel C++, which produces slightly faster executables than g++ used in earlier versions. The current version is paq8o ver. 2 (Aug. 24, 2007), which fixes the file name extension (was .paq8n) but does not change compression. The benchmark is based on the first version.

paq8o3 by KZ, Sept. 11, 2007, combines paq8o with an improved JPEG model from paq8fthis3 (Jan Ondrus, Sept. 8, 2007) and an improved model for grayscale PGM images from paq8i (Pavel Holoborodko, Aug. 18, 2006). Text compression is unchanged from paq8l, paq8m, paq8o, or paq8o2.

paq8o4 v1 by KZ, Sept. 15, 2007, includes a grayscale .bmp model (based on the grayscale PGM model). Text compression is unaffected. It was compiled with Intel C++. paq8o4 v2 by Matt Mahoney, Sept. 17, 2007, is a port to g++ which allows wildcards, directory traversal, and directory creation, but is 8% slower. It is archive compatible with v1.

paq8o6 by KZ, Sept. 28, 2007, is based on paq8o5 by KZ, Sept. 21, 2007 with the improved JPEG model from paq8fthis4 by Jan Ondrus, Sept. 27, 2007. paq8o5 is paq8o4 with an improved StateMap from lpaq1. The improved compression of enwik8 comes from this StateMap. Compression of enwik8 is unchanged from paq8o5 to paq8o6.

paq8o7 by KZ, Oct. 16, 2007, improves paq8o6 with improved JPEG compression and support for 4 and 8 bit BMP images. Text is not affected.

paq8o8 by KZ, Oct. 23, 2007, further improves the JPEG compression of paq8o7.

paq8o8-jun7 is a DOS port of paq8o8 by Rugxulo, June 7, 2008.

paq8o10t is by KZ, June 11, 2008. Discussion.

paq8p3 is by KZ, Apr. 19, 2009.

paq8p3 v2 is by KZ, Apr. 21, 2009.

paq8px_v60_turbo (source code and discussion) was by Jan Ondrus (with contributions from many others), June 20, 2009, and speed optimized by LovePimple on July 11, 2009. By default the turbo version runs in high priority under Windows, but was tested at normal priority. The v60 version was released after a long period of development beginning with v1 on Apr. 25, 2009. Development was aimed mostly at improving x86, image and wav compression. Decompression was not verified.

paq8px_v69 was released Apr. 26, 2010.

paq8pxd by kaitz, Jan. 21, 2012, modifies paq8px_v69 by adding dynamic dictionary preprocessing (based on XWRT), UTF-8 detection, and an alternating byte sparse model.

paq8pxd_v2 by kaitz (KZo) was released Feb. 11, 2012.

paq8pxd_v3 by kaitz (KZo) was released Feb. 23, 2012. Modified im8model, base64 in email model, and fixes false image detection in enwik9.

paq8pxd_v4 by kaitz was released Apr. 19, 2012. Adds 4 bit bmp model, base64 fixes, combines WRT source code and has other fixes.

paq8pxd_v5 by kaitz was released Apr. 18, 2013.

paq8pxd_v7 by kaitz was released Aug. 14, 2013.

paq8pxd_v8 by kaitz was a temporary release on June 16, 2014. It was still under development to fix bugs causing it to fail on JPEG and WAV input, but there were no errors for enwik8 or enwik9. To test, it was compiled from source under 64 bit Ubuntu using g++ 4.8.1 -O3.

paq8pxd_v10fix by kaitz was released June 21, 2014. It was compiled from source under 64 bit Ubuntu, g++ 4.8.1 -O3.

paq8pxd_v12 by kaitz was released July 28, 2014. It was compiled from source under 64 bit Ubuntu, g++ 4.8.1 -O3.

paq8pxd_v12-skbuild, Aug. 9, 2014, is a 64 bit port of paq8pxd_v12 by Skymmer with work by AlexDoro adding options -9 and -10, each of which doubles memory usage from the previous level.

paq8pxd_v13_x64 is the 64 bit compile by Skymmer of paq8pxd_v13fix3 by kaitz on Aug. 26, 2014. It supports levels up to 15 using 25955 MB memory.

paq8pxd_v15 was released Sept. 17, 2014. It has options -s1…-s15 and -f1…-f15 which mean slow or fast respectively. Higher levels use more memory. Faster methods use fewer models. Levels 9 and higher require a 64 bit compile. To test, the program was compiled with g++ 4.8.2 for 64 bit Ubuntu with option -O3.

paq8pxd_v12_biondivers1_x64 is a 64 bit build of v12 by Luca Biondi, Oct. 27, 2014.

paq8pxd_v18 by kaitz was released July 18, 2016. Options -{qfs} select quick, fast, slow, followed by a number selecting memory usage.

paq8px_v77 was released July 10, 2017.

paq8px_v32 and paq8pxd_v96 with DRT and split preprocessing of enwik9 were released Aug. 29, 2017.

paq8pxd_v47 was released Mar. 18, 2018.

paq8pxd_v48_bwt1 was released Aug. 9, 2018.

paq8pxd_v61 was released Feb. 23, 2019. Resplit package.

Options select memory usage as shown in the table. Early versions took no options. Most versions were not tested on enwik9 due to their slow speed.

.1277 durilca

durilca and durilca’light 0.5 by Dmitry Shkarin (Apr. 1, 2006) are closed source, experimental command line file compressors based on ppmd/ppmonstr with filters for text, exe, and data with fixed length records (wav, bmp, etc). durilca’light is a faster version with less compression. Unfortunately both crash on enwik9. Decompression is verified on enwik8.

The -m700 option selects 700 MB of memory. (It appears to use substantially more for enwik9 according to Windows task manager). -o12 selects PPM order 12 (optimal for enwik9 -t0). -t0 (default) turns off text modeling, which hurts compression but is necessary to compress enwik9 (although decompression still crashes). -t2(3) turns on text preprocessing (dictionary; thus the increased decompresser size). -t2 also supports 3 additive flags (4, 8, 16) which have no effect on this data, thus -t2(31) or -t2 (default is 31) give the same compression as -t2(3).

durilca 0.5(Hutter) was released 1457Z Aug. 16, 2006. It does not use external dictionaries. When run with 1 GB memory (-m700), -o13 is optimal. With 2 GB (-m1650), -o21 is optimal. The unzipped .exe file is 86,016 bytes.

durilca4linux_1 (0825Z Aug 23 2006) is a Linux version of durilca 0.5(Hutter) which successfully compresses enwik9 and decompresses with UnDur (23,375 bytes zipped, 42,065 bytes uncompressed). All versions of durilca require memory specified by -m plus memory to read the input file into memory. In Windows, this exceeds the 2 GB process limit regardless of available RAM and swap. Thus, enwik9 compresses only under Linux with 2 GB real memory and 1 GB additional swap. The -o12 option is optimal for enwik9 (tested under 64 bit SuSE 10.0 by the author), -o24 for enwik8 (verified by me under 64 bit Ubuntu 2.6.15).

durilca4linux_2 (Oct. 16, 2006) is a closed source Linux version specialized for this benchmark. It includes a warning that use on other files may cause data loss. It requires AMD64 Linux and 3 GB of memory (2 GB for enwik8). The decompresser files (EnWiki.dur and UnDur) are contained within a 241,322 byte zip file in the rar distribution. To compress:

To decompress:

The first step extracts a compressed dictionary. It is organized in a similar manner to paq8hp2-paq8hp5 in that syntactically related words and words with the same suffix are grouped together. Results are reported by the author under SuSE Linux 10.0. I verified enwik8 only (6480 ns/b to compress on a 2.2 GHz Athlon 64 with 2 GB memory under Ubuntu Linux). enwik9 caused disk thrashing.

durilca4linux_3 (dictionary version v1) was released Feb. 21, 2008. Like version 2, it requires extraction of EnWiki.dur before compressing or decompressing, and may not work with files other than enwik8 and enwik9. As tested, it requires 64-bit Linux, 4 GB RAM, and 5 GB RAM+swap.

undur3 v2 contains an improved dictionary (version v2), released Apr. 22, 2008, for DURILCA4Linux_3. The compression and decompression programs are the same. The decompression program UnDur (Linux executable) is included. To compress, download durilca4linux_3 and replace the dictionary (EnWiki.dur) with this one. The options are -m3600 (3600 MB memory), -o14 (order 14 PPM), -t2 (text model 2).

undur3 v3, released May 22, 2008, uses an improved dictionary but the same compressor and decompresser as v1 and v2. The dictionary contains 123,995 lowercase words separated by NUL bytes. Of these, 5579 words occur more than once (wasted space?). I tested option -m1500 under Ubuntu Linux with 2 GB memory. At -m1500, top reports 2157 MB virtual memory and 1894 MB real memory. -m1600 caused disk thrashing.
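The duplicate-word observation above is easy to check; a minimal sketch of how one might count repeats in the NUL-separated word list (the function name and bytes-in interface are mine, not part of any distributed tool):

```python
from collections import Counter

def dict_stats(raw):
    # The dictionary format is lowercase words separated by NUL bytes.
    # Return (total word entries, number of distinct words occurring
    # more than once).
    words = [w for w in raw.split(b'\0') if w]
    counts = Counter(words)
    repeated = sum(1 for c in counts.values() if c > 1)
    return len(words), repeated
```

Run against the v3 dictionary, this would report the 123,995 entries and 5579 repeated words stated above.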

durilca kingsize (July 21, 2009) runs under 64 bit Windows and requires 13 GB memory. It is designed to work only on this benchmark and not in general. The dictionary file EnWiki.fsd must be extracted first from EnWiki.dur before compression or decompression. Requires msvcr90.dll. enwik8 can be compressed with -m1200 (1.2 GB).

durilca4_decoder is a new dictionary for durilca’kingsize (above), Nov. 12, 2009. It is reported as “durilca’kingsize_4” below. Decompression time is reported to be 1411.88 sec with “durilca d” and 1796.98 sec with “UnDur”. enwik8 compresses with 1200 MB (-m1200) in 157.38 sec.

.1301 cmve

cmv 00.01.00 is a free, closed source, experimental file compressor for 32 bit Windows by Mauro Vezzosi, Sept. 6, 2015. It uses context mixing. Option “2,3,+” selects max compression (2), max memory (3), and a large set of models (+). A hex bitmap for this argument turns individual models on or off. Note 48 timings are for enwik8 only.

cmv 00.01.01 was released Jan. 10, 2016. It is compatible with 00.01.00 and does not change the compression ratio.

cmve 0.2.0 was released Nov. 28, 2017.

.1323 paq8hp12any

paq8hp12any was developed as a fork of the PAQ series of open source context mixing compressors by Alexander Rhatushnyak. It was forked from the paq8 series developed largely by Matt Mahoney, and uses a dictionary preprocessor (xml-wrt) originally developed by Przemyslaw Skibinski as a separate program and later integrated. All versions are optimized for the Hutter prize. Thus, they are tuned for enwik8. The 12 versions are described below in chronological order. They originally were located here (link broken) and can now be found here (as a zpaq archive) (as of Sept. 16, 2009). All programs are free, GPL open source, command line archivers. Most take a single option controlling memory usage.

Note: these programs are compressed with upack, which compresses better than upx. Some virus detectors give false alarms on all upack-compressed executables. The programs are not infected.

paq8hp1 by Alexander Rhatushnyak, 1945Z Aug. 21, 2006. It is a modification of paq8h using a custom dictionary tuned to enwik8 for the Hutter prize. Because the Hutter prize requires no external dictionaries, the dictionary is spliced into the .exe file during the build process. When run, it creates the dictionary as a temporary file. The program must be run from the current directory (not found via your PATH or invoked with an explicit path), or else it cannot find this file. The unzipped paq8hp1.exe is 206,764 bytes. Decompression was verified for enwik8 (60730 ns/b for -8, 60660 ns/b for -7). enwik9 is pending.

paq8hp2 (source code) by Alexander Rhatushnyak, 0233Z Aug. 28, 2006 is an improved version of paq8hp1 submitted for the Hutter prize. paq8hp2.exe size is 205,276 bytes. It differs from paq8hp1 mainly in that the 43K word dictionary for 2-3 byte codes is sorted alphabetically. The 80 most frequent words, coded as 1 byte before compression, are grouped by syntactic type (pronoun, preposition, etc).

paq8hp3 (source code) by Alexander Rhatushnyak, released Aug. 29, 2006 is an improved version of paq8hp2 submitted for the Hutter prize on Sept. 3, 2006. The 80 dictionary words coded with 1 byte and 2560 words coded with 2 bytes are organized into semantically related groups or by common suffixes. The 40,960 words with 3 byte codes are sorted from the last character in reverse alphabetical order. paq8hp3.exe is 178,468 bytes unzipped. enwik9 decompression is not yet verified. For enwik8, decompression is verified with time 60300 ns/b compression, 60220 ns/b decompression.

paq8hp4 (source code) by Alexander Rhatushnyak, released and submitted for the Hutter prize on Sept. 10, 2006, is an improved version of paq8hp3. The dictionary is further organized into semantically related groups among 3-byte codes. The unzipped size of paq8hp4.exe is 206,336 bytes.

paq8hp5 (source code) by Alexander Rhatushnyak, released Sept. 20, 2006, is an improved version of paq8hp4, submitted for the Hutter prize on Sept. 25, 2006. The unzipped size of paq8hp5.exe is 174,616 bytes (in spite of a slightly larger dictionary). The dictionary size is optimized for enwik8; a larger dictionary would improve compression of enwik9. Decompression is verified for enwik8 only (-8 at 74640 ns/b). A Linux port of paq8hp5 is by Лъчезар Илиев Георгиев (Luchezar Georgiev), Oct 26, 2006 (mirror).

paq8hp6 (source code) by Alexander Rhatushnyak, released Oct. 29, 2006, is an improved version of paq8hp5. It was submitted as a Hutter prize candidate on Nov. 6, 2006. Unzipped paq8hp6.exe size is 170,400 bytes. The -8 option was not tested on enwik9 due to disk thrashing on my 2 GB PC. Compression was about 25% finished after 9 hours.

paq8hp7a by Alexander Rhatushnyak, Dec. 7, 2006, was intended to supersede paq8hp6 as a Hutter prize entry, then was withdrawn on Dec. 10, 2006 with the release of paq8hp7. Unzipped executable size is 151,664 bytes. -8 for enwik9 (but not enwik8) caused disk thrashing on my computer (2 GB, WinXP).

paq8hp7 (source code) by Alexander Rhatushnyak, Dec. 10, 2006, as a Hutter prize entry. Unzipped paq8hp7.exe size is 152,556 bytes.

paq8hp8 (source code) by Alexander Rhatushnyak, Jan. 18, 2007, as a Hutter prize entry (replacing an incorrect version posted 2 days earlier). Unzipped size is 152,692 bytes. The dictionary is identical to paq8hp7.

paq8hp9 (mirror) (source code) by Alexander Rhatushnyak, Feb. 20, 2007, is a Hutter prize entry. Only the -7 option works. The unzipped size of paq8hp9.exe is 112,628 bytes.

paq8hp9any (Feb. 23, 2007) by Alexander Rhatushnyak is a paq8hp9 -7 compatible version with external dictionary where all options work. However the zipped program is larger and -8 was not tested due to disk thrashing, so results are unchanged.

paq8hp10 (Mar. 26, 2007) by Alexander Rhatushnyak was derived from paq8hp9 as a Hutter prize entry. The unzipped size is 103,224 bytes. Only the -7 option works.

paq8hp10any (source code), Mar. 31, 2007, by Alexander Rhatushnyak is archive compatible with paq8hp10 -7 but works with other memory options. When run, paq8hp10.exe and both dictionary files should be in the current directory. This program is not a Hutter prize entry.

paq8hp11 (mirror) by Alexander Rhatushnyak, Apr. 30, 2007, is a Hutter prize entry. paq8hp11.exe is 99,816 bytes. Like paq8hp10, it works only with the -7 option.

paq8hp11any (source code) by Alexander Rhatushnyak, May 2, 2007, is a paq8hp11 variant that accepts any memory option. It was optimized for speed rather than size. It includes two dictionary files which must be present in the current directory when run, unlike paq8hp11 where the dictionary is self extracted. -8 selects 1850 MB memory. -7 produces the same archive as paq8hp11. Run speeds for -8 enwik8 are 76770+76820 ns/B.

paq8hp12 (mirror) by Alexander Rhatushnyak, May 14, 2007, is a Hutter prize entry. paq8hp12.exe size is 99,696 bytes. It works only with the -7 option like paq8hp11.

paq8hp12any (source code) by Alexander Rhatushnyak, May 20, 2007, is a paq8hp12 variant that accepts any memory option (like paq8hp11any). The -7 option produces an archive identical to that of paq8hp12.

paq8hp12any was updated on Jan. 9, 2009 to fix a compiler issue and add a 64 bit Linux version. Compressed file format was not changed. It was not retested.

Options select memory usage as shown in the table.

paq8hp1 through paq8hp12 can be used as a preprocessor to other compressors by compressing with option -0. In the following tests on ppmonstr, options were tuned for the best possible compression of enwik8 with 2 GB memory (1.65 GB available under WinXP). The xml-wrt 2.0 options are -l0 -w -s -c -b255 -m100 -e2300 (level 0, turn off word containers, turn off space modeling, turn off containers, 255 MB buffer for dictionary, 100 MB buffer, 2300 word dictionary). The xml-wrt 3.0 options are -l0 -b255 -m255 -3 -s -e7000 (-3 = optimize for PPM).

xml-wrt prepends the dictionary to its output. To make the comparison fair, the compressed size of the dictionary must be added. This is done in two ways, first by compressing the preprocessed text and dictionary and adding the compressed sizes, and second by prepending the dictionary to the preprocessed text before compression. The first method compresses about 1-2 KB smaller.

The uncompressed size of each dictionary for paq8hp1 through paq8hp4 is 398,210 bytes. They contain identical words, but in different order. The first two dictionaries are identical. They compress smaller because they are sorted alphabetically. The dictionary for paq8hp5 is 411,681 bytes. It contains all of the words in the first 4 dictionaries plus 1280 new words (44,880 total).

The transform done by paq8hp1 through paq8hp5 is based on WRT by Przemyslaw Skibinski, which first appeared in PAsQDa and paqar, and later in paq8g and xml-wrt. The steps are as follows:

• The input is parsed into sequences of all uppercase letters or all lowercase letters, or one uppercase letter followed by lowercase letters, e.g. “THE”, “the”, or “The”.
• All uppercase words are prefixed by a special symbol (0E hex in paq8hp3, paq8hp4, paq8hp5). If a lowercase letter follows with no intervening characters (e.g. “THEre”), then a special symbol (0C hex) marks the end (e.g. 0E “the” 0C “re”).
• Capitalized words are prefixed with 7F hex (paq8hp3) or 40 hex (paq8hp4, paq8hp5) (e.g. “The” -> 40 “the”).
• All letters are converted to lower case.
• Words are looked up in the dictionary. The first 80 words in the dictionary are coded with 1 byte: 80, 81, … CF (hex).
• The next 2560 words (paq8hp1-4) or 3840 words (paq8hp5) are coded with 2 bytes: D080, D081, … EFCF (paq8hp1-4), or D080, … FFCF (paq8hp5).
• The last 40960 words are coded with 3 bytes: F0D080, F0D081, … FFEFCF.
• If a word does not match, then the longest matching prefix with length at least 6 is coded and the rest of the word is spelled.
• If there is no matching prefix, then the longest matching suffix with length at least 6 is coded after spelling the preceding letters.
• If no matching word, prefix, or suffix is found, the word is spelled. Capitalization coding occurs regardless.
• Any input bytes with special meaning are escaped by prefixing with 06: 06, 0C, 0E, 40 or 7F, 80-FF.

WRT has additional capabilities depending on input, such as skipping encoding if little or no text is detected. The dictionary format is one word per line (linefeed only) with a 13 line header.

.1355 emma

emma v0.1.3 is a free, closed source file compressor for 32 bit Windows by mpais, Mar. 8, 2016. It uses context mixing. It has a GUI-only interface to select compression options. For testing, all settings were for maximum compression as follows: Memory usage 512 Mb, maximum order 9, ring buffer 32 Mb, probability refinement level 3, mixing complexity insane, adaptive learning rate on, fast mode on long matches off, ludicrous complexity mode on, match model on, 32 Mb, high complexity; text model on, 128 Mb, high; sparse model on, 16 Mb, high; sparse model on, 16 Mb, high; indirect model on, 16 Mb, high; x86/64 model on, 64 Mb, insane; image models on, 80 Mb, high; audio models on, 32 Mb, high; record model on, 16 Mb, high; distance model on, 8 Mb; JPEG model on, 40 Mb, high; GIF model on, 32 Mb, high; executable code (x86/64) transform on; process conditional jumps on; colorspace (RGB) on; delta coding on; dictionaries: English on, Spanish off, Italian off, French off, Portuguese off.

emma v0.1.4 was released Mar. 13, 2016. For testing, the text model was increased to 256 MB. A DMC model (8 MB) was added. The non-text related models were turned off: x86, image, audio, JPEG, GIF. All transforms (x86, RGB, delta) were turned off.

emma 0.1.6 (discussion) was released Mar. 27, 2016. It was tested by splitting enwik9 into parts using hsplit to move the highly compressible middle part to the end. The reordered file was then processed with drt dictionary processing (see lpaq9m) instead of emma’s built-in dictionary, and then compressed with emma using maximum compression and memory options (as below) except that dictionary processing was turned off. The decompresser size includes drt.exe, lpqdict0.dic, hsplit.exe and a BAT file to restore the original order, all compressed with emma, then those files plus emma.exe (without dictionaries) compressed into a zip archive. Specifically, enwik9 was prepared:

before compression with emma, then restored after decompression:

The command hsplit input output N means produce output.1, output.2, etc. each of size N bytes.
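The splitting behavior just described can be sketched in a few lines (this Python function is mine, not the actual hsplit tool):

```python
def split_bytes(data, n):
    # parts of n bytes each; the last part may be shorter
    return [data[i:i + n] for i in range(0, len(data), n)]

def hsplit(infile, outfile, n):
    # hsplit input output N: write output.1, output.2, ...
    # each holding N bytes of the input
    with open(infile, 'rb') as f:
        parts = split_bytes(f.read(), n)
    for k, part in enumerate(parts, start=1):
        with open(f"{outfile}.{k}", 'wb') as out:
            out.write(part)
```

Concatenating the parts in a different order (as the BAT file does in reverse) then restores or reorders the original file.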

emma 0.1.12 was released July 10, 2016. There are 32 and 64 bit versions. The 64 bit version can use more memory. Settings were as follows:

emma 0.1.22 was released Feb. 12, 2017. Settings: all settings = MAX, except: image and audio models = off, use fast mode on long matches = off, xml=on, x86model=off, x86 exe code = off, delta coding = off, dictionary = off, ppmd memory = 1024, ppmd order = 14

emma 1.23 was released Aug. 29, 2017. It uses ppmd_mod v3a by Shelwien and is preprocessed with DRT. EMMA 1.23 settings: all settings = MAX, except: image and audio models = off, use fast mode on long matches = off, xml=on, x86model=off, x86 exe code = off, delta coding = off, dictionary = off, ppmd memory = 1024, ppmd order = 14

.1422 zpaq

zpaq 1.03 is a free, open source command line archiver by Matt Mahoney, Sept. 8, 2009. zpaq implements the proposed ZPAQ standard format for highly compressed data. The goal of the standard is to allow the development of new compression algorithms without breaking compatibility with older decompressers. ZPAQ is described by the level 1 specification and a reference decoder. The specification does not describe the encoding algorithm. It only requires that compressed files be readable by the reference decoder, which was first released with the standard on Mar. 12, 2009 (v1.00). The release followed a development period with 9 experimental and incompatible versions (level 0, v0.01 through v0.09) released beginning Feb. 15, 2009. All level 1 versions from v1.00 onward are forward and backward compatible with each other. Higher levels may be introduced in the future with only a forward compatibility requirement: higher level decompressers must read archives produced by lower level compressors, back to level 1.

A ZPAQ archive is organized into independently compressed blocks. Each block is divided into one or more segments which must be decompressed in sequence. Each segment represents a file or a part of a file. The standard supports both archivers and single file compressors. In the case of a compressor, no filenames are stored in the segment headers, and all the blocks and segments are concatenated to a single output file specified by the user.

ZPAQ uses a streaming format that can be read or written in a single pass. The arithmetic coded data is designed so that the end of a segment can be found by scanning quickly without decoding. There is no central directory information to update when blocks are added, removed, or reordered.

The ZPAQ standard requires that the decompression algorithm be described in the block headers. The header describes a collection of bitwise predictive models based loosely on PAQ components, a program to compute the bytewise contexts for each model, and a second program to perform arbitrary postprocessing on the output data. The two programs are written in an interpreted bytecode language called ZPAQL.

A ZPAQ model specifies a list of 1 to 255 components. Each component outputs a prediction or probability that the next bit will be a 1. Each component may receive as input a computed 32-bit context and the output predictions of earlier components on the list. The last component’s prediction is fed to an arithmetic coder to encode or decode the next bit. The components are as follows:

• CONST - specifies a fixed, constant prediction.
• CM - context model. The context is mapped to a prediction by a table with a user specified size. Each table entry also has a count. The table is updated by adjusting the prediction to reduce the prediction error in proportion to 1/count. The count is incremented up to a user specified limit in the range 4 to 1020.
• ICM - indirect context model. The context is mapped to a bit history (an 8 bit state) by a hash table of user specified size. The history is mapped to a prediction by a CM with a fixed, high count limit. The history represents a count of recent 0 and 1 bits and also indicates whether the last bit was a 0 or 1.
• MATCH - has an output buffer and pointer table, both of user specified size. The context is mapped to a pointer into the buffer where the same context was last observed. The corresponding bit after the last match is predicted in proportion to the length of the match.
• AVG - Two predictions are combined by weighted averaging. The user specifies the weight. Weighted mixing is always in the logistic or “stretched” domain: stretch(p) = log(p/(1-p)).
• MIX2 - Two stretched predictions are combined by weighted averaging from a table of weights of a user specified size and selected by a context. After prediction, the selected weight is updated to favor the more accurate input prediction. The user specifies the adaptation rate.
• MIX - Like a MIX2 but over a user specified array of earlier predictions and one weight per input per context.
• SSE - secondary symbol estimation. A context and a stretched input prediction select an output prediction from two adjacent entries in a 2-D table by interpolation. The table is updated to reduce the prediction error of the nearer of the two entries as with a CM. The user specifies the table size in the context dimension (the probability dimension is fixed at 64), and the initial and maximum counts to determine adaptation rate.
• ISSE - indirect SSE. Receives a context and an earlier prediction. The context is mapped to a bit history as with an ICM. The history is mapped to the context of a MIX2 with one prediction from input and the other CONST. It has the effect of adjusting the input prediction based on the bit history of the current context.
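The logistic-domain mixing used by AVG, MIX2, and MIX can be sketched as follows (a floating-point illustration; ZPAQ itself uses fixed-point arithmetic, and the learning rate here is arbitrary):

```python
import math

def stretch(p):
    # map a probability in (0,1) to the logistic domain
    return math.log(p / (1 - p))

def squash(x):
    # inverse of stretch: map a logistic value back into (0,1)
    return 1 / (1 + math.exp(-x))

def mix_predict(weights, probs):
    # MIX: weighted average of stretched inputs, squashed to a probability
    return squash(sum(w * stretch(p) for w, p in zip(weights, probs)))

def mix_update(weights, probs, bit, rate=0.01):
    # after coding each bit, move the weights to reduce prediction error
    err = bit - mix_predict(weights, probs)
    return [w + rate * err * stretch(p) for w, p in zip(weights, probs)]
```

Mixing in the stretched domain gives confident inputs (probabilities near 0 or 1) proportionally more influence than averaging raw probabilities would.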

There are two ZPAQL virtual machines, one (HCOMP) to compute contexts, and one (PCOMP) to postprocess the decoded data. Each program is called once per decoded byte with that byte as input. A ZPAQL machine has the following state:

• An array H of 32 bit unsigned values of user specified size. In HCOMP, the elements at the beginning of the array are each assigned to a component to hold its context.
• An array M of 8 bit unsigned values of user specified size.
• 32 bit registers A, B, C, and D. A is the accumulator, the destination of most arithmetic and logical operations. It also contains the input byte when the program is executed. B and C can point into M. D can point into H.
• 256 registers, R0 through R255, holding 32 bit values.
• A flag register F holding the result of the last comparison (true or false).
• A 16 bit program counter.

Most instructions are either 1 byte or 2 bytes with an 8 bit operand (0..255). There is one 3 byte instruction (16 bit jump). The possible instructions are assignment, swap, add, subtract, multiply, divide, mod, and, or, xor, not-and, left shift, right shift, less than, equals, greater than, increment, decrement, complement, jump, conditional jump, hash, output, and halt. The hash instruction is convenient for updating a context hash with an input byte by the formula hash := (hash + byte + 512) * 773.
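The hash update can be written out directly, with the 32-bit wraparound made explicit (the helper names here are mine):

```python
def zpaql_hash(h, b):
    # the ZPAQL HASH instruction: hash := (hash + byte + 512) * 773,
    # computed modulo 2^32
    return ((h + b + 512) * 773) & 0xFFFFFFFF

def context_hash(context):
    # rolling hash of a short byte context, one HASH update per byte
    h = 0
    for b in context:
        h = zpaql_hash(h, b)
    return h
```

An HCOMP program typically stores such hashes of the last 1, 2, 3, ... bytes into the H array, one per component.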

zpaq 1.03 takes as input a configuration file which describes the arrangement of components, their parameters, and the ZPAQL program HCOMP written one token per byte in a C-like syntax (e.g. “A=B” to assign B to A). PCOMP is not specified because in general the preprocessing step by the compressor is different (and usually more complex) than the postprocessing step. Instead, zpaq 1.03 provides the option of two built-in preprocessors, LZP and E8E9. If selected, the preprocessing is done in C++ by the compressor, and the compressor generates ZPAQL code to perform the inverse transform and insert it into the archive block header. (PCOMP is actually appended to the beginning of the input data and compressed with it. HCOMP is not compressed).

E8E9 is used to improve compression of 32 bit x86 executable files. It replaces the 32 bit relative address after a CALL or JMP (0xE8 or 0xE9) x86 instruction by adding the offset from the beginning of the file. This improves compression because often there are several calls to the same target. PCOMP performs the inverse transform in ZPAQL by subtracting the offset.
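A sketch of the idea follows. The exact offset convention varies between implementations; this version adds the offset of the end of the 4-byte address field, and is only an illustration of the technique, not zpaq's code:

```python
def _transform(data, encode):
    # Scan for E8/E9 opcode bytes; the 4 bytes that follow are treated
    # as a 32-bit little-endian relative address. Encoding adds the
    # offset (making repeated calls to one target produce identical
    # bytes); decoding subtracts it back.
    out = bytearray(data)
    i = 0
    while i + 5 <= len(out):
        if out[i] in (0xE8, 0xE9):
            val = int.from_bytes(out[i+1:i+5], 'little')
            off = i + 5
            val = (val + off if encode else val - off) & 0xFFFFFFFF
            out[i+1:i+5] = val.to_bytes(4, 'little')
            i += 5
        else:
            i += 1
    return bytes(out)

def e8e9_encode(data):
    return _transform(data, True)

def e8e9_decode(data):
    return _transform(data, False)
```

Because the opcode bytes themselves are never changed, encoder and decoder scan the same positions, so the transform is exactly invertible even on data that only looks like x86 code.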

LZP encodes long string matches as an escape byte and length byte. The decompresser maintains a rolling context hash which indexes a pointer table (the H array) into the output buffer (the M array) pointing to the previous context match. If an escape is present, then the indicated number of bytes are copied from the previous context match. In zpaq 1.03, the user can specify the sizes of M and H, the hash multiplier (effectively choosing the context length), the value to use as the escape byte (preferably occurring rarely in the input), and minimum match length. Escape bytes in the input are encoded as an escaped 0 length.
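A minimal sketch of the LZP scheme follows. The escape byte, minimum match length, table size, and hash function here are hypothetical stand-ins for zpaq's user-specified parameters; the point is only the structure: a context hash indexes the previous match position, and encoder and decoder keep identical table state:

```python
ESC = 0x05      # hypothetical escape byte, assumed rare in the input
MINLEN = 4      # hypothetical minimum match length
HBITS = 16      # pointer table of 2^16 entries

def _next_hash(h, b):
    # hypothetical rolling hash; with the 16-bit mask, bytes more than
    # about 3 positions back drop out, giving roughly an order-3 context
    return ((h * 96) + b + 64) & ((1 << HBITS) - 1)

def lzp_encode(data):
    table = [-1] * (1 << HBITS)   # context hash -> last position seen
    out = bytearray()
    h, i = 0, 0
    while i < len(data):
        p = table[h]              # previous position with this context
        length = 0
        if p >= 0:
            while (length < 255 and i + length < len(data)
                   and data[p + length] == data[i + length]):
                length += 1
        if length >= MINLEN:
            out += bytes([ESC, length])        # match: escape + length
        else:
            if data[i] == ESC:
                out += bytes([ESC, 0])         # escaped literal ESC byte
            else:
                out.append(data[i])            # plain literal
            length = 1
        for _ in range(length):   # keep hash and table in sync per byte
            table[h] = i
            h = _next_hash(h, data[i])
            i += 1
    return bytes(out)

def lzp_decode(code):
    table = [-1] * (1 << HBITS)
    out = bytearray()
    h = 0

    def emit(b):
        nonlocal h                # mirror the encoder's per-byte updates
        table[h] = len(out)
        out.append(b)
        h = _next_hash(h, b)

    j = 0
    while j < len(code):
        if code[j] == ESC:
            length = code[j + 1]
            j += 2
            if length == 0:
                emit(ESC)
            else:
                p = table[h]      # same table state as the encoder had
                for k in range(length):
                    emit(out[p + k])
        else:
            emit(code[j])
            j += 1
    return bytes(out)
```

Since the encoder verifies every match byte-for-byte before emitting it, hash collisions only reduce the match rate; they never corrupt the output.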

zpaq 1.03 is distributed with three configuration files, min.cfg (for speed), mid.cfg (the default), and max.cfg (for good compression). However, the user can also write their own config files.

o0.cfg, o1.cfg, and o2.cfg are order 0, 1, and 2 models with a single CM and direct context lookup with no hashing. o0 is equivalent to fpaq0. In each of the models the asymptotic learning rate was tuned for maximum compression. Other values are given as comments in the sources. The CM uses 2KB, 512KB and 128MB respectively.

min.cfg uses LZP preprocessing with a minimum match length of 3 and an order 4 context hash, followed by compression by single CM with an order 3 context and 512K entries. The LZP has a 1 MB output buffer and 256K index. It uses 4 MB memory.

mid.cfg (the default) does no preprocessing. It has an order 0 ICM, a chain of ISSE with context orders 1 through 5, each taking the previous ISSE as input, a MATCH with an order 7 context, and a final MIX with an order 1 context taking input from all other models. It uses 111 MB memory.

max.cfg does no preprocessing. It has 21 components: an order 0 ICM, a chain of order 1, 2, 3, 4, 5, 7 ISSE, an order 8 MATCH, a wordwise order 0-1 ICM-ISSE chain (for text), sparse order 1 ICM with gaps of 1, 2, and 3, a partially masked order 2 ICM with a gap of 216 for CCITT images (calgary/pic), order 0 and 1 mixers taking a CONST and all previous components as input and averaged together with a context free MIX2, followed by a chain of order 0 and 1 SSE each partially bypassed by a context free and order 0 MIX2, and a final context free MIX of all other components. The two wordwise contexts depend on the current and previous case insensitive sequences of letters in the range a-z. It uses 278 MB memory.

max3.cfg is a variation of max.cfg by Jan Ondrus (Sept. 10, 2009) using 550 MB memory and without a CCITT model.

max4.cfg is a variation of max3.cfg (Sept. 15, 2009) using 1465 MB memory.

drt is the dictionary preprocessor from lpaq9m by Alexander Rhatushnyak. The results include the dictionary file lpqdict0.dic compressed from 465,210 to 88,759 bytes in 8 seconds as a separate archive with max4.cfg and decompressed in 7 seconds, and drt.exe with a size of 15,548 bytes (whether uncompressed or as a zip file) with 38 seconds to encode enwik9 and 38 seconds to decode.

max_enwik9.cfg is a variation of max.cfg by Mike Russell, Sept. 11, 2009. It adds 5 more models for higher order contexts using an ISSE chain after the first order 5 mixer.

max_enwik9drt.cfg is a variation of max_enwik9.cfg, Sept. 18, 2009, modified to define word contexts for ASCII range 65-255 instead of A-Z,a-z because DRT encodes words using bytes in the range 128-255. The compressed size of lpqdict0.dic is 86810 bytes, 12+9 sec, compressed separately and added to the compressed sizes.

zpipe 1.00 is a ZPAQ compatible streaming file compressor that compresses or decompresses from standard input to standard output. It takes no options. It compresses equivalently to mid.cfg without storing a filename or comment. The decompresser outputs the contents of archives to a single file by concatenation.

bwt_j2.cfg implements an inverse BWT transform. It was written by Jan Ondrus, Oct. 6, 2009. The forward transform is implemented by an external preprocessor, bwtpre (included above) by Matt Mahoney, Oct. 6, 2009. bwtpre is based on BBB fast mode compression but does not itself compress. The argument “,18” tells bwt_j2.cfg to use a block size of 2^(10+18) - 256 bytes. Memory usage is 5x blocksize for both the preprocessor and postprocessor, plus 100 MB for the model. The ability of config files to call external preprocessors was added to zpaq v1.05 on Sept. 28, 2009. The ability to pass arguments was added to zpaq v1.07 on Oct. 2, 2009.

zpaq v1.08 (Oct. 14, 2009) adds the capability to compile ZPAQL configuration files and corresponding archive headers to C++ and link to a copy of itself to speed up compression and decompression. The program first looks for an optimized version of the program, writes and compiles it if needed, then runs it to compress or decompress. Some tests are shown for speed comparison. max.cfg was modified to use less memory. The arguments to min.cfg, mid.cfg, and max.cfg have the effect of improving compression at the cost of doubling memory for each increment.

bwt_slowmode1_1GB_block.cfg implements slow mode BWT transform using 1.25x blocksize memory based on BBB. The inverse transform was re-implemented in ZPAQL by Jan Ondrus, Oct. 15, 2009.

zpaq v1.09 is mainly a Linux port of v1.08 with some cosmetic improvements. Times for obwt_j2.cfg,18 are shown for comparison to v1.07 without optimization. Memory usage is 1838 MB for compression (includes preprocessor) and 1443 MB for decompression.

The c command followed by the name of a configuration file creates a new archive using that file. By default the archive header includes the file name (6 bytes), size (10 bytes), and SHA1 checksum (20 bytes). There are options to omit these and save 36 bytes. The “oc” command in zpaq v1.08 optimizes for speed.

zp 1.00 is a ZPAQ compatible archiver by Matt Mahoney, May 7, 2010. It is designed to have fewer options so it is easier to use. It has 3 compression levels: 1=fast, 2=mid, 3=max. It uses compiled ZPAQL code (like zpaq oc/ox) but without requiring an external C++ compiler to be installed. It automatically detects when an archive is compressed with one of these three models and decompresses with compiled code. Otherwise, it will decompress all other ZPAQ compatible archives with slower, interpreted code. Levels 2 and 3 are the same as zpaq mid.cfg and max.cfg. Only level 1 (fast) was tested because it uses a new model, fast.cfg, an ICM chain of length 2 with order 2 and 4 contexts. It is equivalent to compressing with zpaq ocfast.cfg.

pzpaq 0.01 (a predecessor to zp 1.02) is a free, open source file compressor and archiver by Matt Mahoney, Jan. 21, 2011. It uses a ZPAQ compatible format with speed optimizations for the 3 default compression levels supported by libzpaq, zpaq, and zpipe. It supports parallel compression and decompression by dividing the input into blocks which are compressed or decompressed at the same time in separate threads, writing the result to temporary files, and then concatenating them when done. For compression with N threads, the input is divided into N blocks of equal size by default, although a different block size can be specified. Larger blocks make compression better but reduce the number of threads that can run at the same time. Using more threads also increases the memory required. pzpaq can also compress or decompress multiple files at once to separate archives or pack them into a solid archive or an archive with the packed files split across blocks within the archive.

The version 0.01 distribution includes a 32 bit Windows executable and source code to compile for Windows or Linux. For Windows, the code must be linked with Pthreads-Win32 and pthreadGC2.dll is required at run time. The program size was calculated from the source code (including libzpaq) required for Linux, which has pthreads installed by default and is not included in the size.

The test results shown below are for 2 machines, a 2.67 GHz Intel Core i7 M620 with 2 cores and 2 hyperthreads per core, running 64 bit Linux (note 48), and a 2.0 GHz Intel T3200 with 2 cores without hyperthreading running 32 bit Windows (note 26). The Linux version was compiled with g++ 4.4.4 -O3 -s -march=native -DNDEBUG. The Windows version used the distributed pzpaq.exe and pthreadGC2.dll. It was compiled with g++ 4.5.0 -O2 -s -march=pentiumpro -fomit-frame-pointer. Times shown are wall (real) times, not process times, in nanoseconds per byte.

We observe the normal 3 way tradeoff between speed, memory, and compression. Compression levels -1, -2, and -3 require 38 MB, 112 MB, and 247 MB per thread respectively. The default is -2. -t selects the number of threads. The default is -t2. -b selects the block size. The default is the input size divided by the number of threads. The -m option limits memory usage in MB by reducing -t. The default is -m500. Selecting larger -m than required has no effect on compression, speed, or actual memory used. -m is only required with -3 -t3 or higher.

zp 1.02 is a successor to pzpaq, which was considered experimental. It adds two new BWT compression modes which replace the “fast” (-1) model. Option -m1 selects the faster BWT mode (bwtrle1), which consists of right-context sorting (using libdivsufsort by Yuta Mori), RLE encoding, and a single order 0 ICM with the RLE state (literal or count) as context. The BWT output is run length encoded by replacing runs of 2 to 257 identical bytes with 2 bytes and a count. The ICM maps the context to a bit history and then to a bit prediction, which is adjusted after coding to reduce the prediction error.
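The RLE scheme described above can be sketched as follows. This is an illustration of the format, not zp's actual code: a run of 2 to 257 identical bytes becomes the byte written twice followed by a count byte (run length minus 2); lone bytes pass through, and since runs are coded maximally, a literal byte in the output is never immediately followed by a copy of itself, so decoding is unambiguous.

```python
def rle_encode(data: bytes) -> bytes:
    # Replace runs of 2..257 identical bytes with the byte twice plus a
    # count byte (run length - 2). Single bytes pass through unchanged.
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        run = 1
        while i + run < len(data) and data[i + run] == b and run < 257:
            run += 1
        if run >= 2:
            out += bytes([b, b, run - 2])   # pair signals that a count follows
        else:
            out.append(b)                   # lone byte passes through
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if i + 1 < len(data) and data[i + 1] == b:
            out += bytes([b]) * (data[i + 2] + 2)   # pair + count byte
            i += 3
        else:
            out.append(b)
            i += 1
    return bytes(out)
```

Runs longer than 257 bytes are simply coded as several maximal runs in sequence.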

Option -m2 selects the better BWT mode (bwt2), which drops the RLE step and uses an order 0-1 ISSE chain. The order-1 ISSE adjusts the order-0 ICM prediction by mixing it in the logistic domain with a constant, such that the pair of weights is selected by an 8-bit bit history, which is selected by an order 1 context of the BWT output. After coding, the mixing weights are adjusted to reduce the prediction error.

Options -m3 and -m4 select the “mid” and “max” modes, the same as -4 and -5 respectively in pzpaq. The option -bN selects a block size of N*2^20 - 256 bytes. Memory usage per thread for the two BWT modes is 5 times the block size after rounding up to a power of 2. The default is -b32 which uses 160 MB per thread for -m1 and -m2. Memory usage for -m3 and -m4 is not affected by block size. Usage is 111 MB and 246 MB per thread for -m3 and -m4 respectively.
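As a check on the figures above, the per-thread memory for the two BWT modes can be computed directly. A small sketch (the function name is made up):

```python
def bwt_memory_mb(b: int) -> int:
    """Per-thread memory for zp's BWT modes: 5 times the block size
    (-bN gives a block of N*2**20 - 256 bytes) after rounding the
    block size up to a power of 2. Result in MB (2**20 bytes)."""
    block = b * 2**20 - 256
    power = 1
    while power < block:
        power *= 2
    return 5 * power // 2**20
```

For the default -b32 this returns 160, matching the 160 MB per thread quoted above.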

Other changes: there is no longer an option to limit memory. The default number of threads (-t option) is the number of cores. There is no solid mode compression because BWT requires that each block contain only one whole or part of a file. There is a separate decompresser, unzp, which is optimized for fast, mid, max, bwtrle1, and bwt2 modes, and can be configured to optimize for other models by generating, compiling, linking, and running C++ code for an optimized version of itself. Compressed sizes are based on the unzp source code (37,967 bytes).

zpaq 4.00 was released Nov. 13, 2011. It uses libzpaq v4.00, which internally translates ZPAQL into just-in-time (JIT) x86-32 or x86-64 code, which runs about as fast as the previous version that translated ZPAQL to C++ and compiled it. Unlike the earlier version, it correctly handles all legal ZPAQL, such as jumps into the middle of a 2 byte instruction, as occurs in max_enwik9.cfg. Like zp 1.02, it uses multi-threading and the same built-in compression levels -m1 through -m4.

Results are shown below for a 4 GB, 2.66 GHz Core i7 M620 (note 40), which has 2 cores with 2 hyperthreads each, running under Ubuntu 64 bit Linux. Compression and decompression times (wall times, ns/byte) are shown for 1 through 4 threads (-t1 through -t4) as the compression method (-m) and block size (-b) are varied. max_enwik9 runs in a single thread in a single block.

zpaq v6.12, Oct. 19, 2012, is a journaling, deduplicating, incremental archiver. These features were added in zpaq v6.00 on Sept. 26, 2012. It implements the level 2 ZPAQ standard introduced with libzpaq v5.00 on Feb. 1, 2012. The level 2 standard allows for uncompressed (but possibly pre/post-processed) data. The format is described in the ZPAQ specification v2.01.

zpaq v6.12 is designed for large backups. It will compress 100 GB to an external drive in a few hours, then perform daily incremental backups of files whose dates have changed in a few minutes. It recursively traverses directories, storing last-modified dates and attributes of added files.

A journaling archive is append-only. When a journaling archive is updated, it keeps both the old and new versions of each file or directory. The old version can be extracted by specifying a dated version, and any later updates are ignored.

Input is deduplicated before compression by dividing input files into fragments averaging 64 KB on content-dependent boundaries that move when data is inserted or removed. The archive stores fragment SHA-1 hashes and stores any fragment with a matching hash as a pointer to an existing fragment. Any remaining fragments are packed into 16 MB blocks in memory and compressed by multiple threads in parallel to memory buffers before being appended to the archive. After compression is completed, the fragment sizes and hashes are appended, and then a list of index updates in separately compressed blocks. Each update is either a deletion (filename only) or an update (filename, date, attributes, and list of fragment pointers).
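The deduplication bookkeeping described above can be sketched as follows. This is a simplified illustration: fragments are taken as given (boundary selection is a separate step), and only the hash-to-pointer logic is shown.

```python
import hashlib

def deduplicate(fragments):
    """Return (unique, pointers): bytes are kept only for the first
    occurrence of each SHA-1 hash; pointers[i] is the index into
    unique for the i-th input fragment."""
    index = {}                       # SHA-1 digest -> slot in unique list
    unique, pointers = [], []
    for frag in fragments:
        h = hashlib.sha1(frag).digest()
        if h not in index:           # first time this content is seen
            index[h] = len(unique)
            unique.append(frag)
        pointers.append(index[h])    # repeats become pointers
    return unique, pointers
```

A repeated fragment thus costs only a pointer in the index, not a second compressed copy.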

An update is performed as a transaction by first appending a temporary header, then the compressed data and index, and then finally going back and updating the header to store the compressed data size so that it can be skipped over when listing the archive contents or preparing a list of files to add or extract. If compression is interrupted or an error occurs, then the temporary header is not updated. If zpaq encounters a temporary header then it assumes that any data following it is corrupted and ignores it during extraction or listing, and overwrites it during the next update.

zpaq also has features to summarize the contents of archives containing millions of files, show update history and version dates, and compare and extract individual files and directories and rename them. Archives can be encrypted.

The deduplication algorithm uses a rolling hash of the input that depends on the last 32 bytes that are not predicted in an order-1 context. Missed predictions (from a 256 byte table) are counted as a heuristic to guess whether a block can be compressed. If not, then it is stored without compression as a speed optimization. There are 4 compression levels (-method 1 through 4). The threshold for compressing a block is 1/16, 1/32, 1/64, and 1/128 of bytes predicted by the order 1 model, respectively. Like earlier versions of zpaq, it also accepts configuration files and external preprocessors. These are always compressed.
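A content-dependent boundary finder in the spirit described above might look like the sketch below. The multiplier constants and the boundary test are made up for illustration (zpaq's actual constants differ); the idea shown is that predicted bytes use an even multiplier so their contribution decays out of the 32-bit hash quickly, leaving the hash dependent mainly on recent mispredicted bytes.

```python
def find_boundaries(data: bytes, mean_bits: int = 16):
    """Return end positions of fragments; the boundary test fires with
    probability about 2**-mean_bits per byte, giving an average
    fragment size near 2**mean_bits bytes (64 KB for mean_bits=16)."""
    o1 = [0] * 256        # last byte seen in each 1-byte context
    h = 0                 # 32-bit rolling hash
    c1 = 0                # previous byte (the order-1 context)
    boundaries = []
    for i, c in enumerate(data):
        if c == o1[c1]:   # predicted: even multiplier, contribution decays fast
            h = (h + c + 1) * 271828182 & 0xFFFFFFFF
        else:             # mispredicted: odd multiplier, contribution persists
            h = (h + c + 1) * 314159265 & 0xFFFFFFFF
        o1[c1] = c
        c1 = c
        if h < (1 << (32 - mean_bits)):   # top mean_bits bits all zero
            boundaries.append(i + 1)
            h = 0
    return boundaries
```

Because the boundary test depends only on nearby content, inserting or deleting bytes moves the boundaries locally without shifting every later fragment, which is what makes deduplication of edited files work.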

The journaling format is not compatible with zpaq versions prior to 6.00. Older versions would decompress a journaling archive to a set of jDC* files that could in theory reconstruct the data. To support older versions, there are three additional modes: streaming, solid, and tiny. In streaming mode, each file is compressed in parallel in a separate block, and large files are split into 16 MB blocks. In solid mode, all files are compressed to a single block in a single thread. Tiny mode is like solid mode except that comments (uncompressed sizes), checksums, and header locator tags (for error recovery) are not stored, saving a few bytes each. None of these modes support journaling, incremental backup, or deduplication, nor do they save file attributes or empty directories. An update appends to an archive without checking whether the files have been added before.

There are 4 built in methods. Method 1 is equivalent to “lazy” level 3. It is LZ77 using variable length codes to represent the lengths of literal byte strings or the length and offset of matches to earlier occurrences of the same string in a 16 MB output block. Matches are found by indexing a hash of the next 4 bytes in the input buffer into a table of size 4M which is grouped into 512K buckets of 8 pointers each. The longest match is coded, provided the length is at least 4, or 5 if the offset is greater than 64K and the last output was a literal. Ties are broken by favoring the smaller offset. Bucket elements are selected for replacement using the low 3 bits of the output count.

Literal lengths are coded using “marked binary” Elias gamma codes, where the leading 1 bit of the number is dropped, a 1 bit is inserted in front of each remaining bit, and a 0 marks the end. For example, 1100 is coded as 1,1,1,0,1,0,0. Matches are coded as a length and an offset. The length is at least 4. All but the last 2 bits of the length are coded as a marked binary. The number of offset bits is given in the first 5 bits of the code. If the code starts with 00, then a literal length and string of literals follow. Otherwise the 5 bits code a number from 0 to 23, and that number of bits, with an implied leading 1, gives the offset.
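The marked binary code can be sketched in a few lines (bit strings stand in for the coded bit stream):

```python
def marked_binary(n: int) -> str:
    """Code an integer n >= 1: drop the leading 1 bit, prefix each
    remaining bit with a 1, and terminate with a 0.
    E.g. 0b1100 -> '1110100', i.e. 1,1,1,0,1,0,0."""
    bits = bin(n)[3:]                       # binary digits after the leading 1
    return ''.join('1' + b for b in bits) + '0'

def marked_binary_decode(code: str) -> int:
    """Inverse: restore the implied leading 1, then read one data bit
    after each 1 marker until the terminating 0."""
    n, i = 1, 0
    while code[i] == '1':
        n = (n << 1) | int(code[i + 1])
        i += 2
    return n
```

The code is self-delimiting, so codes can be concatenated in a bit stream without separate length fields.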

The codes are not compressed further. They are stored in the ZPAQ level 2 format, consisting of a sequence of sub-blocks each preceded by a 4 byte header giving the sub-block size.

Method 2 is also LZ77, but the codes are byte aligned and context modeled rather than coded directly. It also searches 4 order-7 context hashes and 4 order-4 hashes, rather than 8 order-4 hashes like method 1. Method 2 first codes as follows, according to the high 2 bits of the first byte:

These codes are arithmetic coded using an indirect context model. The context depends on the parse state and in the case of literals, on the previous byte. An indirect context model maps a context into a bit history (represented as an 8 bit state) and then to a bit prediction. The model is updated by adjusting the prediction to reduce the error by 0.1%. A bit history represents a bounded pair of bit counts (n0,n1) and the value of the most recent bit. The bounds for (n0,n1) and (n1,n0) are (20,0), (48,1), (15,2), (8,3), (6,4), (5,5).

Method 3 applies a Burrows-Wheeler transform (BWT) using libdivsufsort-lite v2.0. This is equivalent to -m2 in older zpaq versions. The input bytes are sorted by their right contexts and compressed using an order 0-1 ICM-ISSE chain. The order 0 ICM (indirect context model) works as in method 2, taking only the previous bits of the current byte (MSB first) as context. The prediction is adjusted by an order-1 indirect secondary symbol estimator (ISSE). An ISSE maps its context (the previous byte and the leading bits of the current byte) to a bit history, and the history selects a pair of mixing weights to compute the weighted average of the constant 1 and the ICM output in the logistic domain, log(p/(1-p)). The output is converted back to linear, and the two weights are updated to reduce the prediction error in favor of the better model. In other words, the output is:

    p' = squash(w1·1 + w2·stretch(p)), where stretch(p) = ln(p/(1-p)) and squash(x) = 1/(1 + e^-x) is its inverse,

and after the bit is arithmetic coded, the weights w1 and w2 are updated:

    wi ← wi + λ·xi·(y - p'), where x1 = 1, x2 = stretch(p), y is the coded bit, and λ is the learning rate.

Method 4 is equivalent to mid.cfg or -m3 in older zpaq versions. It directly models the data using an order 0-5 ICM-ISSE chain, an order 7 match model, and an order 1 mixer which produces the bit prediction by mixing the predictions of all other components. The 6 components in the chain each mix the next lower order prediction using a hash of the next higher order context to select a bit history for that context, which selects the mixing weights. A match model has a 16 MB history buffer and a 4M hash table of the previous occurrence of the current context. If a match is found, it predicts the bit that followed the match with probability 1 - 1/(length in bits). The outputs of all 7 models are then mixed as with an ISSE except with a vector of 7 weights selected by an order 1 (16 bit) context, and with a faster weight update rate of about 0.01.
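Adaptive logistic mixing as described above can be sketched as follows. This is a generic floating-point illustration: zpaq's fixed-point arithmetic, context-selected weight vectors, and bit histories are omitted, and the 0.01 update rate is taken from the text.

```python
import math

def stretch(p: float) -> float:
    """Map a probability to the logistic domain, ln(p/(1-p))."""
    return math.log(p / (1 - p))

def squash(t: float) -> float:
    """Inverse of stretch: 1/(1 + e^-t)."""
    return 1 / (1 + math.exp(-t))

class Mixer:
    """Weighted average of several bit predictions in the logistic
    domain, with gradient weight updates after each coded bit."""
    def __init__(self, n: int, rate: float = 0.01):
        self.w = [0.0] * n
        self.rate = rate

    def mix(self, probs):
        self.t = [stretch(p) for p in probs]
        self.p = squash(sum(w * t for w, t in zip(self.w, self.t)))
        return self.p

    def update(self, bit: int):
        err = bit - self.p                  # prediction error for the coded bit
        for i in range(len(self.w)):
            self.w[i] += self.rate * err * self.t[i]
```

Over time the weights grow for whichever input model predicts the stream best, which is the sense in which the mixer adapts "in favor of the better model".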

With method 4 you can give an argument like “-method 4 1” to double the memory allocated to the components to improve compression. The same extra memory is needed to decompress. The default is 111 MB per thread. An argument n multiplies memory usage by 2^n. n can be negative.

Methods 1, 2, and 3 only work in journaling and streaming mode, since they have a 16 MB block size limit. Method 4 and configuration files work in all modes.

The following tests are on a 2.0 GHz T3200 with 2 cores. zpaq will automatically detect the number of cores and use the same number of compression or decompression threads, although this can be overridden.

zpaq v6.19, Jan. 23, 2013, moves the -solid and -tiny modes into a separate program, zpaqd, and eliminates -streaming. It adds 5 more compression levels (0 through 9). -method 5 is max.cfg, a 22 component CM with some of the component sizes reduced to use about 225 MB per thread. -methods 6 through 9 each double the memory size (450 MB to 1.8 GB) and block size (32 MB to 256 MB). All levels except 0 (store uncompressed) have an E8E9 pre/post-processor. -methods 0 through 4 are unchanged.

zpaq v6.34 has 7 compression methods as follows:

• 0 = deduplicate only, store uncompressed.
• 1 = LZ77 with variable length codes in 16 MB blocks (default).
• 2 = like 1 with longer search for matches and 64 MB blocks.
• 3 = byte aligned LZ77 with context modeling of literals and parse state.
• 4 = 3 or BWT, whichever is smaller.
• 5 = 3, 4, or 8-9 component CM, whichever is smaller.
• 6 = CM with about 20 components.

Methods 0 and 1 use 16 MB blocks by default. Methods 2..6 use 64 MB blocks. The size can be specified by a second digit N which selects 2^N MB blocks. Thus, the defaults are 04, 14, 26, 36, 46, 56, 66. Larger blocks compress better but require more memory per thread.

Methods 1..6 use heuristics to detect already compressed data and either store it or compress it with a fast method like 1 depending on the degree of compressibility. The heuristic depends on the 256 byte order-1 prediction table that is used to compute the rolling hash used in the fragmentation algorithm. The table is initialized to all zeros at each fragment boundary, and contains the last byte seen in each of 256 possible 1 byte contexts. If the data is random, then at each fragment boundary (average size 64K), the following properties are expected:

• The fraction of correct predictions is 1/256.
• The number of nonzero entries in the table (if at least 4K) is 1/256.
• The frequency distribution, weighting successive occurrences of the same value by 1, 1/2, 1/3… is about 205.
• The probability of each value matching any of the previous 4 tables is 1/256.

A compressibility statistic is calculated for each test, and the highest (least random) is used. When packing fragments into blocks, if the previous fragments are detected as random and a new file is started, then the block is passed to the compressor when it is 1/8, 1/4, or 1/2 full depending on the total compressibility. Otherwise the block must be at least 3/4 full and there is not room for the next file assuming no deduplication.
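The order-1 prediction table that drives these tests can be sketched as follows. This illustrates only the first property, the hit rate, not zpaq's full four-part statistic:

```python
def order1_hit_rate(fragment: bytes) -> float:
    """Fraction of bytes equal to the last byte seen in the same 1-byte
    context. Expected to be near 1/256 for random data and much higher
    for text or other compressible data."""
    table = [0] * 256      # last byte seen in each of 256 contexts
    hits = 0
    c1 = 0                 # previous byte = current context
    for c in fragment:
        if table[c1] == c:
            hits += 1
        table[c1] = c
        c1 = c
    return hits / max(1, len(fragment))
```

A fragment whose hit rate stays near the random-data expectation of 1/256 is a candidate for storing uncompressed.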

In addition, the order 1 tables are used to detect text and x86 (.exe) data types. Text is detected if at least 5 letter, digit, period, or comma contexts predict a space, minus any predicted characters in the range 1..8, 11, 12, 14..31, which normally do not appear in text files. If at least 1/4 of the fragments are detected as text, then methods 5 and 6 add extra models for it. x86 is detected if at least 5 contexts predict a 139 (an x86 MOV reg, r/m instruction). If at least 1/8 of the fragments are detected as x86, then an E8E9 pre/post processor is used in methods 1..6.

LZ77 and BWT removed the 16 MB block size limitation of the previous version. Variable length LZ77 adds an extra field of rb = 1..8 bits to represent the low bits of an offset up to 32 bits, where rb increases by 1 for each doubling of the block size over 16 MB. 2^rb - 1 is added to the offset, so that it requires an rb..rb+23 bit code.

Byte aligned LZ77 removed the limitation by eliminating the short code (3 bit length and 11 bit offset) and adding a code with 4 offset bytes. Lengths range from m..m+63 where m is the minimum match length, normally 8 when used with an order-1 context model.

BWT removes the block size limitation by removing the IBWT optimization of packing pointers and the byte pointed to into a single 32 bit linked list element when the block size is over 16 MB. No changes were required for higher compression levels.

zpaq versions since v6.22 support custom context models through the command line. When compressing enwik8 and enwik9 the following models are automatically generated:

The meaning is as follows.

x (experimental) rather than a digit selects a specific method which is the same for every block. It can also be s to add in streaming mode with each file in a separate block and large files split into blocks with no deduplication.

The first digit N1 after x selects a maximum block size of 2^(N1+20) - 4096 bytes. This is selected by the second digit of the method, if present, or else it defaults to 6 for methods 2..6 or 4 otherwise.

The second digit N2 selects the pre/post processing step. 0 means none. 1 means LZ77 with variable length codes. 2 means LZ77 with byte aligned codes. 3 means BWT. 4..7 means 0..3 with E8E9 filtering.

N3..N8 apply to the LZ77 modes only. N3 (4 or 8) is the minimum match length. N4 (8 or 0), if not 0, specifies a context order to search first. N5 (3 or 4) says to search 2^N5 contexts of each order to look for matches. N6 (24..27) specifies 2^N6 elements in the hash table for lookups. Each entry requires 4 bytes of memory. It defaults to the block size up to N1=26, then N1-1. N7 and N8 specify that the minimum match (N3) should be increased by 1 after a literal or match, respectively, when the match offset is greater than 2^N7 or 2^N8 respectively.

The sequence of strings starting with letters followed by a comma-separated list of numbers specifies various context models used by methods 3 and higher. c0 specifies an ICM (indirect context model: context to bit history to prediction). c1..c256 (used in -m 6) specifies a CM (context to prediction) with an update rate of 1/count and maximum count of N1*4-4, e.g. c256 specifies 1020. The remaining arguments to c default to 0. N2 describes any special contexts. N2 in 1..255 (e.g. c0,2) means offset mod N2. N2 in 1000..1255 means the distance to the last occurrence of N2-1000 (e.g. c0,1010 means how far from the last linefeed). N3 and up specifies byte masks starting with the most recent context byte (e.g. c0,2,0,255 means offset mod 2 combined with the second context byte (sparse model)). A value of 256..511 includes the byte aligned LZ77 parse state if applicable (e.g. c0,0,511 means the order 1 context plus parse state hashed together).

i followed by a list specifies a chain of ISSE components with each context order increasing by the specified amount by hashing it with the previous component, (e.g. ci1,1,1,1,2 specifies an order 0 ICM chained with order 1, 2, 3, 4, 6 ISSE). Each ISSE (indirect secondary symbol estimator) adjusts the prediction of the previous component in the bit history of the current context (hashed together with the previous component’s context).

a specifies a match model, which predicts the bit which followed the most recent occurrence of the current (normally high order) context. It can take parameters specifying buffer size, hash table index size and context order.

wN1 specifies a word model, an ICM-ISSE chain of increasing order from 0 to N1-1 in words rather than bytes. A word is defined as a sequence of letters converted to upper case, ignoring all other characters (e.g. w2 specifies an order 0 ICM and order 1 ISSE). It can take additional parameters specifying an alphabet range and a mask to convert case.
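The word definition used by the word model (maximal runs of letters, converted to upper case, everything else ignored) can be illustrated with a small sketch. The actual model hashes these words as contexts rather than tokenizing the input:

```python
def words(text: str):
    """Split text into 'words': maximal sequences of letters converted
    to upper case, ignoring all other characters."""
    out, cur = [], []
    for ch in text:
        if ch.isalpha():
            cur.append(ch.upper())
        elif cur:                      # non-letter ends the current word
            out.append(''.join(cur))
            cur = []
    if cur:
        out.append(''.join(cur))
    return out
```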

m specifies a mixer, which adaptively averages the predictions of all prior components. It can take a parameter (default 8) which is the number of bits of context to select the mixing weights (e.g. m16 is a byte-wise order 1 context). It takes additional parameters specifying update rate.

t is a MIX2 2-input mixer which averages just the last 2 components.

s is an SSE which adjusts the previous prediction like an ISSE but using a direct context instead of a bit history. It takes parameters specifying the number of context bits (e.g. s19 selects the current and previous bytes and the 3 high bits of the second byte), and additional parameters specifying initial and final update rates.

-m is short for -method. -th 1 (-threads 1) selects 1 thread. The default on the test machine is 4 (2 cores + 2 hyperthreads). It is also used in decompression to reduce memory.

The following table shows compression with the config file max5.cfg (Oct. 14, 2013). This is the same model as max_enwik9.cfg except that it was modified to take an argument that doubles the memory usage of most of the components for each increment. With argument 0, it is the same as max_enwik9. Compression was with zpaqd 6.33 (June 20, 2013), the development tool that accompanies zpaq and produces streaming mode archives from a config file. Thus, the command “zpaqd c max5 3 archive enwik9” compresses to archive.zpaq with 3 passed to $1 in max5.cfg. This has the effect of using almost 8 times as much memory for both compression and decompression as max_enwik9.

The archive was decompressed with both zpaq 6.42 (Sept. 26, 2013) and with tiny_unzpaq (Mar. 21, 2012, public domain) compiled with g++ 4.1.2 -O3 under Linux on the test machine, which has 20 GB of available memory. zpaq 6.42 is an archiver like zpaq 6.33 with a number of added features and bug fixes unrelated to compression. tiny_unzpaq is a stand-alone program that extracts only streaming mode archives and is designed so that the source code is as small as possible. It does not support JIT compilation of the ZPAQL code or multithreading, and has no error checking or help message. It takes an archive as an argument with no options and extracts to the saved names.

max6.cfg (Oct. 15, 2013) modifies max5 by rewriting the word model and adding models that count brackets (“[” minus “]” in range 0..2) and a column model (counts bytes after the last linefeed in range 0..64). It also changes the memory parameter from $1 to $3 so it can be passed to zpaq like “-m s10.0.5fmax6”. This means to choose streaming mode (s), a block size of 2^10 MB (10), no preprocessing (0), and to pass 5 as $3 selecting 14 GB (or 1 selecting 1.4 GB), using max6.cfg. For this test, tiny_unzpaq is used to extract when the decompresser is given as “sd”, although either program could be used.

zpaq 6.50, Mar. 21, 2014, uses 5 compression levels instead of 6. LZ77 when used in methods 2 and higher uses a suffix array to find matches. There are also other improvements in sorting files, grouping into blocks, detecting file type, detecting random data, and selecting compression algorithm based on type. Tests below used 4 threads.

.1440 drt|lpaq9m

lpaq versions 1 through 8 may be downloaded here. lpaq9* can be downloaded here or as a zpaq archive. The decompr8 series of Hutter prize entries (decompresser and enwik8 archive) are also listed here because they followed a period of development of the lpaq series.

Note: some of these programs are compressed with upack, which compresses better than upx. Some virus detectors give false alarms on all upack-compressed executables. The programs are not infected.

lpaq1 is a free, open source (GPL) file compressor by Matt Mahoney, July 24, 2007. It uses context mixing. It is a “lite” version of paq8l, about 35 times faster at the cost of about 10% in compression. The “9” option selects maximum memory. The options range from 0 (6 MB) to 9 (1.5 GB). Memory usage is 3 + 3*2^N MB, N = 0..9.

The compressor mixes 7 contexts: orders 1, 2, 3, 4, 6, a unigram word context (consecutive letters, case insensitive), and a matched bit context. The contexts (except the matched bit) are mapped to nonstationary bit histories using nibble-aligned hash tables, then mapped to bit prediction probabilities using stationary adaptive tables with bit counts to control adaptation rate. The matched bit context maps the predicted bit (based on a context match), match length and order-1 context (or order 0 if no match) to a bit prediction. The probabilities are combined in the logistic domain (log(p/(1-p))) using a single layer neural network selected by a small context (3 high bits of last byte + context order), then passed through 2 SSE stages (orders 0 and 1) and arithmetic coded. Except for one model for ASCII text, there are no specialized models for binary data, .exe, .bmp, .jpeg, etc.

lpaq2 by Alexander Rhatushnyak, Sept. 20, 2007, contains some speed optimizations.

lprepaq 1.2 by Christian Schnaader, Sept. 29, 2007, is lpaq1 combined with precomp as a preprocessor. precomp compresses JPEG files and also expands data segments compressed with zlib, often making them more compressible. This preprocessing has no effect on text files.

lpaq3 and elpaq3 by Alexander Rhatushnyak, Sept. 29, 2007, has two versions with the same source code. When compiled with -DWIKI, the result is elpaq3 which is tuned for large text files. The normal compile produces lpaq3.

lpaq3a by Alexander Rhatushnyak, Sept. 30, 2007, improves compression on some files over lpaq3 (but not enwik8/9). The archive also contains lpaq3e.exe, which is an archive compatible (Intel compile) of elpaq3.exe.

lpaq4 and lpaq4e (mirror) are by Alexander Rhatushnyak, Oct. 1, 2007. lpaq4e is tuned for large text files.

lpaq5 and lpaq5e are by Alexander Rhatushnyak, Oct. 16, 2007. Option 9 selects 1542 MB memory. lpaq5e is tuned for large text files. It includes separate programs for compression only (lpaq5e-c.exe) and decompression only (lpaq5e-d.exe). Tests were done with these programs, rather than the version that does both (lpaq5e.exe).

lpaq6 and lpaq6e are by Alexander Rhatushnyak, Oct. 22, 2007. Option 9 selects 1542 MB memory. lpaq6e is tuned for large text files. lpaq6 includes a E8E9 transform for compressing x86 executables.

lpaq7 and lpaq7e (mirror) are by Alexander Rhatushnyak, Oct. 31, 2007.

lpaq8 and lpaq8e are by Alexander Rhatushnyak, Dec. 10, 2007. The executables are packed with upack. zip -9 would make them larger.

lpaq1a by Matt Mahoney, Dec. 21, 2007, uses the same model as lpaq1 but replaces the arithmetic coder with the asymmetric binary coder from fpaqb.

lpq1 by Matt Mahoney, Dec. 23, 2007, is an archiver (not a file compressor) based on lpaq1 option 7.

drt|lpaq9e is by Alexander Rhatushnyak, Feb. 20, 2008. It is specialized for English text. It includes a separate program drt.exe (without source code) which performs a dictionary transform prior to compression with lpaq9e. The option 9 is for lpaq9e which selects maximum memory. The program size is computed by adding lpaq9e.exe, drt.exe, and the compressed dictionary, which must be uncompressed with lpaq9e before running. The size is smaller without a zip archive. Decompression consists of uncompressing the dictionary with lpaq9e, uncompressing the transformed file with lpaq9e, and reversing the transform with drt. Run times are for the sum of all three operations (1+62+2943, 1+2929+45 sec).

lpaq9f by Alexander Rhatushnyak, Apr. 27, 2008, works like lpaq9e. Run times are (2+55+2801, 2+2819+38 sec). drt uses 8 MB for compression and 4 MB for decompression.

lpaq9g by Alexander Rhatushnyak, May 23, 2008, works like lpaq9e. Run times are (2+51+2691, 2+2682+38 sec).

lpaq9h by Alexander Rhatushnyak, June 3, 2008, works like lpaq9e. Run times are (2+53+2530, 2+2529+44 sec).

lpaq9i by Alexander Rhatushnyak, June 13, 2008, works like lpaq9e. Run times are (2+59+2425, 2+2453+46 sec). drt.exe and the dictionary file (tmpdict0.dic) are unchanged in all versions starting with lpaq9f.

lpaq9j by Alexander Rhatushnyak, Aug. 17, 2008, has a new version of drt.exe and dictionary. Run times are (2+58+2365, 2+2358+48 sec).

lpaq9k is by Alexander Rhatushnyak, Sept. 30, 2008. Run times are (2+59+2336, 2+2346+47 sec). decompresser size is as 3 files (not zipped).

lpaq9l is by Alexander Rhatushnyak, Dec. 2, 2008. Run times are (2+41+2132, 2+2179+40 sec) on the computer described in note 26, and (2+58+2338, 2+2422+50) on the computer used to test all the earlier versions. decompresser size is as 3 files (not zipped).

lpaq9m (zpaq archive) is by Alexander Rhatushnyak, Feb. 20, 2009. Run times are (2+38+2067, 2+2111+38). decompresser size is 3 files (not zipped).

decomp8 is a Hutter Prize entry by Alexander Rhatushnyak, Mar. 23, 2009. It consists of a decompresser (Windows executable only) and an archive (archive8.bin) which decompresses to enwik8. There is no compressor. During decompression, the program creates a temporary file containing a dictionary similar to the one used in paq8hp12 and by drt. The command to decompress is “decomp8 archive8.bin enwik8”. The total size (not zipped) is 15,986,677 bytes.

decomp8b is an update to the Hutter prize entry decomp8 by Alexander Rhatushnyak, Apr. 22, 2009. Total size (not zipped) is 15,958,674 bytes.

decmprs8 is an update to the Hutter prize entry decomp8b by Alexander Rhatushnyak, May 23, 2009. Total size (not zipped) is 15,949,688 bytes. To decompress: decmprs8.exe archive8.dat enwik8

drt may be combined with other compressors to improve compression. The following were obtained using drt and tmpdict0.dic (from lpaq9i) with ppmonstr J (PPM). Option -m1650 selects 1650 MB memory. -r1 partially rebuilds the model when memory is exhausted. -o selects the PPM model order. Compression time is for ppmonstr only. Mem8 is actual memory used to compress enwik8.drt. enwik9.drt always uses 1650 MB. As a separate compressor, the compressor size would be 147,915 for a zip file containing drt.exe, ppmonstr.exe, and tmpdict0.pmm (tmpdict0.dic compressed with ppmonstr -m1650 -r1 -o64). Total size would be 148,047,289.

For drt 9j, the decompresser size is 149,468 and total size is 147,196,757.

The following shows the effects of drt from lpaq9m on enwik8. The first numeric column is the compressed size of enwik8. The second is the compressed size of the uncompressed dictionary (lpqdict0.dic, 465,210 bytes) concatenated with enwik8.drt (61,289,634 bytes) using compressor versions that were current as of June 26, 2010 unless indicated. The ratio shows the improvement due to preprocessing. The dictionary contains 44880 lowercase words. DRT replaces word occurrences with codes of 1 to 3 bytes and uses codes to indicate capitalized words or letters.
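A much-simplified sketch of a DRT-style transform: dictionary words are replaced with short byte codes, and a marker byte flags capitalization. The code assignment and the 0x01 capitalization flag here are hypothetical; the real DRT also escapes input bytes that collide with codes and handles more capitalization cases.

```python
import re

def drt_encode(text: str, codes: dict) -> bytes:
    """Replace dictionary words (lowercase keys in `codes`) with their
    byte codes. A hypothetical 0x01 marker flags a capitalized first
    letter so the transform stays reversible."""
    out = bytearray()
    # Tokenize into maximal letter runs and everything else.
    for tok in re.findall(r'[A-Za-z]+|[^A-Za-z]+', text):
        low = tok.lower()
        if low in codes:
            if tok[0].isupper():
                out.append(0x01)          # hypothetical capitalization flag
            out += codes[low]
        else:
            out += tok.encode('latin-1')  # non-dictionary text passes through
    return bytes(out)
```

Frequent words shrink to 1-byte codes, which is what makes the transformed file more compressible for a back-end like lpaq9m or ppmonstr.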

.1449 mcm

mcm v0.0 is a free, experimental, closed source file compressor by Mathieu Chartier, June 4, 2013. It uses CM. Options -1 … -9 select 8 MB to about 1500 MB memory.

mcm v0.2, June 11, 2013, has automatic detection of text and binary files with UTF modeling in text mode and sparse models in binary mode, an improved match model, and cache optimizations.

mcm v0.3 was released June 17, 2013.

mcm 0.4 was released as open source on July 17, 2013. To test, it was compiled with g++ 4.8.0 using the supplied make.bat file.

mcm 0.8 (discussion), was released Feb. 5, 2015. It uses LZP preprocessing with fast and high modes. The high mode (default, as tested) uses 8 context models and the fast uses 6. It was compiled in Linux/g++ 4.8.2 using the supplied make.bat file. Option -10 uses 2.9 GB memory. Option -11 (5.5 GB) was not tested.

mcm 0.82 was released Feb. 16, 2015. -max selects best compression (default is -high).

mcm 0.83 was released Apr. 5, 2015. -x10 and -x11 select the memory used for max compression. To test -x10, I compiled from source using the supplied make.sh in Ubuntu, g++ 4.8.2. -x11 was tested using optimized source with comments removed.

.1493 nanozip

nanozip 0.01a is a free, experimental, closed source GUI and command line archiver by Sami Runsas, July 14, 2008. For these tests, the command line version (smaller executable) was used. It compresses using several algorithms (fastest to best): LZP (options -cf and -cF), LZ77 (-cd, -cD), BWT (-co, -cO, uses 5N block size) and CM (-cc). The uppercase options (-cF, -cD, -cO) compress better but slower than the corresponding lowercase options and may use more memory. The default compression mode is -co (fast BWT). -m1500m selects 1500 MB memory, although the reported memory usage may differ and the actual memory usage (Cmem, Dmem, in MB) measured with Task Manager is usually lower than reported. The program will use less memory depending on available physical memory when run. -forcemem was used to override this. For all tests, -nm was used to turn off checksums and not store timestamps or file permissions. For -cO, the program uses a LZ77 variant (called LZT) instead of BWT for binary files. -txt is an optimization for text files with -co or -cO.

nanozip 0.03a was released July 31, 2008. Only -cc was tested.

nanozip 0.05a was released Oct. 20, 2008. Options are as in 0.01a and include -nm -forcemem.

nanozip 0.06a was released Feb. 13, 2009. Options are as in 0.01a and include -nm -forcemem. w32c creates a self extracting archive (.exe file).

nanozip 0.08a was released June 3, 2010. _64 refers to the Windows 64 bit version. w32c means to produce a self extracting archive. -nm means do not store metadata or redundancy information. -cc selects a context mixing model. -m2.6g means use 2.6 GB memory. enwik8 was tested with -m2g (uses 1670 MB).

nanozip 0.09a was released Nov. 4, 2011. Option w32c selects a self extracting archive, so the decompresser size is 0. Option -p4 runs multithreaded compression on 4 processors. Tested under 64 bit Linux.

### .1494 cmv

cmv 00.01.00 is a free, closed source, experimental file compressor for 32 bit Windows by Mauro Vezzosi, Sept. 6, 2015. It uses context mixing. Option “2,3,+” selects max compression (2), max memory (3), and a large set of models (+). A hex bitmap for this argument turns individual models on or off. Note 48 timings are for enwik8 only.

cmv 00.01.01 was released Jan. 10, 2016. It is compatible with 00.01.00 and does not change the compression ratio.

cmve 0.2.0 was released Nov. 28, 2017.

### .1512 xwrt

xml-wrt 2.0 is a free command line file compressor with source available, by Przemyslaw Skibinski, June 19, 2006. It uses LZMA (LZ77 + arithmetic coding) with preprocessing for modeling text, XML tags, dates, and numbers. It may also be used as a preprocessor for input to other compressors. Version 1.0 was strictly a preprocessor without built-in compression.

The -l6 option selects maximum LZMA compression. -b255 selects maximum buffer size of 255 MB for building a dynamic dictionary. -m255 selects maximum memory. -s turns off spaces modeling. -f8 sets the minimum word frequency for dictionary inclusion to 8 (default is 6).

xml-wrt 3.0 (Sept. 14, 2006) includes a stripped-down version of PAQ8 (-l11 option) in addition to LZMA compression.

xwrt 3.2 (Oct. 29, 2007) is a dictionary preprocessor frontend to LZMA, PPMVC and lpaq6 as well as a standalone preprocessor. Option -l14 selects lpaq6 option 9 (1542 MB). -b255 selects 255 MB memory (maximum) for building the dictionary. -m96 selects 96 MB buffer during compression. (Higher values cause an out of memory error). -s turns off space modeling. -e40000 limits the dictionary size to 40000 words. -f200 limits the dictionary to words that occur at least 200 times.

xml-wrt 2.0 and higher and xwrt 3.2 can be used as either a standalone compressor or as a preprocessor to other compressors. The table below shows the best known settings for enwik9 and enwik8 for xml-wrt 3.0 and 2.0 as a preprocessor to ppmonstr var. J, the best known combination for which xml-wrt improves compression. xml-wrt 1.0 is a preprocessor only. See also xml-wrt and xwrt as a standalone compressor.

xml-wrt 1.0 (XML Word Reducing Transform) is a free command line single file preprocessor with source code by Przemyslaw Skibinski, May 10, 2006. It is not intended to compress files by itself (although it does somewhat). Rather, it is intended to improve the compressibility of text and XML files by replacing common words and XML substrings with shorter symbols. (So it is actually LZW with a static dictionary prepended to the output). It improves compression for most programs except for those that already have English text models such as paq8h. Some additional results are shown below for combinations with some other compressors.
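The word-substitution idea can be sketched as follows. This is a toy illustration only, not XML-WRT's actual code or file format: the control bytes (\x01, \x02) and the dictionary layout are invented for the example, and it assumes plain text input containing none of those control bytes.

```python
# Toy word-substitution preprocessor in the spirit of XML-WRT:
# frequent words are replaced by short codes and a static dictionary
# is prepended to the output (control bytes are invented here).
import re
from collections import Counter

def wrt_encode(text: str, min_freq: int = 6) -> str:
    words = re.findall(r"[A-Za-z]+", text)
    common = [w for w, n in Counter(words).items() if n >= min_freq]
    # Assign short codes; real XML-WRT uses reserved byte ranges instead.
    codes = {w: f"\x01{i}\x02" for i, w in enumerate(common)}
    body = re.sub(r"[A-Za-z]+", lambda m: codes.get(m.group(), m.group()), text)
    header = "\x00".join(common)          # static dictionary, prepended
    return header + "\x03" + body

def wrt_decode(data: str) -> str:
    header, body = data.split("\x03", 1)
    words = header.split("\x00") if header else []
    return re.sub(r"\x01(\d+)\x02", lambda m: words[int(m.group(1))], body)
```

The transform is reversible, and because a back-end compressor sees the same short code for every occurrence of a frequent word, its context models converge faster, which is where the compression gain comes from.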

The following table shows the compressed size (without decompresser except SFX) of enwik8 before and after the XML-WRT transform with option -f180 for several compressors. A ratio less than 1 means that XML-WRT improves compression.

The -f option (default -f6) selects the minimum word frequency required to have it added to the dictionary. The optimal setting depends on the input size. When used with ppmd or ppmonstr (the best compressors improved by XML-WRT), the optimal settings are about -f180 for enwik8 and -f1800 for enwik9, which results in a dictionary of 7697 words for enwik8 and 6657 words for enwik9. The following table shows the effect of the -f and -o options for ppmonstr -m800 enwik9. The best combination found is -f1800 -o8.

The following table shows that the optimal setting for -f is lower for smaller files (with ppmd):

The default values of -s (disable spaces model) and -t (disable try smaller word) appear to work best on this data.

xml-wrt 2.0, released June 14, 2006 (updated June 19, 2006), has additional transform options, and also includes LZ77 (zlib) and LZMA (LZ with arithmetic coding) compression. When used as a preprocessor, this compression is turned off. enwik9 was compressed using the options:

The option -l0 turns off compression. -w turns off word containers. -s turns off space modeling (this hurts compression in version 1.0 but helps in 2.0). -c turns off word and number containers (independent of -w and -n. -n hurts compression). -b255 sets memory for the dictionary to 255 MB, the maximum. -m100 sets the memory buffer to 100 MB, which is not maximum (255 MB), but larger values hurt compression. -e10000 sets the dictionary size to 10000 words. (The dictionary size can also be controlled with -f as in version 1.0, but using -e is less dependent on input size so it helps with enwik8). Additional tests showing the effects of -e, -m, and -o:

The optimal values of -w -c -s -n (turn off number containers) and -t (turn off try shorter words) were determined on enwik7 and enwik8 but not tested on enwik9.

A bug fix for LZMA compression, released June 19, 2006, does not change any values for the June 14, 2006 version (using the -l0 option). However the compressed source code increases from 25,290 bytes to 25,354 bytes. The June 14 version is no longer published. The URL is unchanged.

xml-wrt 3.0 (Sept. 14, 2006) option -3 means to optimize the default settings for PPM compressors. Version 3.0 also has a FastPAQ8 compressor for standalone compression which was tested separately.

xwrt 3.2 (see below) with ppmonstr J has the following results.

ppmonstr option -o64 is optimal for enwik8, but -o10 is optimal for enwik9. -m1650 selects 1650 MB memory. xwrt option -2 optimizes for PPM. -b255 selects buffer size 255 MB for building the dictionary. -m255 selects 255 MB memory buffer. -s turns off space modeling. -f64 sets minimum word frequency for the dictionary to 64. Program size and times are xwrt + ppmonstr. Memory usage is 512 MB for xwrt, 1650 MB for ppmonstr.

### .1532 fp8_v3

fp8 v1 (fast paq) is a free, open source archiver by Jan Ondrus, May 2, 2010. It is derived from paq8px_v68. It has fewer models than paq8px for better speed but retains the models for wav, bmp, and jpg. The option -8 selects maximum memory.

fp8 v2, Apr. 10, 2012, has some modeling improvements.

fp8 v3, May 13, 2012, has some more compression improvements (at a slight cost in speed) and a JPEG bug fix.

tangelo 1.0, June 17, 2013, is a single-file compressor based on fp8. It removes the specialized models and preprocessors for exe, bmp, wav and jpeg types. It takes no options. It uses fixed memory of 567 MB, equivalent to fp8 -7.

tangelo 2.0, July 6, 2013, removed some models and made other simplifications for better speed and less memory but worse compression.

tangelo 2.1, July 20, 2013, faster with less compression.

tangelo 2.3, July 22, 2013, re-added APM for better compression, and minor changes for better speed.

### .1563 WinRK

WinRK 3.0.3 is a commercial GUI archiver by Malcolm Taylor (Mar. 6, 2006). It is top ranked on some benchmarks. Unfortunately it is not available for free download (as of May 16, 2006). The “free trial” expires as soon as you install it. (Update, Sept. 11, 2006: versions 3.0.2 and 3.0.3 are no longer available for download. They appear to have been withdrawn last month). WinRK in PWCM mode (PAQ Weighted Context Modeling) is based on the paq7/8 algorithm, with text dictionary preprocessing and specialized models for wav, bmp, and exe files; paq7/8 mix bitwise predictions from models with a neural network in the logistic (log p/(1-p)) domain. Version 3.0.2 was based on the earlier paq6 algorithm, which instead uses adaptive linear model mixing. The +td and -td options turn English dictionary preprocessing on or off respectively. 800MB selects the memory limit. When not specified, PWCM appears to allocate all available memory except for 8 MB.

RK and RKC are predecessors of WinRK so I don’t plan to test them.

### .1570 ppmonstr, ppmd, ppms

ppmonstr, ppmd, and ppms var. J are free command line file compressors by Dmitry Shkarin (model) and Dmitry Subbotin (range coder), Feb. 16, 2006. (ppms on Feb. 21, 2006). ppmonstr is a slower, experimental version of ppmd with better compression. Source code is available for ppms and ppmd but not ppmonstr. ppms is a small memory (1 MB) version of ppmd. They all use PPMII (PPM with information inheritance). The -m256 option selects 256 MB memory (maximum for ppmd). The -o10 option selects PPM order 10. (Higher orders use up memory faster which hurts compression). When ppmd runs out of memory, it discards the model and starts over. The -r1 option (default in ppmonstr) tells ppmd to back up and partially rebuild the model before resuming compression. The default options for ppmd are -m10 -o4 -r0 which are designed for reasonably good compression with high speed and low memory usage (see table below).
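The order-N-with-fallback idea behind PPM can be illustrated with a toy predictor. This sketch is far simpler than Shkarin's PPMII: it has no information inheritance, no exclusions, and does not fold the escape probability into the lower-order estimates; it only shows how prediction falls back from long contexts to short ones.

```python
# Toy PPM-style byte predictor with escape fallback (NOT PPMII).
# Symbol probabilities come from the longest context that has seen
# the symbol; each distinct symbol also pays one escape count
# (PPM method C), and unseen bytes get a uniform order -1 estimate.
from collections import Counter, defaultdict

class TinyPPM:
    def __init__(self, max_order: int = 3):
        self.max_order = max_order
        # one context table per order: context bytes -> symbol counts
        self.tables = [defaultdict(Counter) for _ in range(max_order + 1)]

    def predict(self, history: bytes, sym: int) -> float:
        for order in range(min(self.max_order, len(history)), -1, -1):
            counts = self.tables[order][history[len(history) - order:]]
            total = sum(counts.values())
            if sym in counts:
                # method C: escape mass = one count per distinct symbol
                return counts[sym] / (total + len(counts))
            # symbol unseen here: escape to the next shorter context
        return 1.0 / 256          # order -1: uniform over all bytes

    def update(self, history: bytes, sym: int) -> None:
        for order in range(min(self.max_order, len(history)) + 1):
            self.tables[order][history[len(history) - order:]][sym] += 1
```

This also shows why a higher -o costs memory: every order gets its own context table, and long contexts are nearly all distinct, which is why ppmd discards (or with -r1 rebuilds) the model when memory runs out.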

ppms accepts only options -o2 through -o8. The default is -o5. This also gives the best compression on enwik8. Task Manager shows 1.8 MB memory used.

ppmd was updated to J1 on May 10, 2006 to fix a bug. Compression benchmarks are unchanged except the size of the compressor (11,099 bytes as zipped source code). ppmonstr is unchanged.

### .1593 zcm

zcm v0.01 (discussion) is a free, experimental, closed source compressor for 32 bit Windows by Nania Francesco Antonio, Dec. 16, 2011. It uses context mixing. Commands c1 through c7 select memory usage for compression. Decompression uses the same memory. c7 uses the most memory and gets the best compression.

zcm v0.02 was released Dec. 23, 2011.

zcm v0.03 was released Dec. 28, 2011.

zcm v0.04 was released Jan. 30, 2012. (Program banner says v0.03).

zcm v0.11 was released Feb. 19, 2012. It is described as mixing 6 contexts. It detects file types and uses exe, delta, and LZP preprocessors. It has separate models for text and binary data. Speed and memory usage are the same for compression and decompression. Commands c0 through c7 select memory usage. Each increment doubles memory, resulting in better compression. Memory usage grows gradually as the program runs, up to a maximum value which is not reached on enwik8 for c5 and higher. For enwik8, c7 uses 1286 MB rather than 1716 MB.

zcm 0.20b was released Apr. 4, 2012. It is an archiver rather than a single file compressor. Option -m7 selects maximum memory usage (range 32 MB to 1.7 GB).

zcm 0.30 was released May 2, 2012.

zcm 0.40 was released May 16, 2012. It is described as using CM with 6 contexts, a mixer, and one re-mixer (APM or SSE) to adjust the mixer output. It uses LZP preprocessing.

zcm 0.50a was released June 2, 2012.

zcm 0.60d adds multithreading and other improvements. The -t option selects the number of tasks. -t0 auto-detects the number of cores, which is equivalent to -t2 on the dual core test machine (T3200, 3 GB). The default is -t1. The -m option selects memory usage from -m1 (46 MB per task) to -m7 (1.6 GB per task). The default is -m4. Parallel compression is performed by separate processes that can independently access 2 GB of memory each in 32 bit Windows. When run with -t2, there is also a third task using 5 MB of memory. All three tasks saturate one CPU core each. It was found that -t2 makes compression worse (probably by splitting the input in half and compressing each separately) and is not much faster than -t1. The -t option can also be given during extraction. If the archive was compressed with -t2 then extraction with -t2 doubles memory usage but only improves speed slightly. If compressed with -t1 then extraction with -t2 is 4 seconds slower for enwik8 than with -t1 because the extra task exits immediately and the third 5 MB task continues to run.

zcm 0.70b was released Oct. 14, 2012.

zcm 0.80 was released May 15, 2013. It was tested in Linux under Wine. When -t2 was used to compress in 2 threads, it was also used to extract.

zcm 0.88 (discussion) was released June 21, 2013. It was tested both in Windows and in Linux under wine.

zcm 0.90 was released May 3, 2014.

zcm 0.92 was released May 16, 2014. A 64 bit Windows version was released July 3, 2014. It supports the undocumented -m8 option using up to 3 GB memory.

zcm 0.93 was released May 12, 2015.

### .1598 slim

slim 23d is a free, closed source command line archiver by Serge Voskoboynikov, Sept 21, 2004. It uses a PPMII core (ppmd/ppmonstr) by Dmitry Shkarin with filters for special file types including text. The -m700 option selects 700 MB of memory. (I found -m800 causes disk thrashing at 1 GB). The -o10 option selects order 10 PPM. (-o12 and -o16 caused slim to fail on enwik9, creating an empty archive and exiting after about 60% completion with 1 GB. Smaller files were OK. There was no error with 2 GB).

As with other PPM compressors (ppmd, ppmonstr), using a higher order improves compression but consumes memory faster. For enwik8, -o32 is optimal with 700MB available, but lower orders are better for enwik9.

### .1605 bwmonstr

bwmonstr 0.00 is a free, experimental, closed source file compressor by Sami Runsas, Mar. 10, 2009. It uses BWT. The program takes no options. It loads the input file into a single block and allocates 1.25 times the block size in memory for either compression or decompression. Thus, it is able to transform enwik9 in a single block.

bwmonstr 0.01 was released Mar. 18, 2009.

bwmonstr 0.02 was released July 8, 2009. It uses a compressed representation internally, thus memory usage is less than the 1 GB block size. It compresses the entire input file in a single block and requires enough memory to hold the file. The program is multi-threaded even on a single block. Times shown are for a single core processor, but would be faster on a multi-core processor. reorder2 is an alphabet reordering program by Eugene Shelwien. drt is the dictionary preprocessor from lpaq9m by Alexander Rhatushnyak.

### .1617 nanozipltcb

nanozipltcb is a free file compressor by Sami Runsas, July 25, 2008. It uses BWT. It takes no options. It is a customized version of nanozip, similar to -cO -txt -m1700m, but tuned to this benchmark. Files compressed with nanozipltcb are not compatible with nanozip.

nanozipltcb 0.08, Mar. 3, 2010, is multithreaded and has other optimizations. Size is based on a self extracting archive. Only a 64 bit Windows version exists. Tested by the author on a quad core Q6600 at 3.0 GHz. The older version is withdrawn.

nanozipltcb 0.09 was released May 10, 2010. It has only a 64 bit Linux executable version.

### .1637 M03

M99 (mirror) is a free file compressor by Michael Maniscalco, originally written in 1999 and ported to Windows on Mar. 27, 2007. It uses BWT, based on MSufSort 3.1. M99 is a predecessor to M03. Command line is:

The block size can be specified in bytes (e.g. 10000) or with a k or m suffix for KB or MB (e.g. 100k or 100m). The memory requirement for compression is at most 6 times the block size, although in most cases only a little over 5 times the block size is used. Block size 239m divides enwik9 into 4 approximately equal parts and requires about 1500 MB memory.

Version 2.1 was released Apr. 19, 2007.

M99 2.2.1, released July 18, 2008, has an optimization to compress the contents of TAR files separately. For other files, it increases the size by 1 byte.

M03 v0.2a, Oct. 10, 2009, takes just one option, which is the block size in bytes. Memory usage is 6x block size for compression and 5x for decompression.

M03 v1.1 beta was released Oct. 24, 2011 for 64 bit Windows. It includes some new, fully parallel suffix sorting and BWT construction algorithms. The option 1000000000 specifies a single block requiring 5 GB memory to compress or decompress.

### .1638 glza

tree 0.1 is a free, experimental, open source compressor by Kennon Conrad, Mar. 31, 2014. It is a general purpose compressor optimized to compress text. The compressor is 3 separate programs. The first, TreeCapEncode.c, converts upper case letters to lower case plus special symbols. It takes 4 minutes. The second, TreeCompress.c, uses a suffix tree to parse the input into tokens. It takes 3 days, 21 hours, 37 minutes and uses 1850 MB memory. The third, TreeBitEncode.c, encodes the tokens using variable length codes. This takes 27 seconds. The decoder, TreeDecode.c, takes 22 seconds using 400 MB memory. Compressed size depends on available memory; thus results below are machine dependent.

tree 0.3 was released Apr. 27, 2014. It uses a model that only parses whole words with a leading space.

tree 0.4 was released May 21, 2014.

tree 0.5 was released May 25, 2014.

tree 0.9 was released July 5, 2014. It includes a multi-threaded decompression program for better speed. TreeCapEncode.c is now TreePreEncode.c and runs in 11 seconds.

tree 0.10 was released Aug. 15, 2014. Timings for each step are: TreePreEncode 20 s, TreeParagraphs 1485 s, TreeWords 393 s, TreeCompress 70732 s, TreeBitEncode 33 s, total 72663 s.

tree 0.11 was released Sept. 2, 2014. It uses extra symbol tables to improve compression ratio and decompression speed.

tree 0.12 was released Sept. 29, 2014 with a bug fix on Oct. 1, 2014. For note 48, the program was compiled with gcc 4.8.2 -O3.

tree 0.13 was released on Oct. 12, 2014. There is a 32 bit version that uses 1700 MB memory and a 64 bit version of TreeCompress.exe that uses 6x the input size in memory. The option (P+W+C) means that the two preprocessing stages TreeParagraph.exe and TreeWords.exe (same for 32 and 64 bit) were run on the input prior to TreeCompress.exe or TreeCompress64.exe. Otherwise only the last stage is run. The preprocessing stages make compression worse but faster.

tree 0.14 was released Oct. 29, 2014. The 64 bit version was tested.

tree 0.15 was released Nov. 21, 2014. 0.15a, Nov. 22, 2014, has a faster decompressor.

tree 0.16b was released Dec. 9, 2014.

tree 0.17 was released Dec. 16, 2014. Compression times and memory usage are approximate (unchanged since the last version).

tree 0.18 was released Jan. 17, 2015 with improvements to the 64 bit version. The -r option controls memory usage.

tree 0.19 was released Feb. 4, 2015.

glza 0.1 is the new name of the tree program, released Apr. 27, 2015. It uses adaptive order 0 arithmetic coding of dictionary symbols and other changes.

glza 0.2 was released May 24, 2015.

glza 0.3 was released July 13, 2015. Decompression requires 330 MB memory.

glza 0.3b was released Nov. 16, 2015. It contains the same files as v0.3a (a bug fix for v0.3) except that it also contains GLZAcompressFast (.c and .exe), which was tested below.

glza 0.4 was released Mar. 11, 2016.

glza 0.8 was released Sept. 27, 2016. The option -p3 selects a factor that favors longer strings over more compressive ones.

glza 0.10.1 was released Jan. 6, 2018.

### .1639 bcm

bcm 0.03 (discussion) is a free command line compressor by Ilia Muraviev, Feb. 9, 2009. It uses BWT with a fixed block size of 32 MB and an order 0 CM back end. It takes no command line options.

bcm 0.04 (discussion) was released Feb. 11, 2009. It increases the block size to 64 MB and has modeling improvements including interpolated SSE.

bcm 0.05 (discussion) was released Mar. 5, 2009. The option -b327680 selects 327680 KB block size. It uses 5x block size memory.

bcm 0.07 (discussion) was released Mar. 15, 2009.

bcm 0.08 (discussion) was released May 31, 2009. The command e370 means to use a block size of 370 MB. Memory usage is 5 times block size. Larger values gave an “out of memory” error under 32 bit Windows Vista with 3 GB memory. reorder v2 (discussion) is an alphabet reordering preprocessor for BWT compressors by Eugene Shelwien, May 26, 2009. xlt is a pair of 256 byte files that defines the alphabet permutation used by reorder, released June 4, 2009 by Eugene Shelwien.

bcm 0.09 (discussion) was released Aug. 19, 2009. Option -b328 selects a block size of 328 MB. Memory usage is 5 times block size for both compression and decompression.

bcm 0.10 (discussion) was released Dec. 11, 2009 in x64 and x86 versions. The x64 version is for 64 bit Windows; the x86 version is for 32 bit Windows. The -b option gives the block size in MB. Memory usage is 5x block size.

bcm 0.11 (discussion) was released June 22, 2010. It is described as a complete rewrite.

bcm 0.12 (discussion) was released Oct. 31, 2010. A 64 bit version was tested by the author with -b1000 on June 1, 2011.

bcm 0.14 (discussion) was released June 22, 2013. Only a 64 bit Windows version was released. Command c1000 means to compress in 1000 MB blocks.

bcm 1.00 (discussion) was released as open source (public domain), Mar. 2, 2015. It was tested by compiling with g++ 4.8.2 -O3 in Linux.

### .1640 bsc

bsc 1.00 x86 x64 is a free, experimental file compressor by Ilya Grebnov, Apr. 7, 2010. It uses BWT with LZP preprocessing. The option -b1000t selects a block size of 1000 MB and turns off multithreading (parallel compression on multiple cores). The memory requirement is 6x block size times the number of threads. Multithreading was turned off (-t) for both compression and decompression in order to maximize compression. Nevertheless, compression shows CPU utilization of 109% on 2 cores even with -t set. -p turns off LZP preprocessing. -m2 selects a sort (Schindler) transform of order 5.

Other options select LZP table size (default 2^18 bytes, with the exponent in the range 10..28), LZP match length (default 128, range 4..255), block sorting algorithm (default BWT, possible order 4 or 5 sort (Schindler) transform), and preceding or following context for sorting (default following). Only the defaults were tested, which may not be optimal. There are two versions: x86 for 32 bit Windows with a 2 GB memory limit, and x64 for 64 bit Windows with no memory limit. Notes apply to enwik9. enwik8 size is tested as in note 26.

bsc 1.03 x86 and x64 (discussion), Apr. 11, 2010, are bug fixes that do not change results except for the size of the program. The x64 version is 276,292 bytes.

bsc 2.00, May 3, 2010, is available with source code licensed under LGPL.

bsc 2.20, June 15, 2010, has speed improvements for multi-core support. -b1000p means use 1000 MB block size (-b1000, requires 5 GB memory) with no preprocessing (-p). -b80p uses 80 MB block size with no preprocessing. -m2f means use sort transform order 5 (-m2) and fast compression (-f). enwik8 was tested as in note 26 on bsc-x32 replacing -b1000p with -b100p.

bsc 2.26, July 26, 2010, has some speed improvements but retains compatibility with version 2.25. -b328 selects a block size of 328 MB, which divides enwik9 into 3 blocks. This is the fewest number of blocks supported by the x86 version because of a 2 GB process limit. The x64 version does not have this limit but requires 64 bit Windows. -t disables parallel block processing, which would double the memory requirement. -T disables all multicore processing. This gives a smaller compressed size but is slower than -t. -T or -t must be specified during decompression to prevent an out of memory error. With -t, CPU usage is 156% for compression and 129% for decompression on a dual core T3200 (2 GHz, 3 GB, Vista 32 bit).

bsc 2.4.5, Jan. 3, 2011, improves the speed of decompression. It remains compatible with the previous version.

bsc 2.5.0, Mar. 20, 2011, had no significant changes for the tests performed. Minor performance enhancements. CRC32 is replaced with Adler32.

bsc 3.0.0, Aug. 27, 2011 adds experimental NVIDIA (CUDA) GPU acceleration for forward sort transforms ST5 through ST8. ST7 and ST8 are GPU only. There are 32 and 64 bit versions. For the test shown, the 64 bit version was used. -b32 means to select 32 MB block size, -p disables preprocessing, -m8 selects order 8 sort transform, and -f selects fast compression. The test machine is a Core-i7 2600K (4 cores, 8 threads, 8 MB cache) overclocked from 3.4 GHz to 4.6 GHz, with a 384 CUDA processor GeForce 560Ti GPU, overclocked from 822 MHz to 900 MHz, with 2000 MHz memory speed. Compression takes 8.705 seconds using 1129 MB CPU memory and about 1 GB GPU memory. Decompression uses only the CPU, taking 18.595 seconds using 1395 MB memory.

bsc 3.1.0 was released July 8, 2012.

### .1640 bbb

bbb ver. 1 is a free, open source (GPL) command line file compressor by Matt Mahoney, Aug. 31, 2006. It uses a memory efficient BWT allowing blocks up to 80% of available memory. The transformed data is compressed with an order 0 PAQ-like model: the previous bits of the current byte are mapped first to a bit history, then through a 6 level probability correcting adaptive chain before bitwise arithmetic coding.

The m1000 command selects 1000 MB block size. Thus, enwik9 is suffix sorted in one block. This is accomplished by sorting 16 smaller blocks, writing the pointers to 4 GB of temporary files, and merging them. The inverse transform is done in memory without building a linked list. Rather, the next position is found by looking up the approximate location in an index of size n/16 and finding the exact location by linear search.
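For context, here is a textbook BWT round trip with an end-of-string sentinel. This is not bbb's implementation: bbb avoids both the O(n log n) rotation sort (it merges externally sorted sub-blocks) and the full next-pointer array this inverse implicitly builds, but the transform being computed is the same. The sketch assumes the input contains no zero bytes, which serve as the sentinel.

```python
# Textbook BWT with a \x00 sentinel (illustrative; bbb's actual
# implementation is far more memory efficient).
def bwt(s: bytes) -> bytes:
    s = s + b"\x00"                     # sentinel, assumed absent in s
    # sort all rotations; the transform is the last column
    rots = sorted(range(len(s)), key=lambda i: s[i:] + s[:i])
    return bytes(s[(i - 1) % len(s)] for i in rots)

def ibwt(t: bytes) -> bytes:
    # T[j] = position in t holding the j-th char of the sorted first
    # column (stable sort pairs up equal characters by occurrence).
    T = sorted(range(len(t)), key=lambda i: (t[i], i))
    out = bytearray()
    j = t.index(0)          # row whose last char is the sentinel
    for _ in range(len(t) - 1):
        j = T[j]            # step to the next rotation of the original
        out.append(t[j])
    return bytes(out)
```

The inverse shows why memory is the bottleneck: a naive decoder needs the whole transformed block plus an index of the same length in RAM, which is what bbb's n/16 sampled index and linear search trade time to shrink.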

bbb.exe Win32 executable compiled with MinGW g++ 3.4.2 and UPX 1.24w.

bbb Linux executable, supplied by Phil Carmody (Aug. 31, 2006). Compiled with g++-4.1 -Wall -O2 -o bbb bbb.cpp; strip bbb

bbb has a faster mode for both compression and decompression that does a “normal” BWT using 5x blocksize in memory. Output format is the same for fast and slow mode for both compression and decompression. A file compressed in fast mode can be decompressed in slow mode on another computer with less memory, and vice versa. The mode has no effect on the compressed file contents.

Recommended usage for best compression: For files smaller than 20% of available memory, use fast mode and one block. For example, if you have 1 GB memory (800 MB available under Windows) and foo is 100 MB:

If the file is 20% to 80% of available memory, use one block in slow mode. If foo is 500 MB:

If the file is over 80% of memory, use 80% of memory as the block size in slow mode. If foo is 1 GB:

The model requires about an additional 6 MB that should be subtracted from available memory.

bbb results by block size are shown below. Gain is the compression improvement obtained by using a larger block size. Gain(blocksize) is defined as C(blocksize/10)/C(blocksize) - 1 where C(x) means the compressed size of enwik9 with block size x. Compression times are for fast mode for block sizes 10 through 10^8 and slow mode for 10^9 on a 2.2 GHz Athlon-64 with 2 GB memory under WinXP Home SP2.

### .1647 pcompress

pcompress 3.1 is a free, open source (LGPLv3 and MPLv2) deduplicating archiver and file compressor by Moinak Ghosh. An Ubuntu build released Feb. 2, 2015 and updated Feb. 6, 2015 was tested. The option “-c libbsc” means to compress a single file using libbsc (BWT). -l14 selects maximum compression (default -l6). -s1000m selects 1000 MB block size (default -s60m). The compression algorithm is deduplication followed by dictionary preprocessing and BWT.

### .1652 paq9a

paq9a is a free, open source, command line archiver by Matt Mahoney, Dec. 31, 2007. It is a context mixing compressor with an LZP preprocessor to improve speed for highly redundant files. Matches to a context length of 12 or more are coded as 1 bit, and literals as 9 bits. Context mixing differs from paq8 in that it uses a chain of 2-input mixers rather than one mixer with many inputs. It mixes sparse order-1 contexts with gaps of 3, 2, 1, 0, then orders 2 through 6, then text word orders 0 and 1. Option -9 selects maximum memory.
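The byte-level LZP scheme described above can be sketched as follows. This is an illustration of the idea, not paq9a's code: a hash table maps the recent context to the position it was last seen; if the byte that followed last time matches the current byte, a 1-bit flag suffices, otherwise a flag plus literal is emitted (paq9a then codes these flags and literals with its context mixing model rather than at fixed widths).

```python
# Toy byte-level LZP (illustrative, not paq9a's implementation).
# ORDER follows the description above: contexts of length 12.
ORDER = 12

def lzp_encode(data: bytes, order: int = ORDER):
    table, out = {}, []            # context -> position of next byte
    for i, b in enumerate(data):
        key = data[i - order:i] if i >= order else None
        p = table.get(key)
        if p is not None and data[p] == b:
            out.append(None)       # match: a 1-bit flag in a real coder
        else:
            out.append(b)          # miss: flag + 8-bit literal (~9 bits)
        if key is not None:
            table[key] = i
    return out

def lzp_decode(tokens, order: int = ORDER) -> bytes:
    table, data = {}, bytearray()  # mirrors the encoder's table state
    for i, tok in enumerate(tokens):
        key = bytes(data[i - order:i]) if i >= order else None
        p = table.get(key)
        data.append(data[p] if tok is None else tok)
        if key is not None:
            table[key] = i
    return bytes(data)
```

On redundant input most tokens become 1-bit match flags, which is why LZP preprocessing speeds up the expensive context mixing stage: the mixer sees far fewer literal bytes.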

### .1662 uda

uda 0.300 is a free, experimental file compressor by dwing, July 16, 2006. It is a modification of PAQ8H with optimizations for speed. It takes no options. The decompresser size is for uda.exe, since this is smaller than the corresponding zip file.

### .1678 BWTmix

BWTmix v1 (from here) is a free, open source, experimental file compressor by Eugene Shelwien, June 28, 2009. It uses BWT (implemented using quicksort) followed by an 8 model CM mixed using a tree of 2-input mixers. The option c10000 selects a block size of 10000 * 100KB. The default block size is 100 MiB. Memory usage is 5x block size.

### .1694 lrzip

lrzip 0.40 is a free, open source file compressor by Con Kolivas, Nov. 26, 2009. It uses a range dictionary preprocessor to remove long range redundancies (based on rzip), followed by lzma (7zip) compression. It also has options to compress with lzo (lzop) or bzip2 after preprocessing, or to output the preprocessed data for compression with other programs. It runs under Linux.

lrzip 0.42 adds zpipe (zpaq cmid.cfg) as a back end compressor using option -z. It was tested in this mode.

lrzip 0.612 (discussion), Mar. 17, 2012, uses the current version of libzpaq (v5.01) for faster execution. The options select built in level 3 (max.cfg) compression.

### .1707 cm4_ext

cm0, cm0_ext, cm1 (discussion), and bwcm (discussion) are a series of free file compressors for Windows by Nauful. cm0 is a context mixing compressor released Dec. 4, 2013. cm0_ext is a slower version of the same program with better compression, also released Dec. 4, 2013. cm1 uses ROLZ and was released Dec. 5, 2013. bwcm uses BWT and was released Dec. 6, 2013. Only bwcm takes any options. The command c128 uses a 128 MB block size. The default is c16. It requires 12x block size in memory for compression and 5x for decompression. All programs are single-threaded.

cm4_ext was released Jan. 21, 2014. It is an order 10 CM with a match model and SSE.

### .1722 M1x2

M1 0.2a is a free, open source (GPL) file compressor by Christopher Mattern, released Oct. 3, 2008. It uses context mixing with only two contexts. The contexts are 64 bits with some bits masked out. The masks and several other parameters were selected by a combination of genetic and hill-climbing algorithms running for several hours to 3 days to optimize compression on this benchmark as discussed here.

M1 0.3 was released Jan. 2, 2009.

M1 0.3b was released Apr. 12, 2009. This version takes a configuration file created by an optimization version of the program. The configuration file is required by the decompresser (and is included in the program size).

e8-m103b1-mh is a parameter file for M1 0.3b obtained by mhajicek after about 3 days of CPU time running M1’s genetic optimization program on enwik8.

M1x2 v0.5-1 was released Dec. 8, 2009. The option 6 means to use 48 x 2^6 MB memory. The option enwik7.txt is an optimization file which resulted from tuning parameters on the first 10 MB of the benchmark by a separate optimization process. It must be specified during decompression. The file size (242 bytes) is included in the decompresser size. The program includes source code and compiled Windows and Linux versions. The Windows version was tested. The program is described as follows by the author:

M1x2 mixes two ordinary M1 models in the logistic domain (thus four models in total). Data is processed bitwise with a flat decomposition. Contexts are mapped to states, which represent bit histories encountered under the corresponding context. In this implementation contexts are restricted to byte masks with some tweaks for text; the context mapping is implemented using hash tables. Two bit history states s1, s2 are quantised Q(.,.) and mapped to a linear counter to produce a prediction p = P(y=1|Q(s1, s2)), where y is the next bit. Afterwards the two predictions are transformed into the logistic domain and mixed linearly. The final prediction is: p = Sq[ (St(p2)-St(p1))*w + St(p1) ]; St(.) and Sq(.) name stretch and squash (see PAQ). There is just a single weight w in [0, 1]. The predictions and the weight are updated to minimize coding cost. As in previous versions a genetic optimizer can tune all degrees of freedom to a training data set. Parameters include: contexts, state machine structure, counter and mixer settings.
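The author's mixing formula can be made concrete. In this sketch, stretch and squash are the logit and logistic functions as in PAQ, and the single weight w is updated by gradient descent on coding cost (for cost -log p(y) the gradient with respect to w works out to (p - y)(St(p2) - St(p1))); the learning rate value is an arbitrary choice for the example.

```python
# The M1x2 two-model logistic mix: p = Sq[(St(p2) - St(p1))*w + St(p1)],
# with the single weight w in [0, 1] trained to minimize coding cost.
import math

def stretch(p: float) -> float:       # St(p) = ln(p / (1 - p))
    return math.log(p / (1.0 - p))

def squash(x: float) -> float:        # Sq(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def mix(p1: float, p2: float, w: float) -> float:
    return squash((stretch(p2) - stretch(p1)) * w + stretch(p1))

def update_w(p1: float, p2: float, w: float, y: int,
             rate: float = 0.02) -> float:
    # d(-log p(y))/dw = (p - y) * (St(p2) - St(p1)); clamp w to [0, 1]
    p = mix(p1, p2, w)
    w -= rate * (p - y) * (stretch(p2) - stretch(p1))
    return min(1.0, max(0.0, w))
```

Note that w = 0 yields p1 exactly and w = 1 yields p2 exactly, so the weight interpolates between the two models along the logistic axis, and training shifts it toward whichever model has been predicting the observed bits better.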

m1x2 v0.6 (discussion), Feb. 8, 2010, preprocesses the input by pre-compressing it with an order-1 12 bit length limited Huffman code prior to compression with the context mixing model of v0.5-1. This improves speed by reducing the size of the input and improves compression because the context hash tables are not filled as quickly. The 7 option says to use 8 x 2^7 MB memory. The decompresser size includes the 242 byte configuration file enwik7.txt. The length limited Huffman codes are generated using an algorithm described by A. Turpin and A. Moffat in Practical Length-Limited Coding for Large Alphabets, The Computer Journal, 38, (5), 339-347, 1995.

### .1727 cmm4

cmm1 is a free, open source (GPL) file compressor by Christopher Mattern, Sept. 18, 2007. It uses context mixing with LZP preprocessing.

cmm2 was released Dec. 10, 2007 without source code.

cmm2 080113 was released Jan. 13, 2008 without source code.

cmm3 080207 (test release) was released Feb. 7, 2008 without source code.

cmm4 v0.0 (test release) was released Mar. 14, 2008 without source code.

cmm4 v0.1e was released Apr. 20, 2008 without source code. It takes a 2 digit option “wm” (e.g. 96 meaning w=9, m=6). Memory usage is 2^w MB for a sliding window, and 12*2^m MB for a context mixing model (order 1,2,3,4,6). On my machine m=7 caused disk thrashing.

Description by the author: CMM4 0.1e is a variable order context mixing coder; it predicts using the four “highest” (ranking: 643210) models in each bit coding step and, in addition, the match model input. Orders 0 and 1 are implemented using a table lookup; all higher orders use nibble based hashing. Matches are found using order 4 and 6 LZP; the pointers and a quick exclusion hash are stored within the model’s hashing tables. The mixer joins the 4 (or 5 in presence of a match model) predictions and outputs them to an SSE stage. A mixer (similar to (L)PAQ) is selected based on the last byte’s 4 MSBs and on the coding order. The SSE context is made of an order 0 context and a quantized combination of the previous symbol rank, the match length and the partially matched symbol. This results in a notable compression increase on redundant data. The model’s counters are quantized using PAQ’s state machine since CMM4 (will be replaced). Despite the use of hashing, most data structures are tuned to never cross a cache line per nibble (the models) or octet (the mixer) (only SSE does). The core compression performance is equivalent to LPAQ1/2, while being faster. In addition there’s a filter framework, which currently implements an x86 transform and will be extended.

### .1740 lstm-compress

lstm-compress is a free, experimental open source file compressor by Byron Knoll, June 15, 2017. It takes no options. It uses the LSTM neural network model and dictionary preprocessor from CMIX but omits the other models.

A new version v2 of lstm-compress was released Dec. 12, 2017.

v3 was released Mar. 30, 2019.

### .1741 ccm

ccm 1.03a is one of 3 versions of a free file compressor by Christian Martelock, Feb. 11, 2007. It uses context mixing. The 3 versions are ccm (fastest, uses 17 MB memory), ccm_high (slower but better compression), and ccm_extra (best compression, uses 100 MB memory). The programs take no options.

ccm 1.1.1a (Feb. 23, 2007) has only one version.

ccm 1.1.2a (Mar. 2, 2007) includes a ccm_low version using less memory, which was not tested.

ccm 1.20a (Mar. 21, 2007) has only one version.

ccm 1.20d (Apr. 8, 2007) has two versions: ccm using 99MB memory and ccmx using 210 MB for better compression. Only ccmx was tested.

ccm 1.21 (mirror) (Apr. 22, 2007) includes an option to select memory usage. 7 selects maximum memory, 1300 MB. Only the high compression version (ccmx) was tested.

ccm 1.30 (mirror) was released Jan. 7, 2008. Only ccmx 7 (high compression version, maximum memory) was tested.

### .1744 bit

bit 0.1 is a free, closed source file compressor by Osman Turan, Dec. 19, 2007. It uses ROLZ optimized for binary files. It takes no options.

bit 0.2b is an archiver, released June 14, 2008. Option -m lwcm selects the compression type (lightweight context mixing). This is the only type supported. Option -mem 9 selects maximum memory. This option ranges from 0 to 9 and uses 3 + 2^(opt+1) MB memory. The program uses order 1, 2, 3, 4, and 6 context mixing with 2 SSE stages as discussed here. Comments by author:

LWCX (Light-Weight Context Mixing) is a codec of BIT Archiver. It’s designed for getting a high compression ratio with acceptable speed (not fast enough currently). LWCX is a bit-wise context mixing scheme which tries to mix order-n models (order 012346). The statistics are gathered by counters which predict the next bit by a semi-stationary update rule. After gathering the predictions from all models, a neural network (similar to PAQ’s neural network) tries to output a new mixed prediction. The mixed prediction is processed by a 2D SSE stage which has 32 vertices. Finally, a carryless arithmetic coder codes the given bit with the final prediction.

Most of the data structures are designed to avoid cache misses. The order-0 and order-1 models’ statistics are stored in a direct lookup table. Higher order (order 2346) models’ statistics are stored in a large hash table. The hash table size can be selected by the “-mem N” option (memory usage is 3+2^(N+1) MB, N ranges from 0 to 9). The codec locates only one hash entry per coding nibble.
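
A semi-stationary counter update of the kind mentioned above can be sketched as follows (this is one common variant, assumed for illustration; it is not necessarily LWCX's exact rule):

```python
def update(n0, n1, bit):
    # Count the observed bit; decay the opposing count so that older
    # statistics fade and the prediction adapts to recent data.
    if bit:
        n1 += 1
        if n0 > 2:
            n0 = n0 // 2 + 1
    else:
        n0 += 1
        if n1 > 2:
            n1 = n1 // 2 + 1
    return n0, n1

def predict(n0, n1):
    # P(next bit = 1), with small offsets so it is never exactly 0 or 1.
    return (n1 + 0.5) / (n0 + n1 + 1.0)
```

Halving the opposing count on each observation makes the counter favor the most recent run of bits, which suits nonstationary sources like text.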

bit 0.7 has options -p=1 through -p=5 to select memory usage of 10 + 20*2^p MB.

### .1745 mcomp

mcomp x32 v2.00 is a free, closed source, command line file compressor by Malcolm Taylor (author of WinRK), released Aug. 23, 2008. It uses a large number of algorithms, although not the same ones as WinRK. There is a 32 bit version (mcomp_x32.exe) and a 64 bit version (mcomp_x64.exe) for Windows. Only the 32 bit version was tested (in 32-bit Vista). It displays the following help message:

pofile(s) means input file and output file. When run with no compression options, the program decompresses. Test results are as follows on a dual core 2 GHz Pentium T3200 with 3 GB as in note 26.

-mb produces bzip2 compatible format. -M has no effect. Memory usage is fixed at 4 MB.

-mc uses DMC. If memory is greater than -M512, then the program aborts with an assertion failed.

-md and -md64 are supposed to generate deflate and deflate64 formats (zip or gzip). However -mdf and -md64f (fast modes) crash immediately during compression. The other modes decompress to files that are the correct size but not identical to the original. Run times are very slow due to most of the CPU time spent in the kernel (up to 90%) as reported by timer 3.01.

-mp uses PPMD var. J, but allows more memory (up to about 1800 MB). The original program was limited to 256 MB. The optimal orders are different for enwik8 and enwik9. Higher orders help compression, but lower orders save memory on larger files. The maximum order is -o16. Higher values have no effect. Decompression is slow due to 55% of the CPU time being spent in the kernel. Normally this is around 1% and decompression speed would be the same as compression.

-msl and -msm ignore the -M option and use 1 MB memory, resulting in poor compression.

-mw (experimental BWT) is the only option that uses both cores. All others result in 50% CPU usage on a 2 core processor. The -M option actually selects the block size, not total memory usage. Memory usage is 5x block size if one core is used, or 10x if both are used. Both are used only if enough memory is available. The default is to split the file in half and compress the two halves in parallel. However, better but slower compression can be obtained by using -M to select one block for the whole file. Maximum memory is 2 GB, even if more is available. For enwik9, -M320 selects 3 blocks, which are compressed in series on one core. For two cores, time reported is wall time. Process time for -mw -M320m is 187% of wall time for compression and 139% for decompression.

### .1749 epmopt | epm

epmopt + epm r9 is an experimental, closed source command line optimizer and file compressor by Serge Osnach, Oct. 16, 2003. It was intended for enc r16, but development on that project has stopped at enc r15, according to the web page (in Russian). The program has two parts: epm, a PPM compressor with text preprocessing, and epmopt, which attempts to optimize the parameters to epm by compressing repeatedly and varying the options one at a time until there is no more improvement. The input to epmopt may differ from that of epm; it supports optimization on sets of files matching patterns in specified sets of directories. The options to epm are memory limit, PPM order, and 20 undocumented options each specified by a single digit. The exact same options must be passed to the decompresser. In the results, I added 27 bytes to the compressed file sizes to account for this information. enwik9 was compressed and decompressed as follows:

The optimization data was enwik6, the first 10^6 bytes of the input file. epmopt compressed this about 100 times in 368 seconds with different options, making 35 passes through the list of 20 undocumented parameters, adjusting each one up or down one at a time. The fixed parameters were -m800 (800 MB memory limit) and PPM order 12 (-fixedorder:12, also the first 3 digits of the parameter string. Allowing epmopt to set the PPM order on a smaller training file will cause it to choose too large a value, hurting compression. I only tested orders 10, 12, and 20 on enwik8, and 12 gave the best compression). The -n20 option tells epm to tune all 20 parameters. The parameter string is written to the file enc.ini. The -m800 option need not be the same for epmopt and epm but must be the same for epm during compression and decompression.

Warning: epm failed to decompress correctly on enwik7 (first 10^7 bytes). In the output, some linefeeds were changed to spaces. This happened with all parameter combinations I tested including defaults: epm c enwik7 enwik7.epm. Decompression was bit-exact for enwik5, enwik6, enwik8 and enwik9.

### .1749 WinUDA

WinUDA 0.291 is a free, closed source GUI archiver by dwing, July 4, 2005. It uses context mixing and is derived from paq6. Mode 3 is the slowest (about 3x slower than mode 0) and uses the most memory, 194 MB.

### .1755 dark

dark v0.51 is a free, closed source archiver by Malyshev Dmitry Alexandrovich, Jan. 2, 2007. It uses BWT + distance coding without preprocessors. The -b333m option selects 333 MB blocks. -f (-f0 in 0.40 and 0.46, not supported in 0.32) forces no segmentation. Memory usage is 5 times the block size for compression (6x prior to v0.46).

opendark ver. A is an open source version of dark. The supplied Windows dark.exe crashed when decompressing enwik9 (size is 177,675,818). Decompression works up to -b127m. opendark does not support the -f option.

### .1760 FreeArc

FreeArc 0.36 is a free, open source archiver by Bulat Ziganshin, Feb. 21, 2007. It incorporates 7 compression libraries - PPMd, GRZipII, LZMA (7zip), plus BCJ (7zip), REP (rzip-like), dynamic dictionary and LZP preprocessors. The option -m9 selects maximum compression (dict + LZP + PPMd for text files, REP+LZMA for binary). -lc1600000000 limits memory to 1.6 GB (same as -lc1600m). There is an option to use ppmonstr as an external compressor, which was not included in the test.

FreeArc 4.0 pre-4 is a free, open source archiver by Bulat Ziganshin, Dec. 16, 2007. It compresses using ppmd, GRZipII, and LZMA along with multimedia filters, a dictionary preprocessor and a REP preprocessor for removing repeating strings. It has Windows and Linux versions and an optional GUI.

ppmd generally gives the best compression for text. It will also call ppmonstr as an external program, but this mode was not tested, even though it compresses better.

For this test, the Windows command line version was tested. The option -mppmd:1012m:o13:r1 is equivalent to ppmd -m1012 -o13 -r1, selecting 1012 MB memory, order 13, and partial reinitialization of the model when memory is exhausted. Note that ppmd normally allows only up to -m256. This program was tested with 2 GB memory but values higher than -m1012 caused the program to crash during compression.

FreeArc 0.666 was released May 19, 2010. The 32 bit Windows console version was tested. -m9 selects maximum compression. There are many other compression options but these were not tested.

freearc 0.67a was released Mar. 15, 2014. Options -m1 to -m9 select the compression level from fastest to best. -m1x to -m9x select levels with fast decompression. Decompression was tested with the separate unarc.exe program.

### .1766 hook

hook v0.2 is a free, open source (GPL) command line file compressor by Nania Francesco Antonio, Jan. 8, 2007. It uses DMC: a state machine in which each state represents a bitwise context. Each state has 2 outgoing transitions corresponding to next bits 0 and 1, and a count n0 or n1 associated with each transition. Bit y (0 or 1) is compressed by arithmetic coding with probability ny/(n0+n1) (where ny is n0 or n1 according to y), and then ny is incremented.

After each input bit, the next state represents a context obtained by appending that bit on the right and possibly dropping bits on the left. States are cloned (copied) whenever the incoming and outgoing counts exceed certain limits. This has the effect of creating a new context in which no bits are dropped. In the example below, the state representing context 110 (dropping 2 bits from the previous context) is cloned by creating a new state 11110 because the incoming 0 transition count (ny for y=0) from state 1111 exceeded a limit. The new context is longer because it does not drop any bits. This transition is moved to point to the new state. Other incoming transitions (not shown) remain pointing to the original state. The outgoing transitions are copied. The counts of the original state are distributed to the new state in proportion to the moved transition’s contribution to those counts, which is w = ny/(n0+n1).

Normally, the initial set of contexts begin on byte boundaries. The cloning mechanism ensures that new contexts also have this property.

In hook v0.2, the counts are 32 bit floating point numbers initialized to 0.1. The initial state machine has 256*255 states representing bytewise order 1 contexts with uniform statistics. When memory is exhausted, the model is discarded and the state machine is reinitialized. A new state is cloned when ny > limit and n0+n1-ny > length, where limit and length are parameters. The optimal parameters for enwik8 and enwik9 are “c 7 2 6”, c means compress, 7 selects the maximum of 1 GB memory (64M states at 16 bytes each, minimum is 8 MB memory), 2 is the limit (range 1 to 7), and 6 selects a length of 32 (possible values are 1, 2, 3, 4, 8, 16, 32, 64). Larger lengths are better for large files because they conserve memory at the expense of compression.
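The predict-and-clone step described above can be sketched as follows (illustrative Python with my own names, not hook's actual source; `limit` and `length` are the thresholds described above):

```python
class State:
    def __init__(self):
        # Counts n0, n1 initialized to 0.1 as in hook v0.2.
        self.count = [0.1, 0.1]
        self.next = [None, None]  # outgoing transitions for bits 0 and 1

def predict(s, y):
    # Probability used by the arithmetic coder for bit y: ny / (n0 + n1).
    return s.count[y] / (s.count[0] + s.count[1])

def transition(s, y, limit, length):
    # Follow the y-transition from s; clone the target first when
    # ny > limit and the target's remaining count n0+n1-ny > length.
    t = s.next[y]
    ny = s.count[y]
    if ny > limit and t.count[0] + t.count[1] - ny > length:
        w = ny / (t.count[0] + t.count[1])  # this transition's share of t
        c = State()
        c.next = list(t.next)               # copy outgoing transitions
        c.count = [t.count[0] * w, t.count[1] * w]
        t.count = [t.count[0] - c.count[0], t.count[1] - c.count[1]]
        s.next[y] = c                       # move the transition to the clone
        t = c
    s.count[y] += 1                         # update the coded bit's count
    return t
```

The clone receives the fraction w = ny/(n0+n1) of the original state's counts, so total statistics are conserved while the new, longer context starts with statistics proportional to how often it was actually reached.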

hook v0.3 (Jan. 11, 2007) allows up to 1.8 GB memory (first option = 9) and uses double precision predictions in the 32 bit arithmetic coder.

hook v0.3a (Jan. 12, 2007) initializes the counts to 0.125 (instead of 0.1) and uses 24 bit precision in the arithmetic coder (instead of 32 bit).

hook v0.4 (Jan. 15, 2007) initializes counts to 0.1. Argument 2 selects length 3 (not 2).

hook v0.5b (Jan. 22, 2007) adds an LZP preprocessor. If the next byte to be coded is the same as the byte that occurred in the last matching 3 byte context, then this is indicated by coding a flag bit in an order 3 model (32 MB memory), and a match length coded by DMC with a fixed size of 128 MB. If there is no match, then the literal byte is coded by another variable sized DMC model. The parameters “c 1600000000 2 64 1 6” select compression (c), 1.6 GB for the DMC literal model (1600000000), a limit of 2 (minimum count for the cloned state), length of 64 (minimum remaining count for the state to be cloned), LZP selected (1), and a minimum match length of 6.

hook v0.6 (Feb. 7, 2007) removes the “length” parameter (effectively infinite). The arguments “c 1600 4 1 6” mean to compress (c), use 1600 MB memory, set the “limit” parameter to 4, turn on LZP preprocessing (1) with a minimum match length of 6. The “limit” parameter is the minimum count for an outbound DMC state transition to clone the state. Limit was tuned on enwik8.

hook v0.6b (Feb. 8, 2007) includes support for files up to 2^64 bytes (compiled by Ilia Muraviev; earlier versions were compiled with MinGW g++ 3.4.5 by Matt Mahoney). “limit” was tuned on both enwik8 and enwik9. Higher values conserve memory at the expense of compression on smaller files.

hook v0.6c (Feb. 14, 2007) stores the input filename in the compressed file and uses it during decompression.

hook v0.7 (Mar. 10, 2007) uses 325 MB more memory than advertised so it was tested with a lower option.

hook v0.7b (Mar. 12, 2007) reduces the excess memory to 94 MB.

hook v0.8 was released Mar. 17, 2007. Some additional results on enwik9 with settings that decrease the rate at which the state machine fills up and is flushed:

hook v0.8b (Mar. 18, 2007) has some LZP improvements.

hook v0.8c (Mar. 19, 2007) is a minor bug fix. Compressed sizes are 1 byte larger than v0.8b.

hook v0.8d was released Mar. 21, 2007.

hook v0.8e was released Mar. 27, 2007.

hook v0.9 (Apr. 6, 2007) is closed source. It requires a processor that supports SSE instructions. It has some speed improvements and an E8/E9 filter for improved compression of .exe files. Memory usage is the second argument + 60 MB.

freehook 0.2 is an open source port of hook v0.8e from C++ to C by Eugene Ortmann, Apr. 7, 2007. The supplied .exe file requires SSE instructions (Pentium 3 or higher), but the source can be recompiled for other processors.

hook v0.9b (Apr 10, 2007) replaces floating point arithmetic with integer arithmetic, so that archives are compatible across different processors. Note: I reduced the memory setting from 1800 to 1700 to prevent disk thrashing, which was a problem in earlier tests. I will do this from now on. This hurts enwik9 compression (but not enwik8) slightly, from 180,444,546 to 180,582,601. Actual memory usage is 60 MB over.

freehook 0.3 (Apr 10, 2007) has only very minor changes from 0.2 but is slightly faster due to different g++ compiler options. Compression is the same as 0.2. Memory usage is about 160 MB over.

hook v0.9c (May 8, 2007) has some speed improvements in the arithmetic coder. It compresses the same size as v0.9b.

hook v1.0 (Sept. 20, 2007) is closed source. The only option is memory size in MB.

The zip file linked above contains all versions (C++ source and Win32 .exe).

hook 1.1 (Nov. 13, 2007) improves BMP and WAV compression.

hook 1.3 was released Dec. 14, 2007, modified Dec. 15, 2007.

hook 1.4 was released Apr. 29, 2009.

### .1789 7zip

7zip 4.42 is an open source GUI and command line archiver by Igor Pavlov, May 14, 2006. It compresses to 7z, zip, gzip, ppmd.H and tar format, optionally encrypts with AES, and will uncompress several other formats.

7z is the default format. It uses LZMA compression, a variation of LZ77. The option -mx=9 selects ultra (maximum) compression in this mode. The option -sfx7zCon.sfx creates a console-based self extracting executable by prepending a 131,584 byte decompresser. This is slightly smaller than the Windows GUI version (132,096 bytes) and much smaller than the decompression program itself as a zipped self extracting download (817,795 bytes). The best compression is with ppmd. The options are -m0=ppmd:mem=768m:o=10 equivalent to ppmd var H (with minor changes) order 10 with 768 MB memory. 7zip 4.46a was announced May 21, 2007. (The improved compression is due to testing with more memory).

7zip 9.04a was released Dec. 3, 2009. It gave an out of memory error with mem=1630.

7zip 9.20 was released Nov. 18, 2010. Default (LZMA) mode was tested. It uses 196 MB for compression using 75% of 2 cores, and 18 MB for decompression on a 2.0 GHz T3200 under Windows.

The following include the best known option combinations for 7zip on enwik8 in ppmd (PPM), 7z (LZMA), bzip2 (BWT) and zip (LZ77) formats.

### .1789 rings

rings 0.1 is a free, closed source, experimental file compressor by Nania Francesco Antonio, Sept. 21, 2007. It uses LZP with order-2 coding of literals and arithmetic coding. It takes no command line options.

rings 0.2 (Nov. 16, 2007) includes improved BMP, WAV, TIFF, and PGM filters.

rings 0.3 was released Dec. 21, 2007.

rings 1.0 was released Feb. 8, 2008. It uses 50 MB for compression and 43 MB for decompression.

rings 1.1 was released Feb. 13, 2008 with same memory usage. It uses CM with LZP preprocessing for faster compression.

rings 1.2 was released Mar. 4, 2008 with the same memory usage.

rings 1.3 was released Apr. 2, 2008. It uses 54 MB for compression and 47 MB for decompression.

rings 1.4c was released Apr. 14, 2008. It has an option (1-9) which selects memory usage. Each increment doubles usage. Memory usage and run time are greater for decompression than compression. For option 9, compression uses 526 MB and decompression uses 789 MB. The program uses BWT. The transformed data is encoded using MTF (move to front), pre-Huffman coding followed by arithmetic coding.
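
The MTF stage of the pipeline described above can be sketched as follows (BWT and the entropy coding stages are omitted; this is illustrative, not rings' source):

```python
def mtf_encode(data):
    # Move-to-front: emit each byte's position in a recency-ordered
    # list of all 256 byte values, then move that byte to the front.
    # Recently seen bytes get small codes, which an entropy coder
    # can then compress well.
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)
    return out

def mtf_decode(codes):
    # Exact inverse: maintain the same recency list.
    table = list(range(256))
    out = []
    for i in codes:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)
```

After a BWT, runs of identical bytes become runs of zeros under MTF, which is why it pairs well with run length and arithmetic coding.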

rings 1.5 was released Apr. 21, 2008. It improves compression and is symmetric with regard to memory usage. Options are like 1.4c. The table below compares timing results on my old and new computers.

rings 1.6 was released Aug. 16, 2009. The option ranges from 1 to 10, where 10 uses the most memory. It includes a Linux version (18,348 bytes zipped) which was not tested.

rings 2.0 (discussion) is a multi-threaded archiver rather than a file compressor. It uses BWT. It has an interface similar to zcm. Option -m7 selects maximum block size of 100 MB using 500 MB memory per thread. Option -t1 or -t2 selects 1 or 2 threads. On a 2 core machine, selecting 2 threads shows 3 processes in Windows Task Manager, two of which use 500 MB memory and I/O dividing the input and output files, and one process using 7 MB with several GB of input and a lot of kernel CPU time. These 3 processes must share 2 cores. As a result, it runs slower than 1 thread.

rings 2.1 (discussion) was released May 23, 2015.

rings 2.2 was released May 28, 2015. -o option enables multi-threaded compression.

rings 2.5 was released June 6, 2015. Option -o was removed. The 64 bit version was tested.

### .1803 pimple2

pimple 1.43 beta is a free, closed source GUI archiver by Ilia Muraviev, Apr. 24, 2006. It uses context mixing.

pimple2 is a command line file compressor, June 11, 2007.

### .1807 ash

ash 04a is a free, experimental command line file compressor by Eugene D. Shelwien, Dec. 5, 2003. The /m700 option selects a 700 MB memory limit (/m800 causes disk thrashing with 1 GB). /o10 selects model order 9. This gives good results on smaller files when memory is constrained, but I did not try to optimize it. There is a /s option to select SSE depth; the default /s5 gives good results, so I did not try to optimize it either. Other results:

Note: the actual memory usage (commit charge) for enwik9 /m700 /o8 was 1910 MB at the end of compression, minus 257 MB for other programs, according to Windows Task Manager. This is generally not a problem if your swap file is large enough. It appears to be a slow memory leak (recovered when the program exits) and does not cause thrashing.

ash /m1700 /o10 and /o12 failed to compress enwik9 with 2 GB memory (error: could not allocate a block). enwik8 compressed to 19,713,239 using /o10 and 19,446,859 using /o12.

### .1807 bce3

bce3 is a free, open source (Apache), experimental file compressor by Christoph Diegelmann, Mar. 16, 2015. It uses an order-n bitwise context model where the model is computed using BWT and encoded and transmitted to the decoder. Memory usage is 5 times the file size. The program takes no options. I tested by compiling with g++ 4.8.3 in Ubuntu Linux.

### .1823 ocamyd

ocamyd 1.65.final is a free, open source command line file compressor by Frank Schwellinger, May 25, 2006. It uses DMC. The -s0 option selects slowest (maximum) compression. The -m8 option selects 800 MB memory (maximum is -m9 = 900 MB).

ocamyd LTCB 1.0 is a modification by Mauro Vezzosi on June 20, 2006 of Frank Schwellinger’s ocamyd-1.65-final. The option -s0 selects maximum compression. -m3 selects 300 MB memory (the maximum for the test machine), but it supports up to -m8.

ocamyd 1.66.final, by Frank Schwellinger, Feb. 1, 2007, includes the -f option to prevent flushing and rebuilding the DMC model when memory is exhausted.

The following table shows the effect of the -s and -m options on ocamyd 1.65.final on enwik8. Times are in ns/byte, process (kernel+user) time by timer 3.01, ~ indicates global (wall) time.

### .1824 bee

bee 0.78 build 0154 is an open source (Delphi Object Pascal) command line archiver (with optional GUI) by Andrew Filinsky and Melchiorre Caruso, Sept. 23, 2005. It uses PPM. The -m3 option selects maximum compression (default is -m1). The -d8 option selects 512 MB memory, the maximum that does not cause disk thrashing (default is -d2 = 10 MB).

bee includes beeopt, a parameter optimizer similar to epmopt. This was not tested. bee comes preconfigured with parameters trained on .txt and .xml files (and other types) in file bee.ini. This was tested by renaming enwik7 (first 10^7 bytes) to enwik7.txt and enwik7.xml but compression was worse. The executable size is a zip archive containing bee.exe and bee.ini. This is much smaller than the zipped source code download.

### .1829 uhbc

uhbc 1.0 is an experimental, closed source command line file compressor by Uwe Herklotz, June 30, 2003. It uses BWT. The -b100m option selects 100 MB block size, which requires 800 MB for compression and 500 MB for decompression. -m3 selects maximum compression for the entropy coding stage, which consists of run length coding (RLE) + DWFC (double weighted frequency counting) + entropy coding. WFC is described in Deorowicz, S., Improvements to Burrows–Wheeler compression algorithm, Software–Practice and Experience, 2000; 30(13):1465–1483.

Additional results on enwik8:

### .1831 smac

smac v1.8 (discussion) is a free, experimental file compressor for Windows by Jean-Marie Barone, Jan. 22, 2013. It uses an order-4 bitwise context model and arithmetic coding. It takes no options. Source code is in x86 assembler.

smac v1.9, Jan. 31, 2013, uses an order 4 and order 6 context model and chooses at each bit the model whose prediction is further away from 1/2.

smac v1.10, Feb. 7, 2013, uses a nonstationary model like PAQ6. When a bit count is incremented, half of the other bit's count in excess of 2 is discarded.

smac v1.11, Feb. 18, 2013, switches between order 6, 4, and 3 context models depending on which prediction is furthest away from 1/2. For files smaller than 5 MB, it switches between lower order contexts.

smac v1.12a, Mar. 11, 2013, uses indirect context models. The context is mapped to a 16 bit state representing the number of 0 and 1 bits as 7 bit counters, plus the last 2 bits. When the counters reach the maximum value of 127, they are both halved and incremented. v1.12a is a speed improvement over v1.12 (released the day before) using prefetch instructions.
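
A sketch of such a 16 bit state (in Python for clarity; smac itself is x86 assembler, and the field layout chosen here is my own assumption for illustration):

```python
def pack(n0, n1, last2):
    # Pack two 7-bit counters and the last 2 bits into a 16-bit state.
    # (The field order is assumed; smac's actual layout is not documented here.)
    return (n0 << 9) | (n1 << 2) | last2

def unpack(state):
    return (state >> 9) & 0x7F, (state >> 2) & 0x7F, state & 3

def update(state, bit):
    # Count the observed bit; when a counter would exceed the 7-bit
    # maximum of 127, both counters are halved and incremented, as
    # described above. The last-2-bits field shifts in the new bit.
    n0, n1, last2 = unpack(state)
    if bit:
        n1 += 1
    else:
        n0 += 1
    if n0 > 127 or n1 > 127:
        n0 = n0 // 2 + 1
        n1 = n1 // 2 + 1
    return pack(n0, n1, ((last2 << 1) | bit) & 3)
```

Mapping contexts to such compact states instead of raw counts is what makes the model "indirect": the state, not the context, selects the prediction.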

smac v1.13, Mar. 22, 2013, mixes the order 6, 4, and 3 indirect context models in the logistic domain, log(p(1)/p(0)). Each prediction has a fixed weight of 1/3.

smac v1.14, Apr. 20, 2013, uses adaptive mixer weight update with a learning rate of 0.002.

smac v1.15, May 19, 2013, uses an order 6-4-3-2-1 context mixing algorithm.

smac v1.16, July 30, 2013, has improvements to the context bit history model and match model.

smac 1.17 (discussion), Nov. 1, 2013, has some speed optimizations and small changes in the bit history counter rounding and use of floating point lookup tables.

smac 1.17a (discussion), Nov. 17, 2013, has some speed improvements with no change in compression.

smac 1.18 (discussion), Dec. 8, 2013, uses a polynomial function to compute squash() to improve speed.

smac 1.19 (discussion), Dec. 17, 2013, has a speed optimization of the squash function.

smac 1.20, Jan. 16, 2014, improves modeling of 0 frequency counts using a Laplace estimator, p=(n0+1)/(n0+n1+2).
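
The estimator is small enough to state directly (a sketch; the counter names follow the formula above):

```python
def laplace(n0, n1):
    # Laplace (add-one) estimator p = (n0 + 1) / (n0 + n1 + 2):
    # the probability is never exactly 0 or 1, so a bit value that
    # has not yet been seen still gets a nonzero probability.
    return (n0 + 1) / (n0 + n1 + 2)
```

With no observations the estimate is 1/2; as counts grow it converges to the empirical frequency n0/(n0+n1).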

### .1839 ppmd

See ppmonstr (above).

### .1849 tc

TC 5.2 dev 2 is an experimental command line file compressor, currently under development by Ilia Muraviev. It takes no options.

5.0 Dev 1 uses LZP. Dev 4 includes an improved hash table to conserve memory and a faster range coder compared to dev. 2, but compression is the same. Starting with 5.0 dev 6, LZP literals and match lengths are encoded using PPMC (PPM with fixed escape probabilities to lower orders). Dev 7 and 9 use order 3-1-0 PPMC.

tc 5.0 dev 11 (July 24, 2006) is the last of this series.

tc 5.1 dev 1 uses ROLZ (reduced offset LZ) with PPM order 1-0 for literals, with the offset set reduced by an order 2 context, and a 16 MB dictionary.

tc 5.1 dev 2 has improved parsing and is archive compatible with dev 1.

tc 5.1 dev 5 uses ROLZ plus context mixing (instead of PPM) for order 2 literals.

tc 5.1 dev 7 uses improved parsing (flexible parsing) and adds SSE.

tc 5.1 dev 7x uses a larger dictionary.

tc 5.2 dev 2 uses FPW (fast PAQ weighting).

### .1857 bwtsdc

bwtsdc v1 (discussion) is a free, experimental file compressor with source code by David A. Scott and Yuta Mori. It takes no options. Memory usage is 5 times the file size. The program is bijective, meaning that any file is valid input to the decompresser, and no two inputs will decompress to the same file. In other words, there is an exact 1 to 1 mapping between uncompressed files and compressed files. The compressor uses multiple stages, each of which is bijective. The first stage is a BWT variant called BWTS (BWT Scottified) developed by Scott. In this variation, it is not necessary to store the starting point for the inverse BWT. This is achieved by dividing the input into a lexicographically nonincreasing sequence of Lyndon words. A Lyndon word is a string that lexicographically precedes all of its nontrivial rotations. The block is then sorted using contexts that wrap within Lyndon words rather than the whole block. The BWTS is followed by distance coding (DC, developed in part by Mori), and Fibonacci coding, where each stage is also bijective. The compressor is implemented as 3 programs called from a .bat file.
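
The Lyndon factorization that BWTS relies on can be computed in linear time with Duval's algorithm; a minimal sketch (illustrative, not taken from bwtsdc's source):

```python
def lyndon_factors(s):
    # Duval's algorithm: factor s into a lexicographically nonincreasing
    # sequence of Lyndon words (each word smaller than all of its
    # nontrivial rotations). Runs in O(n) time.
    out = []
    i, n = 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:
            out.append(s[i:i + j - k])
            i += j - k
    return out
```

Concatenating the factors always reproduces the input, which is one ingredient of BWTS's bijectivity: no extra starting-point index needs to be transmitted.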

### .1859 fbc

fbc v1.0 is a free, experimental file compressor for Windows by David Catt, Feb. 29, 2012. It is described as using BWT (divsufsort) with a fast adapting (rate 1/16) 14 bit context model consisting of an 11 bit history and 3 bits to encode the position in the current byte. The input is preprocessed using Eugene Shelwien’s alphabet reordering preprocessor, BWT_reorder_v2. The argument 250000000 selects the block size in bytes. Memory usage is 5 x block size.

fbc v1.1, Mar. 2, 2012, fixes a memory allocation bug that caused decompression to fail for a block size of 333 MB. It automatically selects between 32 and 64 bit versions of divsufsort. Results are shown for the 64 bit version.

### .1862 ppmvc

ppmvc v1.1 is a free, command line file compressor by Przemysław Skibiński, May 12, 2006, based on PPMd var. J by Dmitry Shkarin. It uses variable length contexts as described in the paper, P. Skibinski and Sz. Grabowski. Variable-length contexts for PPM. Proceedings of the IEEE Data Compression Conference (DCC04), pp. 409-418, 2004. Long matching strings are encoded as in high order ROLZ: as an index to a matching context plus a length.

The command line options are the same as in PPMd: -o8 selects order 8, -m256 selects 256 MB memory, -r1 partially rebuilds the model when memory is exhausted. I tuned the compressor to -o8 on enwik8. There are additional options related to VC compression (which must be specified during decompression), but I used the defaults since there is no guidance on how to set them in the program documentation. The paper suggests that the best values (and defaults) are to encode matches of context length order+1 with a minimum match length of 2*order, searching the last 8 to 16 contexts for the longest match. The effect is usually greatest for low order PPM.

### .1869 chile

chile 0.3d1 is a free, command line file compressor as C source code by Alexandru Mosoi, May 29, 2006. It uses BWT. The option -b40000 selects a block size of 40000 KB, which requires about 785 MB of memory for compression and 240 MB for decompression. Version 0.3d1 is identical to version 0.3d except that the maximum block size was increased from 2048 KB to 99999 KB. For this test the program was compiled for Windows using MinGW 3.4.2 as specified in the Makefile.

chile 0.4 (Jan. 27, 2007) introduces a faster algorithm for building suffix arrays that uses less memory (7N). The option -b=244141 selects the block size in KB (to split enwik9 into 4 equal parts). It was compiled using MinGW gcc 3.4.5 with options -W -Wall -fomit-frame-pointer -g -O3 and tested in WinXP Home with 2 GB memory.

### .1901 bwtdisk

bwtdisk 0.9.0 is a free, experimental, open source (GPL v3) file compressor by Giovanni Manzini, July 7, 2010. It uses BWT. Its purpose is to test the techniques for low memory BWT described in the paper Lightweight Data Indexing and Compression in External Memory by Ferragina, Gagie, and Manzini, Proc. LATIN 2010. The forward BWT computes the suffix array in small segments, then makes multiple passes over the BWT output to merge the result. The external disk usage can be further reduced by compressing the input first with zlib or lzma and decompressing the input on each pass. The program is single threaded.

The program is supplied as source code only. It was compiled with g++ 4.6.3 using the supplied Makefile in Ubuntu on a Core i7 M620, 4 GB. There are two programs, the compressor “bwte” and decompresser “unbwti”. The compressor computes a low memory BWT using at most the memory specified by the -m option (in MB). The -b option specifies how the BWT transformed input is to be compressed. -b 1 specifies zlib, -b 4 specifies lzma, and -b 2 specifies run length coding and range coding. There is no block size parameter. The input is compressed in a single block. Decompression requires 4 times the file size in memory, which used all of the test machine for enwik9 so was tested for enwik8 only. Compression of enwik9 with -b 4 failed (cannot create pipe).

### .1910 CTXf

CTXf 0.75 pre-beta 1 is a free, closed source command line archiver by Nikita Lesnikov, Sept. 20, 2003. It uses PPM with preprocessing for text, exe and multimedia files. The option -me selects extreme (best) compression. It uses about 78 MB memory in Windows task manager.

### .1912 M03exp

m03exp-2005-01-27 is an experimental, closed source GUI file compressor by mij4x, Jan. 27, 2005. It uses BWT, implementing the M03 algorithm by Michael A. Maniscalco, with a maximum block size of 8 MB. (Note on the GUI: to compress or decompress, drop a file on the program window. Right click to select options). m03exp-2005-02-15 (Feb. 15, 2005) supports blocks up to 32 MB but is otherwise identical.

### .1930 Stuffit

Stuffit 9.0 is a commercial GUI archiver by Allume Systems, now Smith Micro. This was the current version as of May, 2006. Note: their free 30 day trial required registration and a credit card number which was charged if you forgot to cancel. The options tested were:

Stuffit X: Method 4 - Best Text Compression, Level 16, Memory 25 (36.1MB), Optimizers On, Block mode On, Redundancy Off, Text Encoding None, Encrypt archive disabled, Segment archive disabled.

Stuffit X: Method 6 - Auto-picks the best method, Level 25, Memory 25 (68.6MB), Optimizers On, Block mode On, Redundancy Off, Text Encoding None, Encrypt archive disabled, Segment archive disabled.

Stuffit 12.0.0.17 (compression technology version 12.0.0.21) was released Jan. 31, 2008. It includes lossless compression of JPEG and MP3 files and lossy recompression of zip archives, GIF, TIFF, PNG, and PDF files. It supports a native SITX format as well as zip, gzip, rar, bzip2, compress, tar, cab, and some more obscure formats. It is multithreaded for multicore support, although I tested it on a single core processor. I only tested the native general-purpose formats. For these tests, I used the command line programs console_stuff.exe and console_unstuff.exe to reduce the executable size and measure run time more accurately. The options are -m=1 (LZ77-Huffman), -m=2 (LZ77-arithmetic), -m=4 (PPM), -m=8 (BWT), -l (level 2-16, higher is slower but better), -x (memory extents, max 30, higher uses more memory). The best compression for text is -m=4 (PPM) with maximum memory -x=30. (In the GUI but not the command line, above 29 causes an out of memory error with 2 GB RAM). The -l option apparently has no effect on PPM. The decompresser size is based on console_unstuff.exe and the minimum set of 5 .dll files needed to run it (4 common plus Plugins/sitx.dll). The full GUI installer (without Office plugins) zips to 17,051,856 bytes. The tested version was a complimentary copy provided by the company.

Stuffit 2009 13.0.0.19 (compression technology 13.0.0.24) was released Dec. 19, 2008. I tested as with Stuffit 12, however the technique of finding the minimal set of .dll files that I used in Stuffit 12 did not work (internal error) so I had to include the zipped distribution size (StuffIt2009.exe), which includes many other compression formats and a GUI. The tested version was a complimentary copy provided by the company.

### .1933 plzma

plzma_v3b (discussion) is a free, closed source, experimental file compressor for Windows (32 and 64 bit versions) by Eugene Shelwien, Oct. 8, 2011. It uses LZMA (7zip equivalent) with a modified entropy encoder. plzma_v3c was released Mar. 19, 2012. Options are as follows:

• e or c0 - compress using LZMA back end.
• c or c1 - compress using back end optimized for maximumcompression.com SFC.
• c2 - compress using back end optimized for enwik8.
• 1000000000 - LZ window size or log size (default 25 or 33554432). Uses 11.5x memory.
• 999999999 - matchFinder iteration limit, default 9999, max 2^32 - 1.
• 273 - match length threshold for greedy parsing (default).
• 8 - prefix bits in literal context (default 2).
• 0 - position bits in literal context (default).
• 0 - position bits in id/len contexts (default).
• 6000 - kNumOpts (default 4096).
• 1 - matchStep (default 128).
• 1 - alignStep (default 16).
• 1 - lenStep (default 272).
• 7 - f_lenloop (default 0).

### .1933 crook

crook v0.1 (discussion) is a free, open source file compressor by Jüri Valdmann, Mar. 5, 2012. It uses bit-level PPM. Because it predicts bits rather than bytes, there is no escape modeling. This is like DMC in that each bit-level context is mapped to a next-bit prediction and a count (equivalent to two counts of zeros and ones). But unlike DMC, it avoids the problem of duplicate states representing the same contexts, which would dilute the statistics and waste memory.

Bits are modeled MSB first. Contexts are stored in a binary tree where the two child nodes represent the current context extended by one bit on the right. Each node also has a pointer to a suffix node, representing the current context shortened by one byte on the left. Contexts always begin on byte boundaries. Each context maps to a 22 bit prediction for the next bit (initialized to 0.5) and a count. When a bit is coded, the current node and all of its suffix nodes are updated by adjusting the prediction to reduce the error by 1/count and the count is incremented by 1 up to a limit of 32. The initial tree is bytewise order 0 (255 contexts) with initial counts of 12. Subsequent nodes are added with a count of 1.5 and a prediction inherited from its suffix node whenever there is no node to represent the 1 bit extension, and the new node becomes the current context.

The option -m1600 limits memory usage to 1600 MiB. When memory is exhausted, no new nodes are added to the tree, but predictions and counts of existing nodes continue to be updated. The current context then becomes the suffix node if needed. The option -O8 limits the tree depth to bytewise order 8 (found to be optimal for both enwik8 and enwik9). When the current node reaches this depth, no child nodes are added, but existing nodes and their suffixes continue to be updated, just as if the memory limit were reached. Increasing the model order improves compression but also causes the tree to grow faster, which sometimes makes compression worse if the memory limit is reached sooner. The defaults are -m128 -O4.
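The per-context update rule described above can be written compactly. This is an illustrative sketch only: the Node class and the two-node chain are hypothetical, and crook itself stores the prediction as 22-bit fixed point inside a binary tree with suffix pointers, which is omitted here.

```python
class Node:
    """One bit-level context: a next-bit prediction and a count."""
    def __init__(self, p=0.5, count=1.5):
        self.p = p          # P(next bit = 1), initialized to 0.5
        self.count = count  # new nodes start at 1.5, initial order-0 nodes at 12

    def update(self, bit: int):
        # Reduce the prediction error by a factor of 1/count, then raise
        # the count up to a limit of 32: fast adaptation at first,
        # increasingly stable statistics later.
        self.p += (bit - self.p) / self.count
        self.count = min(self.count + 1, 32)

# On each coded bit, the current context and all of its suffix
# (byte-shorter) contexts receive the same update:
order1, order0 = Node(), Node(count=12)
for bit in [1, 1, 1, 0, 1]:
    order1.update(bit)
    order0.update(bit)
```

Note how the higher initial count of the order-0 root (12) makes it adapt more slowly than a freshly added node with count 1.5, whose first update moves it most of the way toward the observed bit.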

Compression and decompression require the same time and memory. Also, the same compression options must be given again during decompression. (I added 10 bytes to the decompresser size to account for this). The compressed file is arithmetic coded with the original file size saved in the first 4 bytes. File sizes are limited to less than 2 GiB. The program is distributed as source code only. To test, I compiled with g++ 4.6.1 in 32 bit Windows using the options recommended in the source comments.

### .1936 ppmx

ppmx 0.01 is a free, experimental, closed source file compressor by Ilia Muraviev, released Nov. 25, 2008. It uses PPM with no filters. It takes no options.

ppmx 0.02 was released Dec. 2, 2008. It uses order 9 PPM with hashed context tables, as discussed here. There is also a core 2 duo version which is faster, although it runs on only one core, and has a slightly larger executable. Note that the table below is misleading because on enwik8 the regular version compressed at 976 ns/byte (12% slower) and decompressed at 992 ns/byte (4.5% slower) than the core 2 duo version.

ppmx 0.03 (discussed here) was released Dec. 22, 2008.

ppmx 0.04 (discussed here) was released Jan. 5, 2009. It uses order 12-5-3-2-1-0 PPM and 280 MB.

ppmx 0.05 (discussion), Jan 19, 2010, adds SEE (secondary escape estimation), more memory, and some optimizations.

ppmx 0.06, released July 27, 2010, is designed for improved speed and less memory usage rather than compression ratio. It removes SEE and uses only a fixed order 4-2-1-0 model with hash tables. It has a P4 version for Pentium-4 and higher that is about 12% faster. This is the version tested. It has a larger executable (54,496 vs. 45,216).

ppmx 0.07, Feb. 20, 2011, uses order 5-3-2-1-0-(-1) PPM with hash tables. Memory usage is increased to 302 MB.

ppmx v0.08 (discussion), Jan. 1, 2012, uses order 6-4-2-1-0-(-1) PPM with hash tables and SEE improvements.

ppmx 0.09 (discussion) was released Mar. 24, 2014.

### .1947 lzturbo

lzturbo 0.01 is a free, experimental, closed source file compressor by Hamid Bouzidi, Aug. 15, 2007. There is some controversy over the origin of the source code. Discussion. Discussion.

It uses LZ77 with arithmetic coding. The option -49 selects method 4 (1, 2, 4) and level 9 (1..9) for best compression. Other combinations were not tested. There is also a Linux version which was not tested. Memory usage fluctuates but peaks at 654 MB for compression and 90 MB for decompression. The Windows version produces read-only output files that must be set with “attrib -r” before they can be modified or deleted.

lzturbo 0.1 (Oct. 5, 2007) is threaded for parallel execution on multicore machines. The maximum compression level is -59, where it uses 248 MB for compression and a peak of 72 MB for decompression. Other modes compress much faster. The read-only bug was fixed.

lzturbo 0.9 was released Feb. 25, 2008. Decompression memory peaks at 79 MB.

lzturbo 0.94 was released Apr. 11, 2009. The option -59 selects method 5, compression level 9 for maximum compression. -b100 selects a block size of 100 MB for independent compression in separate threads. The default is 32 MB. -p0 forces the compressor to run on one core. By default the program runs on all cores, but this causes the program to run out of memory with -59 because each thread uses 1450 MB. Decompression ran on 2 cores with a process time of 20 seconds per core and wall time of 28 seconds using about 300 MB memory. Faster modes tested below are run on 2 cores with average process time per core shown.

lzturbo 1.1, Apr. 29, 2013, runs only on 64 bit Windows and 64 bit Linux. The Linux version was tested under Ubuntu (note 48) using the non-static (smaller) executable. The 2 digit options -11…-49 select the compression method and level. The first digit can be 1..4 with higher numbers compressing better. The second digit can be 0, 1, 2, or 9 with higher numbers compressing slower without affecting decompression speed. The program gave an error during compression with -40, -41, -42.

Option -b1000 selects a block size of 1000 MB. The default is -b24. Separate blocks can be compressed and decompressed in parallel. The test machine automatically selects 4 threads. Larger blocks improve compression but use more memory and allow fewer threads to be allocated. -b1000 causes it to use 1 thread since there is a single block. At level 9 (-19, -29, -39, -49), it is not possible to compress enwik9 with -b1000 on the 4 GB test machine because it will use over 6 GB memory and start disk thrashing. -p1 selects 1 thread. -p0 disables multi-threading.

lzturbo 1.2 was released Aug. 7, 2014 with updates on Aug. 10 and 11, 2014, with compression ratio and decompression speed improvements. Methods -30, -31, -32, -39 use ANS (asymmetric numeral systems) encoding instead of arithmetic coding, with SSE/AVX code selected at run time. The updates fixed an “illegal instruction” error during compression in these modes on the test machine and some other processors. The other modes were tested on the Aug. 7 release. Options are like v1.1.

### .1956 enc

enc 0.15 is an experimental, closed source command line archiver by Serge Osnach, Feb. 14, 2003. It uses PPM and CM (in PaQ mode). It tries up to 5 different compression methods (depending on options) and chooses the best one. The methods are (“a” means “add to archive”):

• ae = PPMEnch, default is order 7, -o4 to -o64 overrides, -d selects dictionary size in MB up to -d256 (uses 344 MB). Choosing higher than -d127 causes the decompresser to either output garbage or crash.
• ai = PPMd var. I, -o selects PPM order, above -o18 crashes compressor, -d has no effect, uses 18 MB.
• aq = PaQ, -o and -d have no effect, uses 50 MB.
• ab = PPMBin, default order 15, -o overrides, -d selects dictionary size and crashes decompresser as with method ae.
• ao = PPMEnch with fixed settings. -o and -d have no effect but using -d crashes decompresser. Uses 31 MB.
• ag = try all 5 methods and select the best compression. Subsets (e.g. “aeqo” = ae, aq, ao) are allowed.

Methods ae and ab with options -o8 -d256 were found to give the best compression on enwik7 (first 10^7 bytes). These methods discard the model when the memory limit is reached, and this was observed to happen (in task manager), so these options should hold for larger files. However with -d127 (necessary to decompress), method aq gives the best compression.

### .1966 comprolz

comprolz 0.1.0 (discussion) is a free, open source, experimental file compressor by Zhang Li, Oct. 7, 2012. It uses ROLZ. The option -b256 selects the maximum block size. During compression it uses 60-65% of two cores. Decompression uses one core.

Only source code was provided. It was compiled for 32 bit Windows Vista using MinGW 4.6.1 using “gcc -O3 *.c”.

comprolz 0.2.0 was released Oct. 16, 2012. It includes the -f option to select flexible parsing. It is slower but compresses better.

comprolz 0.10.0 (discussion) was released Nov. 25, 2012. It includes a dictionary derived from the first 10 MB of enwik8. To test, it was compiled as suggested in the documents using gcc 4.7.0 with options “-O3 -fomit-frame-pointer -mno-ms-bitfields”. Source code is shared with comprox 0.10.0. The executable, packed with UPX, is smaller.

comprolz 0.11.0 was released Dec. 17, 2012. The program builds a dictionary from the input instead of using a static dictionary. 32 bit executables are included for Windows and Linux. The Windows version was tested.

comprolz 0.11.0-bugfix1, Dec. 18, 2012, fixes a bug that caused poor compression.

### .1971 sbc

sbc 0.970r2 is a free, closed source command line archiver and file encryptor by Sami, June 27, 2005. Compression options suggest it uses BWT. The -m3 option selects maximum compression, requiring 32 MB memory (-m1 is minimum). The -b63 option selects maximum block size (32 MB, requiring 192 MB additional memory). -ad disables adaptive block size reduction for homogeneous data. SBC runs faster with smaller block sizes and minimum compression as shown:

### .1973 xz

xz 5.0.1 is a free, open source file compressor, Jan. 29, 2011. xz specifies a container format written by Lasse Collin. It uses the public domain LZMA2 compressed format from 7zip by Igor Pavlov. There are versions for most operating systems including Windows and Linux. The Windows version was tested. The option -9 specifies maximum compression and memory. The default is -6. The option -e (extreme) specifies better compression at a cost in compression (but not decompression) time.

Program size is based on xz.exe. There is a separate decompressor (xzdec.exe) which is smaller and decompresses to standard output, but the Windows version does not work because it outputs in text mode. Additional results are shown below for enwik8 for compression and decompression time (ns/byte) and compression and decompression memory (in MB).

xz 5.2.1 was released Feb. 26, 2015.

### .1984 WinRAR

WinRAR 3.60 beta 3 is a commercial (free trial) Windows GUI and command line archiver by Eugene Roshal, May 8, 2006. It produces rar and zip archives and decompresses many other formats. It also encrypts and performs other functions. The best compression mode uses PPM (actually ppmd var. I, an earlier version of ppmd J) with optimizations for text and other formats (exe, wav, bmp). The -mc7:128t+ option says to use PPM order 7, 128 MB memory (maximum) and force text preprocessing. The -sfxWinCon.sfx option says to produce a self extracting console executable (adding 79,360 bytes).

The model order was tuned on enwik8. Additional results are shown for order 10, for -m5 (maximum compression), and for normal compression as a .exe and .rar file. The decompresser in the last case is zipped unrar.exe.

WinRAR 4.20 was released June 9, 2012. It costs \$29 with a 40 day free trial as of Feb. 1, 2013. Options are the same. -m1 through -m5 select compression level. The default is -m3. The algorithm is LZ77 with a 4 MB window. -mc7:128t+ selects PPM, order 7, with maximum 128 MB memory. Time and memory to decompress with PPM is about the same as compression.

WinRAR 5.00b2 was released Apr. 29, 2013. It includes a larger dictionary, up to 1 GB for the 64 bit version and 256 MB for the 32 bit version. Option -ma5 selects the new archive format, which is not compatible with v4.20 or earlier. The default is the older format. In the newer format, option -mc is silently ignored. Option -m3 is the default compression level.

### .1986 quark

quark v0.95r beta is a free, closed source command line file compressor by Frederic Bautista, Mar. 10, 2006. It uses LZ. It is characterized by high compression and fast decompression. The -m1 option selects relative mode compression, which is normally best, but slowest. The -d25 option selects a dictionary size of 2^25 which is the largest that will run without thrashing with 1 GB RAM. The -l8 option selects the search depth. Higher values normally improve compression (up to -l13, default -l4), but -l8 was the highest practical value for reasonable compression speed (7.5 hours). Also, larger values were found to hurt compression on enwik5. Compression time increases approximately exponentially with the -l value. The compression speed with -l13 is 6,100,000 ns/byte.

### .1994 lzip

plzip is a free, open source file compressor by Antonio Diaz Diaz, Feb. 16, 2010. It is “parallel lzip”, compatible with lzip, but multi-threaded for parallel execution. It uses LZMA (LZ77 with arithmetic coding). The -9 option selects maximum compression. It has a command line interface similar to gzip. When it compresses, it removes the original file and adds a .lz extension.

lzip and plzip are written for Linux. A Windows port by Christian Schnaader on May 2, 2010 was tested. On my test computer (2 core T3200, 2 GHz), compression showed 180% CPU and decompression showed 117%.

lzip 1.14-rc3 was released Jan. 15, 2013.

plzip 1.5 was released June 2, 2016. I tested the 64 bit Windows compile in Linux.

### .1995 comprox

comprox_sa 20110927 (discussion) is a free, experimental, open source file compressor by Zhang Li, Sept. 27, 2011. It uses LZSS (in 4 MB blocks) followed by arithmetic coding. The program takes no arguments. It uses 60 MB memory for compression and 6 MB for decompression. It runs in both Windows and Linux. Only the Windows version was tested.

Version 20110928 was released Sept. 28, 2011. Compression runs in 2 threads. Both the Windows and Linux versions were tested (on different computers).

Version 20110929 was released Sept. 29, 2011. Decompression also runs in 2 threads. Compression is slightly improved.

comprox version 0.1.1, Oct. 10, 2011, replaces comprox_sa. It is a rewrite using LZ77 (instead of LZSS) and arithmetic coding. It takes a compression level 0 (fastest) to 9 (best) with a default of 5. All levels use the same memory, 218 MB for compression and 44 MB for decompression. The Linux version reports the same resident memory as Windows but higher virtual memory: 236 MB to compress and 284 MB to decompress. Both compression and decompression run in 2 threads. Reported times are real times.

comprox 0.6.0 was released Aug. 24, 2012. It uses static 4K dictionary encoding followed by LZ77 and arithmetic coding. It was released as open source (3 clause BSD) C code only. For testing, it was compiled using g++ 4.6.1 as “gcc -O3 *.c” under 32 bit Windows. The option e200 means to use a 200 MiB block size. The default is e16. Larger blocks improve compression but use more memory. The program crashed with e250 or larger.

comprox 0.7.0 (discussion) was released Sept. 10, 2012. It includes multi-threaded compression and other improvements. It includes a static English dictionary with about 3000 common words. It was tested in 64 bit Linux compiled with “gcc -O3 *.c -lpthread” and in 32 bit Windows compiled with “gcc -O3 *.c -lpthread -Wl,--stack,8000000”.

comprox v0.8.0 was released Sept. 26, 2012 with better compression. The Linux version was compiled with “gcc -O3 -march=native *.c -lpthread”. The Windows version was compiled as before.

comprox 0.8.0-bugfix1, Sept. 27, 2012, fixed a bug that caused compression to crash on some input files. It was compiled with MinGW 4.6.1 with “gcc -O3 -msse2 -s -Wl,--stack,8000000 *.c -lpthread”.

comprox 0.9.0 was released Oct. 16, 2012. The -b option sets the block size in MB. Default is -b16. -m sets number of matches to check. Default is -m40. -f selects flexible parsing. To test, the program was compiled “gcc -O3 -march=native -s *.c” as above.

comprox 0.10.0 (discussion) was released Nov. 25, 2012. It includes a dictionary derived from the first 10 MB of enwik8. To test, it was compiled as suggested in the documents using gcc 4.7.0 with options “-O3 -fomit-frame-pointer -mno-ms-bitfields”. Source code is shared with comprolz 0.10.0. The executable, packed with UPX, is smaller.

comprox 0.11.0 was released Dec. 17, 2012. It builds a dictionary from the input rather than using a static dictionary. Executables are included for 32 bit Windows and Linux. These compressed smaller than the source code. The compressor crashed with -b250 (250 MB block size) on enwik9, but -b200 worked. -m100 selects the match search limit (default -m40). -f selects flexible parsing. Using large -m makes compression time superlinear, e.g. increasing from 75 s on enwik8 to 2115 s on enwik9.

comprox 0.11.0-bugfix1, Dec. 18, 2012, fixes a bug that caused poor compression.

### .2018 bssc

bssc 0.95a is a free command line file compressor by Sergeo Sizikov, 2005. It uses BWT. The -m16383 option selects the maximum block size of 16383 KB (uses 140 MB memory).

### .2024 lzham

lzham alpha 2 is a free, open source (MIT license) file compressor and library by Richard Geldreich Jr., Aug. 21, 2010. LZHAM is short for LZMA-Huffman-Arithmetic-Markov. It is based on LZMA (7zip) but instead of using arithmetic coding throughout, it uses arithmetic coding only for binary decisions and Huffman or Polar codes for literal and match codes. A Polar code is similar to a Huffman code but is simpler to calculate at a cost of 0.1% in compression. Polar codes are calculated as follows:

1. Symbols are sorted from highest to lowest frequency.
2. The total frequency is rounded up to a power of 2.
3. Individual frequencies are rounded down to a power of 2.
4. Individual frequencies are doubled in descending order until the sum is equal.
5. Step 4 is repeated as needed.
6. At this point every symbol has a probability (frequency divided by total) that is a power of 1/2, and codes of the corresponding lengths are assigned.

For example, if the symbols and their frequencies are A=3, B=2, C=1, then the sum (6) is rounded up to 8 and the individual frequencies are rounded down to A=2, B=2, C=1, which sums to 5. We then double A=4, which sums to 7. We cannot double B=4 because the sum would exceed 8, so we continue to C. At this point we have A=4, B=2, C=2, which sums to 8, and we may assign codes of appropriate lengths such as A=0, B=10, C=11.
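The six steps can be sketched directly in code. This is an illustrative reimplementation of the length assignment following the description above, not lzham's actual source:

```python
def polar_code_lengths(freqs):
    """Assign Polar code lengths per the six steps above."""
    # 1. Sort symbols from highest to lowest frequency.
    syms = sorted(freqs, key=lambda s: -freqs[s])
    # 2. Round the total frequency up to a power of 2.
    total = 1 << (sum(freqs.values()) - 1).bit_length()
    # 3. Round each individual frequency down to a power of 2.
    f = {s: 1 << (freqs[s].bit_length() - 1) for s in syms}
    # 4-5. Double frequencies in descending order, repeating passes,
    # skipping any doubling that would push the sum past the total.
    while sum(f.values()) < total:
        for s in syms:
            if sum(f.values()) + f[s] <= total:
                f[s] *= 2
    # 6. Every frequency is now total / 2^k; the code length is k.
    return {s: (total // f[s]).bit_length() - 1 for s in syms}

# The worked example: A=3, B=2, C=1 gives lengths 1, 2, 2,
# matching codes such as A=0, B=10, C=11.
print(polar_code_lengths({"A": 3, "B": 2, "C": 1}))  # {'A': 1, 'B': 2, 'C': 2}
```

Because the adjusted frequencies are powers of 2 summing to a power of 2, the resulting lengths always satisfy the Kraft inequality with equality, so a valid prefix code of those lengths exists.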

For this test, lzhamtest_x86 was used. There is a _x64 version for 64 bit machines which is faster. The library supports different speeds and dictionary sizes, but the test program does not have any options to select them, so none were used. Decompression uses 67 MB memory vs. 609 MB for compression. Compression uses both cores on the test machine but decompression uses only one.

Version alpha 3, Aug. 30, 2010, supports all of the options supported by the library. Option -d26 selects a 64M dictionary, the largest supported by the x86 version. (The x64 version supports up to -d29 = 512M). -m4 selects “uber” compression mode. There are 5 compression levels from -m0 through -m4. The highest two levels use Huffman codes rather than Polar codes. -t2 says to use 2 helper threads (to match the number of cores on the test machine). The default is to use 1 less than the number of cores, up to 16 threads. Decompression is not multi-threaded.

The x64 version was tested by the author. I guessed at memory usage. Each increment of the -d option approximately doubles memory usage.

lzhamtest v1.0 (discussion) is the test code for the source code release on Jan. 25, 2015. To test on note 48, it was compiled using “cmake . ; make” in Ubuntu. Option -d29 selects a 512 MB dictionary. -d26 selects 64 MB. Default is -d28 (256 MB). Option -x selects extreme parsing.

### .2024 flashzip

flashzip 0.1 is a free, closed source file compressor by Nania Francesco Antonio, Jan. 10, 2008. It uses LZP and arithmetic coding.

flashzip 0.2 was released Jan. 11, 2008. It is compatible with version 0.1 but faster. Note: in both versions, CPU utilization during compression is about 28% to 35%. Times shown are process times.

flashzip 0.3 was released Feb. 4, 2008. It uses ROLZ plus arithmetic coding. It takes an option x for better compression (slower) and 1 through 5, where 5 is the slowest (best compression).

flashzip 0.9 was released June 28, 2008. Option -m2 selects method 2 (default is -m1). -b1 through -b5 select buffer size, which affects memory usage. Default is -b3. -s1 through -s7 selects match length and speed. Default is -s1 (fastest, worst compression).

flashzip 0.91 was released Aug. 17, 2008. Options are like version 0.9. Memory usage was increased to 198 MB for compression and 138 MB for decompression using settings for best compression. Minimum requirement is 10 MB and 6 MB.

flashzip 0.93a was released Mar. 9, 2009.

flashzip 0.94 was released Mar. 25, 2009.

flashzip 0.99 was released July 23, 2009.

flashzip 0.99b4 (Aug. 25, 2009) is an archiver rather than a compressor. The -s option was renamed to -c and the -b option was increased to -b8 to allow more memory usage. For enwik8, memory usage for both -m1 and -m2 is 182 MB for compression and 162 MB for decompression. For enwik9, memory usage for -m2 is 609 MB for compression and 592 MB for decompression.

flashzip 0.99b8 (Feb. 28, 2010) has 4 compression levels from -m0 (fastest) to -m3 (best). The buffer size option was increased to -b9 (1 GB). Memory usage depends on the input size. For -m0 -c7 -b7 enwik8, compression takes 214 MB and decompression takes 195 MB. For -m1 through -m3 -c7 -b8, enwik8 compression takes 231 MB and decompression takes 195 MB. For -m3 -c7 -b8, enwik9 compression takes 658 MB and decompression takes 625 MB. Changing -b8 to -b9 has no effect on size, speed, or memory usage for enwik8, but for enwik9 it improves compression and increases memory usage to 1111 MB for compression and 1078 MB for decompression. The -s1 option enables the -b9 option. Otherwise -b9 will cause a “no memory” error.

flashzip 0.99c1 (June 1, 2011) improves compression and speed. The option ranges are -m0…-m3, -c1…-c7 and -b1…-b7. Only the maximum compression options were tested.

flashzip 0.99c3 (Oct. 10, 2011) is multi-threaded for compression in modes -m1, -m2, -m3. Decompression runs in a single thread. The archive is compatible with the previous version. In the tested mode (maximum compression), memory usage depends on the file size and climbs steadily during compression or decompression. It is the same for either, and same as the previous single threaded version.

flashzip 0.99d1 was released Oct. 31, 2011. It has only two options, -m0…-m9 (default -m4) for compression method (fastest…best) and -b1…-b7 (default -b1) for buffer size. Memory usage ranges from 30 MB at -b1 to 1100 MB at -b7.

flashzip 1.0.0 was released Oct. 3, 2012. Options -m1 to -m7 select the compression level; -mx7 compresses best. Higher levels compress slower and use more memory but have little effect on decompression speed, which is generally faster. Decompression uses the same memory as compression, up to 1.1 GB depending on the file size. Options -b1 to -b7 select buffer size. Larger values use more memory but don’t affect speed. The default is -b4. The program can use up to 8 threads and auto-detects the number of available cores. In the high compression modes tested, only 1 of 2 available cores was used. -e creates a self extracting archive. It extracts to the saved name using both cores.

flashzip 1.1.2 was released Dec. 12, 2012. It includes a GUI that calls the command line version. The command line version was tested. The compression options were changed to -m0..-m3 and -mx0..-mx3, with -mx3 selecting maximum compression. Options -k0..-k7 select the ROLZ dictionary size, with -k7 (256 MB) giving the best compression but using the most memory. -b1024 selects a buffer size of 1024 MB, again trading memory for compression. There is a -t option for multi-threading which defaults to -t1 to select a single thread. Using more threads makes compression worse. The -e option creates a self extracting archive by appending the compressed file to a copy of flashzip.exe, and therefore does not compress any smaller when the decompresser is included.

### .2081 uharc

uharc 0.6b is a free (for noncommercial use) closed source command line archiver by Uwe Herklotz, Oct. 1, 2005. In maximum compression mode (-mx) it uses PPM. In modes -m1 (fastest) to -m3 (best) it uses ALZ: LZ77 with arithmetic coding. -mz uses LZP. -md32768 selects maximum dictionary size (uses 50 MB memory, default is -md4096). Additional results for enwik8:

### .2040 csarc

csc2 is a free, experimental, closed source file compressor by Fu Siyuan, Apr. 18, 2009. It uses LZP with order 1 modeling of literals and range coding over a 270-symbol alphabet. The program takes no options. It recognizes whether the input file is compressed, and if so, decompresses it.

csc3 v.2009.08.12 is a free file compressor with source code in C by Fu Siyuan, Aug. 11, 2009. It uses LZ77. The option -m3 selects best and slowest compression (range -m1 to -m3, default -m2). -d7 selects the maximum dictionary size (range -d1 to -d7, default -d4). -fo turns off EXE and delta filtering (default unless detected by file name extension). The decompresser size is based on csc3.exe, which is smaller than csc3compile2.exe, but does not work on some machines. It is smaller than the zipped source code (17,247 bytes). Timing is similar for both versions and a version compiled with gcc 4.4 with -O2 -s -march=pentium4 -fomit-frame-pointer.

csc31 was released Sept. 23, 2009 without source code. Discussion.

csc32 a2 (discussion), May 9, 2010, is a rewrite of csc31. The option -m3 selects maximum compression. -d9 selects maximum dictionary size. Memory usage is 528 MB for compression and 330 MB for decompression.

csc32 final, Mar. 1, 2011, has 3 compression settings from -m1 (fastest) to -m3 (best) and dictionary sizes up to -d512 (512 MB) which get the best compression but use the most memory. Compression requires memory in addition to the dictionary, but decompression does not. Source code is now available.

csarc 3.3 (discussion) is a free, open source (public domain) archiver with an LZMA-like algorithm with dedupe and dictionary preprocessing of text. It was released Mar. 21, 2015. Options are compression level -m1 to -m5, dictionary size up to -d1024m (1 GB), -t1 to -t8 (number of threads, default 1), and -p1 to -p4 to split large files into 1 to 4 parts to compress in parallel (default 1). To test, I compiled from source with g++ 4.8.2.

.2044 packet

packet 0.01 is a free, experimental file compressor by Nania Francesco Antonio, May 11, 2008. It uses LZP. It takes no options.

packet 0.02, May 16, 2008, improves compression for .wav files and supports files over 2 GB.

packet 0.03b, May 20, 2008, uses LZ77, 3 MB for compression, and 1 MB for decompression. It takes an optional argument ‘x’ meaning better but slower compression, and a level 1 through 6, where 6 is slowest with best compression.

packet 0.90b, June 18, 2008, has options -m1 to -m4 (method) and -s0 to -s9 (intensity). All options use 10 MB for compression and 2 MB for decompression.

packet 0.91b, Aug. 6, 2009 has methods -m1 through -m6, where -m6 is maximum compression. Decompression requires 1.5 MB.

packet 1.0 (discussion) was released Aug. 4, 2013. Options -m0..-mx9 select compression level (default -m4). Option -t2 selects 2 threads (default -t1).

packet 1.1 (discussion) was released Dec. 7, 2013 for 64 bit Windows. It was tested in Ubuntu under wine. Option -m9 (or -mx) selects maximum compression. Default is -m4. -b512 selects the maximum buffer size of 512 MB. Default is -b64. -h4 selects the maximum number of buffers. Default is -h2.

packet 1.2 was released July 19, 2015.

packet 1.9 (discussion) was released Aug. 19, 2016. Option -mx selects maximum compression time. -h8 selects 2 GB hash table memory for compression (max is -h7 = 1 GB in 32 bit .exe and -h9 = 4 GB in 64 bit .exe). -b5 selects maximum buffer size 512 MB for both compression and decompression. -r (recursive) and -s (solid) have no effect for single file compression. The 64 bit version was tested under Ubuntu/Wine.

.2088 TarsaLZP

TarsaLZP Aug 8 2007 is a free, experimental file compressor with public domain source code (FASM) by Piotr Tarsa.

Older versions used order 3 LZP to code the last 16 matches at order 3, followed by order 2 PPM encoding of literals. It takes no command line options but compression/decompression settings may be specified in an initialization file. For this test, default settings were used and others were not tried.

The Jul 30 2007 version uses 2 LZP models, one with a 4 byte context and one with an 8 byte context. The program selects the one that gives a higher probability of a match. There is no initialization file.

The Aug 8 2007 version uses 341 MB memory for compression and 333 MB for decompression.

The interim Aug 10 2007 version runs at high priority. (CAUTION, this will make your computer unusable while running).

TarsaLZP 29 Jan 2012 is distributed as Java source and class files. It has a GUI interface.

TarsaLZP 18 Nov 2012 takes several options, but defaults were used for testing. It is available as source code in Python, Java, Javascript, and C. The C version was tested by compiling with MinGW gcc 4.7.0 with options “-O3 -std=c99” in 32 bit Vista.

.2090 GRZipII

GRZipII 0.2.4 is a free, open source (LGPL) command line file compressor by Grebnov Ilya, Feb. 12, 2004. It uses BWT. The -b8m option selects the maximum block size of 8 MB.

.2091 4x4

4x4 0.2a is a free, open source file compressor by Bulat Ziganshin, June 2, 2008. It is a wrapper around GRZipII, tornado, and LZMA (7zip), and a subset of the FreeARC archiver. Source code is included in the FreeARC distribution. The program allows arguments to be passed to each compressor, plus 16 preset options. Only the fastest and slowest preset option for each compressor was tested. Options 1-7 are tornado, 8-12 are LZMA, and 1t-4t are GRZipII.

.2101 rzm

rzm 0.06c (mirror) is a free file compressor by Christian Martelock, Mar. 4, 2008. It uses order-1 ROLZ as discussed here. It takes no options. Memory usage is advertised as 258 MB for compression and 130 MB for decompression. Measured values (shown) are 180 MB for compression and 104 MB for decompression.

rzm 0.07h was released Apr. 24, 2008. Advertised memory usage is unchanged.

.2104 pim

pim 2.01 is a free GUI archiver by Ilia Muraviev, based on PPMd by Dmitry Shkarin, using PPM. Version 2.01 was released June 14, 2007. It has options to model color images and .exe files. These make no difference on text and were turned off. It was timed with a watch.

pim 2.04 beta was released July 21, 2007. It has PPMd as its only option.

pim 2.10 was released July 31, 2007. Older versions are no longer supported.

pim 2.50 was released July 22, 2008. It supports 3 compression modes: store, normal, and best. Only best was tested. It compresses in PPMd, bzip2 and DCL formats and extracts BALZ, QUAD, ZIP, JAR, PK3, PK4 and QUAKE PAK archives.

.2120 CTW

CTW 0.1 is a free, command line file compressor with source code by Erik Franken and Marcel Peeters, Nov. 13, 2002. It uses CTW (context tree weighting), a type of context-mixing algorithm (with single bit prediction and arithmetic coding) combining the predictions of different order contexts. Statistics are stored in a suffix tree.

The -d6 option selects order 6 (depth of context tree). -n16M selects the maximum of 16M nodes for the tree (using 128 MB memory). -f16M selects the maximum 16 MB file buffer (for rebuilding pruned contexts). The default values of all other options were tested on enwik6 and found optimal. For -d, there is a tradeoff between compression and memory usage as with PPM compressors. -d6 was found optimal on both enwik7 and enwik8.
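
The statistics at each CTW node are typically Krichevsky-Trofimov (KT) estimators, whose sequence probabilities the tree then mixes across context orders. A minimal sketch of the KT estimator (the standard textbook form, not code from CTW 0.1):

```python
def kt_sequence_prob(bits):
    """Krichevsky-Trofimov estimate of the probability of a bit string.

    Each bit is predicted as (count + 1/2) / (total + 1), then the
    counts are updated.  CTW keeps one such estimator per tree node
    and mixes the estimates across orders by weighting each node's
    own estimate against the product of its children's estimates.
    """
    zeros = ones = 0
    prob = 1.0
    for b in bits:
        if b:
            prob *= (ones + 0.5) / (zeros + ones + 1)
            ones += 1
        else:
            prob *= (zeros + 0.5) / (zeros + ones + 1)
            zeros += 1
    return prob
```

For example, the sequence 0,0 gets probability 1/2 * 3/4 = 0.375, slightly more than the uniform 0.25, since the second 0 is predicted from the first.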

.2139 boa

boa 0.58b is a free, closed source command line archiver by Ian Sutton, Apr. 2, 1998. It uses PPM. The -m15 option selects maximum memory, 15 MB.

### .2144 yzx

yzx 0.01 (discussion) is a free, experimental command line archiver by Nania Francesco Antonio, May 3, 2010. It uses "LZKS", described as an LZ type algorithm. Option -b5 selects maximum memory. Option -m2 selects method 2 (default is -m1). -c8 selects the number of match keys (range -c1 to -c8, default -c3). Memory usage is 732 MB for compression and 137 MB for decompression.

yzx 0.02, May 7, 2010, corrects a bug in compression.

yzx 0.03 was released May 21, 2010. The range of options is -m1..m2, -c1..c5, -b1..b6. Memory usage with -m2 -c5 -b6 is 404 MB for compression and 268 MB for decompression.

yzx 0.04 was released May 27, 2010. Decompression memory remains at 268 MB.

yzx 0.11 was released Jan. 4, 2012. Options -m0..-m9 select compression method (fast..slow). Options -b1..-b8 select ring buffer size (small..large). Options -h1..-h6 select search buffer size (small..large). Default is -m2 -b2 -h4. There was not enough memory to test maximum compression (-m9 -b8 -h6) without reducing either -b or -h.

.2157 zstd

zstd is a free, open source (BSD) file compressor by Yann Collet, Jan. 25, 2015. It uses LZ77 and finite state entropy encoding. It takes no compression options. To test, it was compiled using the supplied Makefile with gcc 4.8.2 in Linux (note 48) and "make -CC=gcc" 4.8.1 in Windows (note 26).

zstd 0.4.0 was released Nov. 29, 2015. It features a high compression mode. -f means overwrite output. 20 is the compression level (only -1 to -9 are documented).

zstd 0.4.2 was released Dec. 2, 2015.

zstd 0.4.2_no_legacy (NL) was released Dec. 6, 2015. It is the same program with reduced source code size by dropping legacy support.

zstd 0.5.1 was released Feb. 17, 2016. The decompressor size is the source for zstd_little-0.5.1.tar.gz converted to a zip -9 archive.

zstd 0.6.0 was released Apr. 12, 2016. It adds a level -22 option and --ultra to allow more memory usage.

tornado 0.1 is a free, open source file compressor by Bulat Ziganshin, Apr. 16, 2007. It uses LZ77 with arithmetic coding. The -9 option selects a predefined compression profile for maximum compression. There are custom options for hash table size, hash chain length, block size, type of coder, and an option to force or prohibit cache matching. Some of these options might give better compression, but were not tested.

tornado 0.3 has options -1 through -12. Each increment approximately doubles compression time and memory usage. Decompression time is fast in all cases, but memory usage is approximately 2/3 that of compression (for the LZ77 buffer). -12 caused disk thrashing and was not tested for enwik9. There are several other options that were not tested.

tornado 0.4a was released June 1, 2008. It includes Windows and Linux versions. There is a small version (tor-small.exe) which does not include some of the advanced options. The advanced options were not tested. Option -12 caused disk thrashing (2 GB memory) when enwik9 reached 80% compression, so -11 was used instead.

tornado 0.6, Mar. 8, 2014, adds optimal parsing. It has 16 compression levels. The default is -5. For testing (note 48) it was compiled from source in Linux with g++ 4.8.1 using the provided build.sh script. Windows and Linux 32 and 64 bit executables are also provided.

.2178 LZPXj

LZPXj 1.1d is an experimental open source (GPL) command line file compressor by Ilia Muraviev and Jan Ondrus, May 21, 2006. The -m3 option selects maximum compression. The -e0 option turns off the exe filter (has no effect on text). The -r3 and -a0 options were tuned experimentally on enwik7. -r sets the rescale rate (range 1-5, default 3). -a0 turns off the alternate one byte matcher (default -a1 = on).

LZPXj 1.2h, Mar. 6, 2007, uses LZP + PPM with a preprocessor for x86 executables. It has just one option (1-9), which selects memory usage. The default is 6 and the maximum is 9; each increment doubles usage.

.2179 scmppm

scmppm 0.93.3 is a GPL open source command line compressor for XML files by James Cheney and Joaquín Adiego, Oct. 3, 2005, and using PPMd var. I code by Dmitry Shkarin. It works by grouping XML data by tag, then compressing with ppmd (similar to XMill). scmppm is distributed as UNIX source code only. For this test it was compiled and run under WinXP using the latest version of Cygwin, g++, flex, and make as of May 24, 2006. To compile I had to add the line extern "C" int fileno(FILE*); to lex.yy.c.

The -l 9 option selects maximum compression.

### .2185 acb

acb (discussion) is a shareware archiver for DOS by George Buyanovsky. It achieved some popularity in Russia in 1997 after being described in a popular magazine there. acb uses a complex variant of LZ77 called “associative coding”. (ACB means “associative coding by Buyanovsky”). History is collected in a context sorted ring (like BWT) called a “funnel of analogies”. A string match is coded by the position of the longest (nearest) match in this data structure. The length is coded dependent on the length of neighboring matches. The result is arithmetic coded. There are 4 versions:

• acb 1.02c, Apr. 12, 1995, does simple archiving and multi-volume archiving where the archive is split into equal sized files. It requires 7.6 MB of memory. Source code is included for this version only.
• acb 1.17a, Feb. 15, 1996, has 3 compression modes: “B” fast, “b” default, and “u” maximum or slowest. It also adds error correction and detection and password protection. It uses 15 MB memory. It also has a “taught channel” mode used to patch files. In this mode a separate file is used to train the compressor and must be present during decompression.
• acb 1.23c was released June 23, 1996.
• acb 2.00a was released Apr. 25, 1997. This is the version tested with option “u” for maximum compression.

All versions limit file size to 64 MB but do not limit archive size. To test enwik8, it was divided into 2 equal parts of 50 MB and compressed into one archive. Archives are compressed in "solid" mode. enwik9 was divided into 16 equal parts of 62.5 MB each (named 01 through 16) and compressed to 16 separate archives. The compressor crashed (after 12 hours and producing 1474 MB output in 3 files) with an illegal interrupt when attempting to compress enwik9 into a single archive.

### .2186 crushm

crushm is a free file compressor for Windows by Abhilash, July 12, 2013. It uses CM. It takes no options.

### .2190 PX

PX v1.0 is a free command line file compressor by Ilia Muraviev, Feb. 17, 2006. It is a context mixing compressor based on PAQ1 with fixed weight models.

### .2196 DGCA

DGCA v1.10 is a free, closed source GUI archiver, Aug. 8, 2006. The installer is in Japanese but the program runs in several languages including English. It was tested with default settings except for producing a self extracting archive. This adds 189,936 bytes to enwik8.

### .2200 Squeez

Squeez 5.20.4600 is a commercial (60 day trial) GUI archiver by SpeedProject, Apr. 11, 2006. It supports 13 different formats, but only the native .sqx (possibly LZ77) format was tested. The options used were 2.0 format (newest), 32 MB dictionary (largest, actually uses 365 MB memory), Ultra compression (best), and all checkboxes off (including no exe or multimedia compression). There is a SFX option, but using UnSqueez to decompress instead gives a smaller size.

### .2212 fpaq2

fpaq0s2 is a free, open source (GPL) file compressor by Nania Francesco Antonio, Sept. 29, 2006. It is an order 2 model based on the order 0 compressor fpaq0s by David A. Scott, which is based on fpaq0 by Matt Mahoney by modifying the arithmetic coder. fpaq0x is the same order 2 model based directly on fpaq0.

fpaq0x1a is an order 3 model (hashed context) using fpaq0's arithmetic coder. fpaq0s2b is a similar model based on fpaq0s. Both were released Oct. 1, 2006.

fpaq0x1b (Oct. 6, 2006) switches between different models up to order 3.

fpaq0s3 (Oct. 8, 2006) uses a simple order 0 model on groups of 3 bytes.

fpaq0s4 (Oct. 12, 2006) uses a combined order 0-1-2, PPM and LZ model.

fpaq0s5 (Oct. 15, 2006) improves on fpaq0s4. Memory usage is 200 MB when run at normal priority and 160 MB when run at below normal priority (WinXP Home).

fpaq2 (Oct. 21, 2006) uses a combined context mixing and PPM algorithm.

fpaq0s6 (Oct. 30, 2006) improves on fpaq0s5.

fastari (Nov. 7, 2006) is an order 2 compressor with an all new arithmetic coder and greater speed.

fpaq3 (Nov. 20, 2006) is an order 3 compressor.

fpaq3b (Dec. 2, 2006) is a bitwise order 28 compressor.

fpaq3c (Dec. 21, 2006) is an improved bitwise order 28 compressor.

fpaq3d (Dec. 28, 2006) adds an option to fpaq3c to select memory usage from 16 MB to 2 GB. Option 6 selects 1 GB memory (the highest tested).

All programs are here.

.2217 TinyCM

TinyCM 0.1 is a free, open source (GPL v3) file compressor by David Werecat, Oct. 12, 2012. It uses an order 1-2-3-6 context mixing model. It takes one option, a single digit “level” which apparently has no effect except to store the value in the first byte of the archive. (I used “9”). Memory is the same for compression and decompression. The supplied executables require MSVCR110.dll, which I did not have, so I recompiled the source code with g++ 4.6.1 using “gcc -O3 -march=native -s *.c -I.” on a 2.0 GHz T3200 under 32 bit Vista.

.2226 dmc

dmc is the original DMC compressor written by Gordon V. Cormack in 1987 and described in "Data Compression using Dynamic Markov Modelling", by Gordon Cormack and Nigel Horspool in Computer Journal 30:6 (December 1987). The algorithm is the same as described in hook with the last 2 arguments fixed at "2 2". The dmc argument "c 1800000000" means to compress with 1.8 GB memory. The memory size must also be given for decompression. Thus, 10 bytes (the size of the argument) was added to the decompresser size (source zipped with Info-Zip 2.31 -9). Because dmc compresses and decompresses from stdin to stdout, it was tested in Linux (Ubuntu 2.6.15.27-amd64-generic), compiled in gcc 4.0.3 x86-64 as follows:

and tested on a 2.2 GHz Athlon-64 with 2 GB memory. The compiler argument "-Dexp=expand" removes a compiler error due to a K&R style redefinition of exp().

.2230 lza

lza 0.01 is a free archiver for 32 bit Windows by Nania Francesco Antonio, May 29, 2014. It uses LZ77 (based apparently on zcm). Option -t selects number of threads. Default is -t1. Using a greater number of threads makes compression worse by splitting the input among threads. -h0..-h7 selects hash buffer memory 8 MB to 1 GB. Default is -h2 (32 MB). -b0..-b7 selects LZ buffer memory 8 MB to 1 GB. Default is -b3 (64 MB). Option combinations -b6 -h7 or -b7 -h6 or higher run out of memory. -m1..-m5 selects compression level (faster..better). Default is -m3.

lza 0.10 was released June 29, 2014. It improves compression and speed and adds compression levels -mx1..-mx5 for higher compression. A 64 bit version was released July 3, 2014 to support larger memory options.

lza 0.51 was released Sept. 8, 2014. A 64 bit Windows version was released Sept. 9, 2014. The 64 bit version allows the hash table option up to -h9 using 4 GB memory. It was tested using -h8 (2 GB) and -b7 (1 GB buffer). -t1 selects 1 thread (default). -mx5 selects maximum compression.

lza 0.61 was released Oct. 18, 2014. It is an update to store file dates and empty directories. The -t option is removed so it is single threaded only. -h and -b have a documented max value of 7 (1 GB memory each).

lza 0.62 is a bug fix release, Oct. 20, 2014. Additional options -r (recurse directories), -s (solid mode), -v (verbose) used in testing have no effect on compression.

lza 0.70b (discussion) was released Nov. 19, 2014. It uses ANS coding rather than arithmetic coding, based on the public domain ryg_rans coder by Fabian Giesen. ANS extends ABC (asymmetric binary coding) to larger alphabets. ANS coding theory was developed by Jarek Duda. Max compression level is increased to -mx9.
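
The core rANS recurrence behind coders like ryg_rans is short enough to sketch. This toy version (my own illustration, not Giesen's code) skips the renormalization step by letting the state grow as a Python bignum; symbol frequencies must sum to the power of 2 M:

```python
def _cumulative(freqs):
    # cumulative frequency (start of each symbol's slot range)
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s] = c
        c += freqs[s]
    return cum, c

def rans_encode(symbols, freqs, M):
    """Push symbols onto a single integer state x (LIFO)."""
    cum, total = _cumulative(freqs)
    assert total == M
    x = 1
    for s in reversed(symbols):  # encode in reverse so decoding reads forward
        x = (x // freqs[s]) * M + cum[s] + (x % freqs[s])
    return x

def rans_decode(x, n, freqs, M):
    """Pop n symbols back off the state, undoing each encode step."""
    cum, _ = _cumulative(freqs)
    out = []
    for _ in range(n):
        slot = x % M                 # the slot identifies the symbol
        s = max(t for t in cum if cum[t] <= slot)
        out.append(s)
        x = freqs[s] * (x // M) + slot - cum[s]
    return out
```

Each encode step is exactly invertible, and the symbol is recoverable from x mod M, which is what lets ANS handle a multi-symbol alphabet where ABC handles only bits. A production coder keeps x in a machine word by streaming out low bits.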

LZAwin080test was released Jan. 10, 2015.

lza 0.82b (discussion) was released Mar. 9, 2015. It is not compatible with v0.80. The 64 bit version was tested in Wine.

.2241 brotli

brotli is a free, open source (Apache license) file compressor by Google. It uses LZ77. It was tested by compiling from the Sept. 21, 2015 GitHub commit in the tools subdirectory using the supplied Makefile in Ubuntu Linux with g++ 4.8.4. The -q option selects the compression level. The default is -q 11.

The test was repeated on the release as of Feb. 18, 2016. -w 24 selects the window size. Default is -w 22.

.2276 szip

szip 1.12a is a free, open source file compressor by Michael Schindler, Mar. 3, 2000. It uses a modified BWT (a Schindler transform) which sorts using a truncated string comparison to speed the transform on highly redundant data. The algorithm is protected by patent 6,199,064 in the U.S. until Nov. 19, 2017. The first version of szip was released on June 2, 1997.

The option -b41o16 selects a block size of 4.1 MB (the maximum) and order 16, the maximum length of string comparisons. Memory usage is 17 MB (4x block size) for compression and 21 MB (5x block size) for decompression. o0 means unbounded order, which is the same as a normal BWT. The default is -b16o6.
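
The forward limited-order transform can be sketched directly (an assumed reconstruction of the idea, not szip's code): rotations are sorted by only their first k characters, with a stable sort keeping ties in file order; with k equal to the block length it degenerates to an ordinary BWT.

```python
def st_transform(data, order):
    """Schindler-style sort transform: output the byte preceding each
    rotation, with rotations sorted (stably) by their first `order`
    characters only.  Truncating the comparison is what speeds up
    highly redundant input."""
    n = len(data)
    idx = sorted(range(n),
                 key=lambda i: bytes(data[(i + j) % n] for j in range(order)))
    return bytes(data[(i - 1) % n] for i in idx)
```

The inverse transform is more involved than the inverse BWT, since ties must be resolved by recomputing the truncated contexts; it is omitted here.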

.2282 balz

balz 1.02 is a free, closed source file compressor by Ilia Muraviev, Mar. 8, 2008. It uses LZ77 with arithmetic coding and a 512K buffer with Storer and Szymanski parsing. It takes no options. Memory usage is 346 MB for compression and 18 MB for decompression.

balz 1.06, May 9, 2008, has two compression options, e for normal and ex for better but slower compression. Both options use 67 MB for compression and 48 MB for decompression.

balz 1.07 was released May 14, 2008. It uses 132 MB for compression and 95 MB for decompression.

balz 1.08 was released May 20, 2008. It uses 200 MB for compression and 126 MB for decompression. Only mode ex was tested.

balz 1.09 was released May 21, 2008. It uses 128 MB for decompression. Only mode ex was tested.

balz 1.12 was released June 3, 2008. It uses 123 MB for decompression.

balz 1.13 was released June 11, 2008. It uses 127 MB for decompression.

balz 1.15 was released as open source on July 8, 2008. It uses 67 MB for compression and 49 MB for decompression.

balz 1.20 (discussion) was released Mar. 5, 2015. It is compatible with 1.15 but faster with less compression.

.2291 lzpm

lzpm 0.02 is a free, closed source file compressor by Ilia Muraviev, Apr. 19, 2007. It uses LZ77. It takes no options.

lzpm 0.03, Apr. 28, 2007, uses more memory for compression (181 MB), but still uses 20 MB for decompression.

lzpm 0.04, May 4, 2007, uses ROLZ. Memory usage is 83 MB for compression and 20 MB for decompression. The new design uses circular hash chains for better speed on binary files, but a little slower for text.

lzpm 0.06, May 19, 2007, improves compression over 0.04 with the same memory usage.

lzpm 0.07, Aug. 6, 2007, and later versions use 280 MB for compression and 20 MB for decompression.

lzpm 0.08, Aug. 8, 2007.

lzpm 0.09, Aug. 15, 2007.

lzpm 0.10, Aug. 23, 2007.

lzpm 0.11, Sept. 5, 2007, takes the command 1..9 to choose the compression level (fastest…maximum). 1 uses greedy parsing. 2..8 use 1..7 byte lookahead. 9 uses unbounded lookahead. All modes use 723 MB for compression and 77 MB for decompression.

lzpmlite 0.11, Sept. 13, 2007, is a "lite" version of lzpm, using about half as much memory and running about twice as fast. Options range from 1..9 with 1 being fastest and 9 giving the best compression (3 is a good compromise). All modes use 362 MB for compression and 39 MB for decompression.

lzpm 0.13 was released Dec. 1, 2007.

lzpm 0.14 was released Jan. 1, 2008. It uses 40 MB for decompression.

lzpm 0.15 was released Jan. 16, 2008. It uses 40 MB for decompression.

.2299 qazar

qazar 0.0pre5 is a free, closed source command line file compressor by Denis Kyznetsov, Jan. 31, 2006. It uses LZP, an LZ77 variant where the decompresser dynamically computes the same sequence of context matches as the compressor. The compressor uses a single bit flag to indicate if the pointer computed by the decompresser should be followed. In qazar, the output symbols are arithmetic coded.

The -d9 option selects maximum dictionary size. -x7 selects the maximum hash level (most memory). -l7 selects the maximum search level (slowest).
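
The flag-bit scheme can be illustrated with a toy byte-level LZP (my own sketch, not qazar's actual format): both sides keep a table mapping the recent context to the byte that followed it last time, and the compressor sends one bit saying whether that prediction is correct, plus a literal when it is not.

```python
def lzp_encode(data, context=3):
    """Toy byte-level LZP.  A table keyed on the last `context` bytes
    predicts the next byte; output is a ('hit',) flag or a
    ('miss', byte) literal per input byte.  A real coder would
    entropy-code these tokens."""
    table = {}
    out = []
    for i, b in enumerate(data):
        ctx = data[max(0, i - context):i]
        out.append(('hit',) if table.get(ctx) == b else ('miss', b))
        table[ctx] = b
    return out

def lzp_decode(tokens, context=3):
    """Rebuilds the same table, so a 'hit' flag alone names the byte."""
    table = {}
    data = bytearray()
    for t in tokens:
        ctx = bytes(data[max(0, len(data) - context):])
        b = table[ctx] if t[0] == 'hit' else t[1]
        data.append(b)
        table[ctx] = b
    return bytes(data)
```

Because the decompresser recomputes the identical prediction, no offsets are transmitted at all, only flags and literals.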

### .2317 KuaiZip

KuaiZip 2.3.2 is a free GUI archiver for Windows, Sept. 9, 2011. It uses a proprietary compression algorithm, probably LZMA. It takes no compression options. On the test machine (dual core T3200), compression uses 1.5 threads (75% CPU). Decompression uses one thread. Times are reported by the application.

### .2328 qc

qc 0.050 is a free, closed source, command line file compressor by Denis Kyznetsov, Sept. 17, 2006. The -8 option selects maximum compression (slowest and most memory).

### .2334 ppms

See ppmonstr above.

### .2356 dzo

dzo is a commercial GUI deduplicator and archiver for Windows by Essenso Labs. A beta version (32 day free trial) dated Sept. 15, 2011 was tested. The trial version will compress either a single file or a folder. It first finds duplicate files or regions within files and produces an intermediate temporary file (file.dp) that removes the duplicates. Then it compresses the temporary file using LZMA (7zip) to file.dzo and removes it. The original files are not removed. Decompression restores a single file to (dzo)file or folder(dzo), again through a temporary .dp file. Both commands are activated by right-clicking on the file or folder to compress or the .dzo file to decompress and selecting the command from the context menu. Times are as reported by the application. LZMA compression is multi-threaded.

### .2428 comprox_ba

comprox_ba 20110927 (discussion) is a free, experimental, open source file compressor by Zhang Li, Sept. 27, 2011. It uses BWTS (BWT Scottified) with 4 MB blocks, followed by MTF (move to front), RLEZ (run length encoding of zeros) and arithmetic coding. BWTS is a bijective variant of BWT developed by David A. Scott in which the starting index is not stored. In BWTS, the input is factored into a sequence of lexicographically non-increasing Lyndon words, which are then context-sorted separately. The starting indexes for the inverse BWTS are the beginnings of each word.
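
The factorization step can be computed in linear time with Duval's algorithm. A sketch (the standard algorithm, not comprox_ba's code):

```python
def lyndon_factors(s):
    """Duval's algorithm: factor s into a lexicographically
    non-increasing sequence of Lyndon words.  BWTS context-sorts each
    factor separately, so no BWT start index needs to be stored."""
    out, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1   # restart or extend the period
            j += 1
        while i <= k:                         # emit the repeated factor
            out.append(s[i:i + j - k])
            i += j - k
    return out
```

For example, "banana" factors as "b" >= "an" >= "an" >= "a".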

The program takes no arguments. It uses 103 MB (24x block size) for compression and 25 MB (6x block size) for decompression. There is a Windows and a Linux version. Only the Windows version was tested.

comprox_ba 20110928 was released Sept. 28, 2011. Compression runs in 2 threads. Both the Windows and Linux versions were tested (on different computers).

comprox_ba 20110929 was released Sept. 29, 2011. Compression is slightly improved. Both compression and decompression are now multi-threaded.

.2453 turtle

turtle 0.01 is a free, experimental, closed source file compressor by Nania Francesco Antonio, June 1, 2007. It uses PPM. It takes no options.

turtle 0.02 was released June 2, 2007. Compression is identical.

turtle 0.03 was released June 5, 2007. It is faster and improves compression slightly. The file name is stored in the compressed file.

turtle 0.04 was released June 8, 2007. It recognizes several different file types.

turtle 0.05 was released June 12, 2007. It improves compression at the cost of time and memory.

turtle 0.07 was released June 23, 2007. It includes a model for audio files.

WinTurtle 1.2 is a Windows GUI version of turtle, released Aug. 16, 2007. It uses PPM with LZP preprocessing. It detects .tar, .iso, .nrg, .wav, .aiff, .bmp, .exe, .pdf, .log and text files. Compression times are wall times. Note: the user interface is not fully functional. To compress a file, click "Drive", click on "Buffer" until it is set to 512 MB (it does not work until you click "Drive" first; also, 1 GB caused the program to crash on enwik8), select "File/compress single file" from the upper menu, then select the input file and output archive from the two file dialogs. The program adds a .tur extension to the output archive. To decompress, select File/open archive, click on the file name, click Select, click Extract, and select an output folder from the file dialog.

WinTurtle 1.21, Aug. 16, 2007, fixes an unrelated bug but is otherwise the same as 1.2.

WinTurtle 1.30 was released Aug. 30, 2007.

WinTurtle 1.60 was released Jan. 1, 2008.

.2466 diz

diz is a free, experimental, open source (GPL) file compressor by Roger Flores, Aug. 3, 2012. It is a PPMC based compressor written in Python. It is distributed as source code only. The program was tested as recommended by running in pypy version 1.9.

### .2508 cabarc

cabarc 1.00.0601 is a command line archiver available for free download by Microsoft, Mar. 18, 1997 (SDK released Jan. 8, 2002). It produces .cab files, which are often used to distribute Microsoft software. It is designed for very fast decompression. It uses LZX, a variant of LZ77 with fixed Huffman coding, but with shorter symbols reserved for the three most recent matches. The option -m lzx:21 selects a window size of 2^21 (2 MB) for maximum compression. There is a separate extraction program, "extract". The actual (global) decompression time of 32 sec. includes 15 sec. of CPU (process) time and the rest for disk I/O.

### .2530 sr3

sr2 is a free, open source (GPL) file compressor by Matt Mahoney, Aug. 3, 2007. It uses symbol ranking. It takes no options. There are separate programs for compression and decompression.

Compression is as follows. A 20-bit hashed order-4 context is mapped into the last 3 bytes seen in that context in a move-to-front queue, plus a consecutive hit count. Queue positions (hits) or literals (misses) are arithmetic coded using the count and an order-1 context (order-0 if the count is more than 3) as secondary context. After a byte is coded, it is moved to the front of the queue. The hit count is updated as follows: incremented (max 63) if the first byte is matched, set to 1 if any other byte is matched, or set to 0 in case of a miss.
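
The queue-and-count update described above can be sketched as follows (a paraphrase of the description, not sr2's actual code; the hashed context lookup and arithmetic coding are omitted):

```python
class SRContext:
    """One table slot: the last 3 bytes seen in a context, kept in
    move-to-front order, plus a consecutive hit count."""
    def __init__(self):
        self.queue = []   # most recent first, at most 3 bytes
        self.count = 0

    def code(self, byte):
        """Return what a coder would emit for `byte` (a queue position
        on a hit, None for a literal on a miss) and update state."""
        if byte in self.queue:
            pos = self.queue.index(byte)
            # increment (max 63) on a first-position hit, else reset to 1
            self.count = min(self.count + 1, 63) if pos == 0 else 1
            self.queue.remove(byte)
        else:
            pos = None            # miss: a literal is coded
            self.count = 0
            if len(self.queue) == 3:
                self.queue.pop()  # drop the oldest byte
        self.queue.insert(0, byte)
        return pos
```

In sr2 the returned position or literal is then arithmetic coded, with the count and an order-1 byte as secondary context.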

sr3 (mirror) is a modification by Nania Francesco Antonio, Oct. 28, 2007. The context table size is increased from 4 MB to 64 MB, which effectively increases the context from order-4 to order-5. This helps compression on larger files, but makes it worse for some smaller files. The program also detects file type. For .bmp files, the order is decreased. For .wav files, the input is split into separate 1 byte wide streams for each audio sample. There is no separate compressor and decompresser program.

sr3.exe was recompiled on July 23, 2009 without upack to remove antivirus false alarms, resulting in a larger executable. The new size is shown using source code.

.2540 bzip2

bzip2 1.0.2 is an open source command line single file compressor by Julian Seward, released Dec. 30, 2001. It uses BWT. The -9 option selects maximum compression.

bzip2 1.0.3 (May 22, 2005) compresses very slightly larger but is faster, as shown by the following table. The decompresser size is based on zipped bunzip2.exe. This is smaller than the source (724,919 bytes as a zip download).

.2542 RH5

RH is a free, experimental file compressor by Nauful, Feb. 17, 2014. There are two versions, RH and RH2. RH uses order 3 ROLZ and Huffman coding, using 8 MB memory. RH2 has 3 compression levels using 64 MB memory. Level c1 uses LZP. c2 uses order 1 ROLZ with limited search. c3 uses full search. A literal is coded with 1 bit plus the value. A match is coded with 1 bit to signal a match, 8 bits for the length, and 12 bits for the index into the ROLZ table.

The 32 and 64 bit Windows .exe versions produce incompatible archives. The 32 bit version was tested in Windows. The 64 bit version was tested in Ubuntu under Wine 1.6.

RH2 20Feb2014, released Feb. 27, 2014, has 5 compression levels c1..c5.

RH4_x64, Mar. 22, 2014 is an archiver with file-level deduplication and compression improvements. It has 6 compression levels. There are several earlier versions without version numbers that were not tested.

RH5 was released Nov. 11, 2014. The 64 bit Windows version was tested in Ubuntu/Wine. It has options c1..c6 to select the compression level (default c2). The default -window:23 selects a 2^23 byte window size. Larger windows compress better with more memory up to 27; above that there is no effect. Options -hash:13 and -table:12 select the default hash table and index table sizes. Higher or lower values compress worse. -skip-checksums is not used because it has no effect on compression. However, it skips a check for duplicate files when creating an archive from a directory, which would make compression worse in that case.

.2545 RangeCoderC

RangeCoderC v1.2 (discussion) is a free, experimental open source file compressor by David Catt, Nov. 23, 2011. The option 26 selects a simple bitwise order 26 model. An order n model requires 16*2^n bytes of memory.

RangeCoderC v1.3, Nov. 25, 2011, has 3 versions. The standard version is compatible with v1.2 but uses half as much memory. The “double” version uses a main model to select among several sub-models to improve compression at a cost in speed and memory. There is also an “indirect” version that was not tested because there was no 32 bit Windows version.

RangeCoderC v1.4 was released Nov. 28, 2011. It has 4 versions: standard, double, indirect, and a new version, hashed, which computes a hashed context and gives the best compression.

RangeCoderC v1.5 was released Nov. 29, 2011. It combines the 4 models from v1.4 into one program and includes the model type in the archive header. Option c3 selects the hashed model. It gives the same size as v1.4. The other models were not tested.

RangeCoderC v1.6 was released Dec. 1, 2011. It has 6 compression modes selected by options c0 through c5 as follows:

c1 failed on enwik8. It produced a “compressed” file about 2.5 GB which decompressed incorrectly. The other modes were tested at the highest order allowed by the 2 GB memory space available in the 32 bit version.

RangeCoderC v1.7 alpha, Dec. 5, 2011, fixes the bug in c1 mode in v1.6. The other 5 modes are presumably the same and were not tested. It is a pre-release of version 1.7, released without source code.

RangeCoderC v1.7, Dec. 9, 2011, adds two new compression modes:

The Bytewise Hashed model uses the hash and cache structure from ZPAQ to achieve high speeds, even at higher orders. The Combined Model uses the same structure as the Double Model but has a hashed context and outputs its predictions into an SSE model for better compression.

RangeCoderC v1.8, Dec. 13, 2011, removes two obsolete modes and adds one mode: “The Bitwise Adaptive Model uses probabilities instead of counts, which are adjusted nonlinearly for better compression on changing data. The learning speed of the model is derived from the model order.” The modes are:

Only the new mode (c2) was tested.

quad is a free file compressor by Ilia Muraviev. Only the latest version (now open source) is supported, so only that version appears in the main table.

As described by the author: QUAD uses ROLZ compression (Reduced Offset LZ). It makes use of an order-2 context to reduce the offset set that is matched to. This can be regarded as a fast large dictionary LZ. Literals and Match Lengths fit in a single alphabet which is coded using an order-2-0 PPM with Full Exclusion. Match indexes are coded using an order-0 model. QUAD uses a 16 MB dictionary. For selectable compression speed and ratio, QUAD uses different parsing schemes: with Normal mode (Default) QUAD uses Lazy Matching; with Max mode (-x option) QUAD uses a variant of Flexible Parsing. In addition, QUAD has an E8/E9 transformer for better executable compression which is always enabled.
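The reduced-offset idea above can be sketched as follows. This is an illustrative model only, not QUAD's actual code or format: candidate match positions are indexed by the preceding order-2 context, so a match is identified by a small index into that context's position list rather than a full dictionary offset. The function names and the minimum match length of 3 are assumptions.

```python
from collections import defaultdict

def rolz_index(data, end):
    """Index positions 2..end-1 by their preceding 2-byte context."""
    table = defaultdict(list)
    for i in range(2, end):
        table[bytes(data[i-2:i])].append(i)
    return table

def rolz_find_match(data, pos, table, min_len=3):
    """Return (index, length) of the longest match reachable through the
    order-2 context table (index 0 = most recent occurrence), or None."""
    if pos < 2:
        return None
    ctx = bytes(data[pos-2:pos])
    best = None
    for idx, cand in enumerate(reversed(table[ctx])):
        length = 0
        while pos + length < len(data) and data[cand + length] == data[pos + length]:
            length += 1
        if length >= min_len and (best is None or length > best[1]):
            best = (idx, length)
    return best
```

Because the index is bounded by the number of prior occurrences of the context, it is much cheaper to code than a raw window offset, which is the point of ROLZ.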

quad 1.01a (Dec. 24, 2006) used LZ77. It was closed source and took no options.

quad 1.04a (Feb. 8, 2007) used LZP. Memory was expanded for this version only, however it is no longer supported.

quad 1.07beta (Feb. 22, 2007) included the “x” option for better compression.

quad 1.08 was released Mar. 12, 2007. Quad became open source.

quad 1.10 was released Mar. 19, 2007. -x selects maximum compression.

quad 1.11 (Apr. 4, 2007) uses ROLZ.

quad 1.11HASH2 (Apr. 5, 2007, experimental, no source code) produces the same size archives, but uses a hash table for faster compression.

quad 1.12 was released Apr. 7, 2007.

.2572 WinACE

WinACE 2.61 is a shareware GUI/command line archiver, Mar. 8, 2006. It compresses in ACE and ZIP formats and decompresses many others. ACE decompresses much faster than it compresses, suggesting it is based on LZ77. The option -m5 selects maximum compression. -d4096 selects the maximum dictionary size of 4 MB (default is -d1024 = 1 MB). -sfx creates a self extracting archive, which adds less space than including the decompression program itself.

.2589 lzsr

lzsr 0.01 is a free file compressor for Windows by Nania Francesco Antonio, Oct. 1, 2011. It is described as using a “fusion of LZ77-LZP and SR” and arithmetic coding. It takes no options.

### .2595 zling

zling (discussion) is a free, open source (BSD license) file compressor by Zhang Li, Nov. 1, 2013. It uses order 1 ROLZ, based on the order 3 ROLZ compressor zlite. It takes no options. The compressor is C source code only. To test, it was compiled with gcc 4.8.0 -O3 for 32 bit Windows.

zling (discussion) was updated Dec. 25, 2013. It was tested in Ubuntu with gcc 4.8.1 and Boost_1_55_0 using the supplied Makefile.

zling 20140121 (discussion), Jan. 21, 2014, has some optimizations, and removes Boost. It was tested by compiling with g++ 4.8.1 -O3 in Windows and with the supplied Makefile in Linux.

libzling 20140219, Feb. 19, 2014, separates the program into compression API and a simple demo program. It was tested by building the demo using cmake under Linux as recommended in the readme file.

libzling 20140324 was released Mar. 24, 2014. The demo program has 5 compression levels.

libzling 20140414 was released Apr. 14, 2014. It is faster with better compression.

libzling 20140430-bugfix (discussion) was released May 4, 2014.

libzling 20160107 was released Jan. 5, 2016 and updated Jan. 7, 2016.

.2625 xpv5

xpv5 is a free Windows command line file compressor by Abhilash Anand, Oct. 20, 2011. It is described as using ROLZ with an order 1 back end. It has 3 compression levels: c0, c1, c2. All levels use 9 MB memory for compression or decompression. It is single threaded.

.2660 sr3c

sr3c 1.0 is a free, open source (MIT license) file compressor and library by Kenneth Oksanen, released Nov. 27, 2008. It uses symbol ranking, based on ideas from SR3, but completely rewritten in C. The distribution contains a portable compression engine and source code for drivers for UNIX/Linux. To test, I wrote a simple driver for Windows (sr3cw) and compiled it using gcc 3.4.5 -O3 -fomit-frame-pointer -march=pentiumpro -s and included sr3cw.exe in the distribution. The driver takes no options.

.2665 lzc

lzc v0.01 is a free, closed source file compressor by Nania Francesco Antonio, May 8, 2007. It uses an LZ77 like algorithm. The option 4 selects the maximum memory mode, 1 GB + 100 MB for compression and 16 + 100 MB for decompression. The actual memory usage indicated by Windows Task Manager in this mode was 360 MB for compression and 107 MB for decompression.

lzc 0.03 was released May 11, 2007.

lzc 0.04 was released May 16, 2007. All versions up to 0.04 use 107 MB memory for decompression.

lzc 0.05b was released May 26, 2007. It has options from 1 (fastest) to 16 (best compression). It uses 771 MB to compress and 390 MB to decompress.

All versions through 0.05b are linked in the above archive.

lzc 0.06b was released Aug. 27, 2007. It uses 790 MB (peak) for compression and 409 MB (peak) for decompression.

lzc 0.07 was released Oct. 24, 2007. Options range from 1 (fastest) to 10 (slowest).

lzc 0.08 was released Nov. 15, 2007. It improves BMP and WAV compression.

.2774 nakamichi

Nakamichi 2019-Jul-01 is a free, open source file compressor by Georgi Marinov, July 1, 2019. It uses LZSS. On the test machine it takes 95 days and 302 GB of memory to compress and 1.3 seconds and 2 GB to decompress (memory to memory).

.2794 crush

crush 0.01 is a free, experimental file compressor by Ilia Muraviev, May 17, 2011. It uses LZ77. It has 3 compression modes: cf (fast), c (medium), and cx (best). Compression in all modes uses 143 MB memory, and decompression uses 65 MB.

Source code (public domain) was released on June 26, 2013. The file format consists of 64 MiB blocks with a 4 byte header in machine dependent (LSB first for x86) order giving the block size. Literal and match codes are packed LSB first and padded with trailing 0 bits in the last byte. Codes are as follows:

A match code is followed by 2 fields (call them L and P) giving the offset. L is 4 bits, and gives the length of P. If L is 0000, then P is 5 bits and the offset is P + 1 (1..32). If L is in 1..15, then P is L + 4 bits long and the offset is 2^(L+4) + P + 1 (33..2^20). A match is decoded by going back offset bytes in the output and copying the specified length to the output.
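The offset decoding rule just described can be written directly (a sketch of the decode step only; the function name is mine, not from the crush source):

```python
def crush_offset(L, P):
    """Decode a crush 0.01 match offset from the 4-bit field L and the
    variable-width field P (P is 5 bits if L == 0, else L + 4 bits)."""
    if L == 0:
        return P + 1                   # offsets 1..32
    return (1 << (L + 4)) + P + 1      # offsets 33..2**20
```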

The compressor maintains an index for finding matches consisting of two hash tables of size 2^21 for strings of length 3 and 2^24 for strings of length 4. The second table is maintained as a linked list. The two rolling context hashes are computed by shifting the current hash 7 or 6 bits left, respectively, adding the next byte, and chopping off the high bits. It tests the length 3 hash first, then follows the linked list of length 4 hashes to find the best match for up to 4, 256, or 4096 locations in the input buffer for compression options cf, c, and cx respectively. In addition, for option cx, the compressor looks ahead one byte and codes the current byte as a literal if starting at the next byte produces a better match. A match is better if it is longer with a penalty of log16(offset), plus one for the literal in the case of looking ahead. The minimum match length is 3 for offsets less than 64 KiB, otherwise 4.

To save memory, only the last 2^20 linked list pointers are saved in a rotating queue. As a speed optimization for testing matches, the first and last byte at the current best match length are tested first, then the rest of the string.
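The rolling hash update described above might look like this sketch (the table sizes 2^21 and 2^24 are from the text; everything else, including the function name, is an assumption):

```python
def crush_hashes(window):
    """Sketch of crush's two rolling context hashes: shift the hash left
    by 7 (or 6) bits, add the next byte, and keep only the low bits that
    index the table. After 3 (or 4) shifts, older bytes fall off the top,
    so each hash depends only on the most recent 3 (or 4) bytes."""
    h3 = h4 = 0
    for b in window:
        h3 = ((h3 << 7) + b) & ((1 << 21) - 1)  # order-3 hash, 2^21 slots
        h4 = ((h4 << 6) + b) & ((1 << 24) - 1)  # order-4 hash, 2^24 slots
    return h3, h4
```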

crush 1.00 (discussion) was released July 1, 2013. It increases the window size from 2^20 to 2^21, thus increasing the minimum and maximum length of an offset code by 1 bit, i.e. if L is 0 then P is 6 bits (1..64) and if L is in 1..15 then P is L + 5 bits (65..2^21). Also, the penalty for coding a match offset is changed to log8(offset/16).

.2836 xeloz

xeloz 0.3.5.3 is a free, open source (MIT license) file compressor by xezz, Sept. 7, 2014. It uses LZ77 with the following possible code lengths:

• 9 = literal, one byte uncompressed.
• 18 = match: 12 bit offset (0..4095), 4 bit length.
• 19 = match: 12 bit offset (4096..7935), 4 bit length.
• 27 = match: 13-20 bit offset, 4-11 bit length (depends on window size).
• 35 = match: 24 bit offset, 4 bit length.

Option c889 selects maximum compression. c indicates a sliding window. The first digit 8 selects a 2^(16+8) byte = 16 MB block size (default is 4 = 1 MB). The second digit 8 selects the parsing method, where 0..2 is greedy, 3..5 is lazy, and 6..8 is optimal and uses a suffix array (libdivsufsort) to find matches; higher numbers compress slower but better. Default is 6. The third digit 0..9 (default 2) selects the encoding level, where 9 is slowest with best compression.

xeloz 0.3.5.3a, Sept. 12, 2014, fixed a bug that caused version 0.3.5.3 to crash when decompressing files compressed with uppercase option C. The option selects a fixed rather than a sliding window for faster compression.

.2839 bzp

bzp 0.2 is a free file archiver by Nania Francesco Antonio, Sept. 16, 2008. It uses LZP and arithmetic coding. It takes no options. Earlier versions (0.0, 0.1) were not tested.

.2857 ha

ha 0.98 is a free command line archiver by Harry Hirvola, Jan. 7, 1993. A later version, 0.999b, is available for UNIX with source code and ports to DOS. It uses order-5 PPMC (PPM with fixed escape probabilities for dropping to a lower order context; newer PPM compressors such as PPMZ and PPMII use adaptive escape probabilities given a small context). The command a2 selects compression method HSC (default is a1 = ASC). a21 automatically chooses the best method. Time is ns/byte.

.2910 ulz

ulz 0.01 (discussion) is a free, experimental file compressor by Ilia Muraviev, Feb. 1, 2010. It uses LZ77 with bytewise encoding. The options c1 through c5 select the compression level from fastest to best. The option does not affect memory usage. All levels use 43 MB for compression and 33 MB for decompression.

ulz 0.02 adds a new faster mode (c1). Options c2 through c6 are the same as c1 through c5 in ulz 0.01.

ulz 0.03 was released June 26, 2016. It is byte aligned LZ77 similar to LZ4 but with 16 MB blocks and 256 KB window. It has 3 compression levels: cf, c, cu (fast, normal, ultra). Level cu uses optimal parsing.

ulz 0.06 was released July 13, 2017. It has 9 compression levels, c1 to c9.

.2924 irolz

irolz (source code) is a free, open source (GPL), experimental file compressor by Andrew Polar, Sept. 26, 2010. It uses ROLZ. The algorithm is like LZ77 except that match offsets are coded by counting previous occurrences of the current context in the history buffer rather than as pointers. In irolz, the context is order 2. Previous occurrences are stored in a linked list with a maximum length of 31 (5 bit offset). Matches less than 4 bytes are coded as literals. Symbols (match flags, 5 bit offsets, 8 bit lengths, and 8 bit literals) are binary arithmetic coded. Lengths and literals are coded in an order 2 context model. Match flags and offset counts are modeled without context. Each symbol and context to be predicted is mapped to two 16-bit predictions, one fast adapting (learning rate 1/8) and one slow adapting (rate 1/64). The prediction is the average of the two.
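The two-rate prediction scheme can be sketched as below. The shift-based update rule is an assumption (a common way to implement learning rates of 1/8 and 1/64 on 16-bit probabilities); irolz's actual arithmetic may differ.

```python
class TwoRatePredictor:
    """Sketch of an irolz-style bit predictor: two 16-bit probabilities,
    one fast adapting (rate 1/8) and one slow adapting (rate 1/64), each
    nudged toward the observed bit; the coder uses their average."""
    def __init__(self):
        self.fast = self.slow = 1 << 15   # both start at p = 0.5

    def predict(self):
        return (self.fast + self.slow) // 2   # 16-bit probability of a 1

    def update(self, bit):
        target = 0xFFFF if bit else 0
        self.fast += (target - self.fast) >> 3   # learning rate 1/8
        self.slow += (target - self.slow) >> 6   # learning rate 1/64
```

Averaging a fast and a slow estimate lets the model react quickly to local changes while staying stable on stationary data.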

Only source code is available. For this test, the program irolz.cpp was compiled using g++ 4.5.0 on a 2 GHz T3200 under 32 bit Vista with options -O2 -march=pentiumpro -fomit-frame-pointer -s.

### .2961 lcssr

symbra 0.2 is a free, open source (GPL) (mirror with .exe) file compressor by Frank Schwellinger, Nov. 29, 2007. It uses symbol ranking. Only source code (C++) is provided. For the test, the program was compiled as indicated in the source comments and tested in Windows XP (32 bit). The option -c4 or -c5 selects order 4 or 5 context. -m5 turns on suffix matching with maximum buffer size, which greatly slows compression. -p2 selects 2 passes, which reorders the alphabet by descending frequency. The defaults are -c4 -m0 -p1.

lcssr 0.2 (Dec. 3, 2007, same website) (mirror with .exe) is derived from symbra. It drops the secondary symbol queue and instead uses a variable length context based on the length of the longest match as with LZ77/LZP. The option -b7 selects a 1152 MB buffer for finding context matches.

.2984 zlite

zlite is an open source file compressor by Zhang Li, Aug. 20, 2013. It uses ROLZ. It was released as C source code only. To test, it was compiled with MinGW gcc 4.8.0 with option -O3. zlite takes no options.

### .3062 lazy

lazy v1.00 is a free, open source file compressor by Matt Mahoney, Oct. 10, 2012. It uses LZ77. It has 5 compression levels from 1 to 5. Higher levels are slower and use more memory to compress. However decompression speed does not change and always uses 16 MB.

The LZ77 format codes literals uncompressed after a length code. Matches can have an offset in the range 1 to 2^24-1 and length 4 to 2^24-1. Literals are coded as 00,N,L[N], where N is the number of literals to follow coded in marked binary. A marked binary number discards the leading 1, then precedes each bit by a 1 and marks the end with a 0 bit. For example, 5=101 would be coded as 1,0,1,1,0. Matches are coded as 5 bits to indicate the number of offset bits (where the first 2 bits are not 00) in the range 0..23, then the match length as a marked binary number except for the last 2 bits, then the low 2 bits of the match length coded directly, and then 0 to 23 bits of the offset without the leading 1 bit.
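The marked binary code is easy to implement and check against the example above (function names are mine):

```python
def marked_binary(n):
    """Encode n >= 1 in the 'marked binary' code lazy uses: drop the
    leading 1 bit, precede each remaining bit with a 1, end with a 0."""
    bits = bin(n)[3:]                  # binary digits after the leading 1
    return "".join("1" + b for b in bits) + "0"

def unmark_binary(code):
    """Decode a marked binary string back to the integer."""
    n, i = 1, 0
    while code[i] == "1":
        n = (n << 1) | int(code[i + 1])
        i += 2
    return n
```

The code is self-delimiting: a decoder reading the bit stream knows where the number ends without a separate length field.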

Compression is achieved in a 16 MB sliding window implemented as a pair of buffers. A hash table of 2^19 buckets of 2^level (2..32) pointers each, indexed by an order 4 context hash, maintains pointers for finding matches. The longest match of length at least 4 is coded, except that if the offset is over 64K and the last symbol is a match, then the minimum length is 5.

.3085 zhuff

zhuff 0.1 is a free file compressor for Windows by Yann Collet, Dec. 13, 2009. It is described as a combination of LZ4 and Huff0, a fast Huffman coder. LZ4 uses LZSS, an LZ77 variant using flags to identify matches and literals. It requires the Microsoft runtime libraries, which are not included in the program size shown.

zhuff 0.7, Mar. 15, 2011, is multithreaded. It automatically detects the number of cores and compresses or decompresses in parallel, or the number can be changed with -t. However, since the program is already faster than disk I/O with one thread, using more threads makes no difference in practice. Speeds shown below are total process times. Actual times are 17 seconds to compress and 44 to decompress with either -t1 or -t2. Compressed size is the same either way and the archives are compatible but not identical.

zhuff 0.8 (discussion) has 3 compression levels, from -c0 (fastest) to -c2 (best). All are multithreaded, but decompression at all levels and compression with -c0 is I/O bounded (about 40 seconds). Times are process times for these cases, and real times for -c1 and -c2 compression.

zhuff 0.95b was released Jan. 27, 2014. zhuff 0.97 beta was released Feb. 2, 2014. Both programs were tested using the 64 bit Windows version under Ubuntu Wine. There are also 32 bit Windows versions that produce identical compressed files.

.3092 slug

slug v1.1b (mirror) is a free, closed source file compressor by Christian Martelock, Apr. 26, 2007. It uses an LZ type algorithm with a 128K non-sliding window and Huffman coding. It is designed for high speed and low memory usage. System (wall) times for enwik9: 18 (51) seconds for compression, 14 (30) for decompression.

slug 1.27, May 7, 2007, uses a ROLZ variant with an 8 MB non-sliding window and semi-dynamic Huffman coding trees rebuilt every 4 KB (more frequently near the beginning of a file).

.3098 pigz

pigz 2.2.3 is a free command-line file compressor for Linux, Jan. 15, 2012. It uses the deflate (LZ77) format for compatibility with gzip, but is multi-threaded for better speed at a small cost in compression ratio. -9 selects best compression. Decompression is single-threaded and I/O bound.

pigz is distributed as source code only. It requires linking with zlib version 1.2.3 or higher. For this test, pigz was compiled using the supplied Makefile under Ubuntu Linux with g++ 4.6.1 and linked to zlib 1.2.5. Decompression was tested with unpigz, compiled similarly. It was tested on a 2.66 GHz Core i7 M620 (2 cores x 2 hyperthreads per core) as in note 48. Virtual memory usage was measured with top at 115 MB for compression and 33 MB for decompression. Resident memory usage was 2 MB. Compression time is real time at about 350% CPU usage. Decompression is I/O bound (less than 100% CPU), so CPU time is reported. gzip is shown for comparison.

pigz 2.3, Mar. 4, 2013, adds option -11 implementing Google's zopfli algorithm, a very highly optimized and slow implementation of deflate. Decompression speed is not affected and is compatible with gzip. The test program was built from source code in Ubuntu using the supplied Makefile with g++ 4.6.3.

.3102 kzip

kzip is a free, closed source command line compressor by Ken Silverman, compiled May 13, 2006, released May 18, 2006. It is an optimizing compressor producing zip-compatible archives but with better compression. The option /b512 sets the block splitting threshold. The default is /b256, but /b512 was found optimal on enwik8. The speed option ranges from /s0 to /s3; /s0 (the default) selects maximum compression. No decompresser is included, but archives can be read with any program that reads zip files (pkzip, unzip, 7zip, WinRAR, WinACE, etc).

.3128 uc2

uc2 (UltraCompressor II revision 3 pro) is a commercial (free for noncommercial use) command line and GUI archiver for DOS by Nico de Vries, June 1, 1995. It uses LZ77 and Huffman coding. The -tst option selects maximum compression.

uc2 includes a program for converting archives to self extracting programs (uc2sea) which produced smaller files (enwik8.exe = 35,397,343 bytes, enwik9.exe = 312,759,499 bytes), but in this mode decompression failed for enwik9, truncating the last 21 bytes of output. uc2sea works by first extracting the archive and then recompressing it using a slightly different algorithm.

### .3141 thor

thor 0.9a is an experimental, closed source, command line file compressor by Oscar Garcia, Mar. 19, 2006. It is the fastest compressor on the maximumcompression benchmark. It has 3 modes: ef (fastest), e (normal) and ex (best). However in this test it appears speed may be limited by disk I/O.

thor 0.94 alpha (mirror) (mirror) was released Apr. 22, 2007. exx is a new mode to select maximum compression. Times shown are process times excluding disk I/O. Actual times are 96 sec. to compress, 75 sec. to decompress.

thor 0.95 (mirror), May 8, 2007, has 5 compression options: e1 through e4 are LZP in order of increasing compression; e5 is LZ77. Note that e5 is best on enwik8 but e4 on enwik9.

thor 0.96a, Aug. 23, 2007, works like 0.95.

.3148 etincelle

etincelle alpha 3 is a free file compressor by Yann Collet, Mar. 26, 2010. It uses ROLZ with an order 1 context to reduce the offset length, followed by Huffman coding.

### .3196 lz5

lz5 1.3.3 is a free, open source file compressor by Przemyslaw Skibinski, Jan. 5, 2016. It is a modification of lz4 by Yann Collet. It uses byte-aligned LZ77 codes as follows:

MMM codes the match length from 3 to 9. If MMM = 111, then an additional byte is used to code match lengths of 10 to 265. LL or LLL is the 2 or 3 bit literal length (0..3 or 0..7) following the match.

lz5 was compiled using gcc 4.8.4 with the supplied Makefile for Ubuntu. Option -0 through -18 selects the compression level (fastest..best). Default is -0.

.3211 gzip124hack

gzip124hack (mirror) (discussion) is a modified version of gzip 1.2.4 by Ilia Muraviev, Aug. 13, 2007. It uses LZ77. It is a file compressor like gzip, except that it does not delete the input file. It improves compression by using LZ77 lazy matching with 2 byte lookahead. The compressed format is compatible with gzip. -9 selects maximum compression.

.3224 doboz

doboz 0.1 is a free, open source file compressor by Attila T. Áfra, Mar. 18, 2011. It uses LZ77. It is both a compression library and a simple single-threaded file compressor which takes no options. To test, the supplied compressor for 32 and 64 bit Windows was tested. The 32 bit version crashed while compressing enwik9, possibly due to reading the whole file into memory. The 64 bit version succeeded under Ubuntu/Wine.

.3226 gzip

gzip 1.3.5 is an open source single file command line compressor by Jean-loup Gailly and Mark Adler, Sept. 30, 2002. It uses LZ77 (flate, but not compatible with zip). The -9 option selects maximum compression although its effect is small (see below).

.3226 Info-ZIP

Info-ZIP 2.3.1 (Mar. 8, 2005) is a free, open source archiver for many operating systems. It uses the standard LZ77 "flate" format, like gzip and many zip-compatible programs. (The sizes are exactly 125 bytes larger than gzip). This test was under Linux (Ubuntu 2.6.15.27-amd64-generic) on a 2.2 GHz Athlon-64. Decompression was with UnZip 5.52 (Feb. 28, 2005); both are part of the normal Ubuntu distribution. The -9 option selects maximum compression.

The Windows version 2.32 is dated June 19, 2006.

Info-ZIP 3.00 was released July 7, 2008. Decompression was tested with UnZip 6.00, released Apr. 29, 2009.

.3234 pkzip

pkzip 2.04e is a commercial (free trial) command line archiver by PKWARE Inc. written Jan 25, 1993. It uses LZ77 (flate format). The option -ex selects maximum compression. The decompresser is pkunzip 2.04e. Times are wall times. (Timer doesn’t show process times for DOS programs).

There are many programs that produce zip files. I don’t plan to test them all.

.3237 jar

jar 0.98-gcc is an open source command line archiver by Bryan Burns, 2002. It uses LZ77 (zip). It is included with Java (1.5.0_06) and is normally used to create .jar files for compiled Java applications and applets, but it can also be used as an archiver. It has no compression options. The cvf option creates an archive. The M option says not to add a manifest file.

Note: this is not the jar compressor from Arjsoft.

### .3244 PeaZip

PeaZip 1.0 by Giorgio Tani (Nov. 6, 2006) is a GPL open source GUI archiver supporting several common formats. The format tested is the native format which uses zlib (gzip algorithm). The “better” option chooses best compression (equivalent to gzip -9). Integrity check (checksum) and encryption are turned off.

### .3286 arj

arj 3.10 is a free, open source (GPL v2) archiver by ARJ Software Russia, June 23, 2005. It is compatible with the original ARJ by Robert K. Jung, which was patented (U.S. patent 5140321 A) filed Sept. 4, 1991 and presumably expired. According to the patent, it uses LZ77 with flags to indicate a repeat of the last match (like LZX used in cabarc). Matches are found from a hash table of FIFO queues.

The options -m0 through -m4 select the compression level. -m0 stores with no compression. The default, -m1, gives maximum compression; -m2 through -m4 compress progressively faster but produce larger output, with slower decompression.

.3344 lzgt3a

lzgt1 (click on lzgt3a.zip) is one of a group of free, open source, experimental file compressors by Gerald R. Tamayo, released July 17, 2008. It uses LZT (Lempel-Ziv-Tamayo) compression, an LZ77 variant in which the decompresser rebuilds a list of matches sorted by context match length, and the match length is implied or partially implied by the position in the list. lzgt implements LZT using a 4K sliding window, 32 byte look-ahead buffer and 3 bit code length. lzgt1 is like lzgt but uses a 16K sliding window and 128 byte look-ahead buffer. lzgt2 eliminates the code length entirely. lzgt3 is an improved version of lzgt2. All programs have separate decompressers (lzgtd1, etc.) and are compiled for DOS (and Windows).

lzgt3a was added Oct. 25, 2008. It uses a 128K window size, 64K lookahead buffer, and improved coding.

.3388 lzuf

lzuf is a free, experimental open source file compressor by Gerald R. Tamayo, Apr. 15, 2009. It uses LZ77 with folded unary encoding of match lengths. It takes no arguments. It has a separate decompression program, lzufd.exe.

### .3502 pucrunch

pucrunch is a free, open source file compressor by Pasi Ojala, last updated Mar. 8, 2002. It uses a combination of run length encoding (RLE) and LZ77 with Elias Gamma coding of the offsets and run lengths. The original version was written on Mar. 14, 1997 for the Commodore series (Vic 20, Commodore 64, Commodore 128 and Commodore Plus 4/C16) in 6510 assembly language, with updates on Dec. 17, 1997 and Oct. 14, 1998. The 6510 is a 1 MHz, 8 bit microprocessor with 3 registers, 16 bit (64K) address space, no cache, no pipelining, 8 bit ALU, no multiply or floating point instructions, and no support for multitasking or virtual memory. The decompresser was designed to execute quickly in this environment with only a few hundred bytes of memory.

The most recent version was written in Visual C and ported to Windows as a cross compressor intended to produce self extracting archives for the Commodore. By default, pucrunch appends a 276 byte header containing 6510 code to extract the file. There are also standalone decompressers written in 6510 assembler and in Z80 assembler. I could not test in these environments, so I used the -d -c0 options to turn off the self extracting feature, which requires the (larger) Win32 external compressor/decompresser.

There are two additional limitations. First, the decompresser appends a 2 byte header to indicate the load address, which is required by the Commodore. To make the decompressed file bitwise identical, this must be stripped off. Second, the input file size is limited to 64,936 bytes. The author tested a modified version without a file size limit on the Calgary corpus, but this modified version was not posted, so I did not use it.

To overcome these limitations I wrote two Perl scripts to compress and decompress. The first script compresses by splitting the input into blocks of 64,936 bytes, compressing them separately, and appending the compressed files, each with a 2 byte header to indicate the block size. The second script decompresses each block one at a time, strips off the 2 byte Commodore header, and appends them. Each script takes the input and output files as command line arguments. The second script is included in the decompresser size.
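The framing the scripts implement can be sketched in Python (not the author's Perl; zlib stands in for pucrunch, the 2 byte header is assumed to hold the compressed block size so the stream can be parsed, and the Commodore load-address header is omitted since zlib does not add one):

```python
import zlib

BLOCK = 64936  # pucrunch's input size limit, per the text

def pack(data, compress=zlib.compress):
    """Split into BLOCK-byte pieces, compress each, and frame each
    compressed piece with a 2-byte big-endian size header."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        blk = compress(data[i:i + BLOCK])
        out += len(blk).to_bytes(2, "big") + blk
    return bytes(out)

def unpack(packed, decompress=zlib.decompress):
    """Walk the headers, decompress each block, and concatenate."""
    out, i = bytearray(), 0
    while i < len(packed):
        n = int.from_bytes(packed[i:i + 2], "big")
        out += decompress(packed[i + 2:i + 2 + n])
        i += 2 + n
    return bytes(out)
```

Note the 2 byte header caps each compressed block at 65,535 bytes, so this framing assumes every block actually shrinks.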

pucrunch suggests using -p1 and -m6 options to improve compression but these do not help.

Run times are wall times. Using scripts, Timer 3.01 does not provide useful process times, since it times Perl rather than pucrunch. The decompression time (463 sec) is probably high because Windows Task Manager shows that pucrunch is running only a small fraction of the time, perhaps 10%. Most of the time is probably the overhead of file I/O and running pucrunch 15,400 times.

### .3619 packARC

[packARC v0.7RC11](https://www.dropbox.com/s/uq0nwgvr12ylut4/packARC v0.7RC11 (beta!) (GPL).zip) (discussion) is a free, open source (GPL v3) archiver by Matthias Stirner, Dec. 7, 2013. It incorporates packJPG (JPEG compressor), packMP3 (MP3 compressor) and packPNM (BMP, PPM, PGM, PBM image compressor). Other file types are compressed with a simple context model and arithmetic coder. Option -sfx creates a self extracting archive. Option -np tells the program not to pause when done. For this test, the source was compiled with MinGW g++ 4.8.0 using the supplied buil_packarc.bat for 32 bit Windows.

### .3626 urban

urban is an open source file compressor for Unix by Urban Koistinen, Apr. 30, 1991. The program is an order-2 indirect context model with bitwise arithmetic coding. A hash of the last two whole bytes plus the previously coded bits of the current byte (MSB first) is mapped to a hash table of size 710123. Each table element contains a count of 0s and 1s in the range 0 through 8, and a hash verification consisting of a second hash. When a collision is detected, the counts are reset to 0. Otherwise, the appropriate count is incremented and both are halved if either exceeds 8.

The pair of bit counts and the character count mod 3 (probably unnecessary) are mapped to a second table of counts to compute the next-bit probability. That table is updated by incrementing the appropriate count and halving both if the sum exceeds 60000. The initial mapping of this second table is (n0,n1) to (n0,n1) except if either of the input counts is 0, in which case the mapping is (0,n1) to (1,1+2^n1) or (n0,0) to (1+2^n0,1). The final bit prediction is n1/(n0+n1).
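The count mapping and prediction above translate directly into code (function names are mine; the behavior when both counts are zero is not specified in the text and is left as the identity):

```python
def urban_init(n0, n1):
    """Initial mapping of first-table counts into the second table:
    identity, except when one count is zero."""
    if n0 == 0 and n1 > 0:
        return 1, 1 + 2 ** n1
    if n1 == 0 and n0 > 0:
        return 1 + 2 ** n0, 1
    return n0, n1

def urban_update(n0, n1, bit):
    """Increment the observed count; halve both if the sum exceeds 60000."""
    if bit:
        n1 += 1
    else:
        n0 += 1
    if n0 + n1 > 60000:
        n0, n1 = n0 // 2, n1 // 2
    return n0, n1

def urban_predict(n0, n1):
    """Probability that the next bit is a 1."""
    return n1 / (n0 + n1)
```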

The program was a submission to a data compression contest for Dr. Dobbs Journal. To test, the source code was compiled using make and tested in Linux. It compresses and decompresses from standard input to standard output. It takes no options.

### .3663 lzop

lzop v1.01 is a free, open source (GPL) command line file compressor by Markus F.X.J. Oberhumer, Apr. 27, 2003. A newer version, 1.02 rc1 was released July 25, 2005, but no Win32 executable was available for download as of May 29, 2006. lzop uses LZ77. It is designed for high speed. -9 selects maximum compression. lzop is I/O bound. timer 3.01 reports the decompression process time as 12 seconds. The remaining 38 seconds is due to disk access.

### .3676 lzw

lzw v0.1 is a free, experimental file compressor by Ilia Muraviev, Jan. 30, 2008. It uses LZW with 16 bit code words. It takes no options.

lzw v0.2 was released with public domain source code for the decompresser, which zips to 671 bytes. The file format is as follows. There is no header or trailer. Each 16 bit code word is in machine dependent order (LSB first on x86). Codes 0-255 represent single bytes of the same value. Codes 256-65535 are assigned in ascending order by concatenating the decoded values of the previous two codes. After assigning code 65535, new codes are assigned by replacing the oldest codes first, starting with 256. Data is decoded into a rotating buffer of size 16 MiB (2^24 bytes) by copying a string from elsewhere in the buffer. Neither the original nor copied string crosses the buffer boundary, and they do not overlap each other. No new symbol is added after decoding the first byte of the buffer.
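The dictionary rule can be sketched as follows. This models only the code-assignment behavior described above; the rotating 16 MiB buffer, oldest-code replacement after 65535, and LSB-first byte packing are omitted, and the exact timing of assignments relative to decoding is an assumption:

```python
def lzw2_decode(codes):
    """Sketch of the lzw v0.2 dictionary rule: each new code is the
    concatenation of the strings decoded from the previous two codes."""
    dic = {i: bytes([i]) for i in range(256)}   # codes 0-255 = literal bytes
    out, prev, nxt = bytearray(), None, 256
    for c in codes:
        s = dic[c]
        out += s
        if prev is not None and nxt < 65536:
            dic[nxt] = dic[prev] + s            # concat of previous two codes
            nxt += 1
        prev = c
    return bytes(out)
```

Note this differs from classic LZW, where a new entry is the previous string plus only the first byte of the current one; here whole strings are concatenated, so the dictionary grows longer entries faster.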

.3701 MTCompressor

MTCompressor v1.0 (discussion) is a free, experimental command line compressor for Windows by David Catt, Jan. 20, 2012. It uses an LZ77 variant similar to deflate. It is multi-threaded. Reported time is real time running on 2 cores (note 26). Memory usage fluctuates during use. The peak is reported.

### .3721 lz4x

lz4opt v1.00 is a free, closed source file compressor for 32 bit Windows by Ilia Muraviev, Feb. 9, 2016. It is compatible with LZ4, an LZ77 compressor. Options cf, c, cb compress fast, normal, and best respectively.

lz4x v1.02 was released Apr. 6, 2016. The options c1..c4 compress faster..better with LZ4 compatibility.

### .3790 arbc2z

arbc2z is a free, experimental command line file compressor with source code by David A. Scott, June 23, 2006. It is a bijective order-2 (PPM) arithmetic coder. A bijective coder has the property that all inputs to the decompresser are valid and produce distinct outputs. The above archive also contains arbc2, which uses a different method of handling of the zero frequency problem, arbc1 (order 1), and arbc0 (order 0), all of which are bijective.

### .3800 lz4

lz4 v0.2 (website) is a free file compressor by Yann Collet, Oct. 16, 2009. It uses LZSS (an LZ77 variant with flags to mark literals and matches). It takes no options. Run times are dominated by disk access.

lz4 0.6 was released Dec. 12, 2010. lz4hc 0.9 (Dec. 13, 2010, same link) is a compatible version with better compression. In both cases, run times are dominated by disk access. Times shown are process times. Actual times were 80+37 sec. for lz4 and 137+39 sec. for lz4hc. The programs take no compression options.

lz4 v1.2 was released Oct. 10, 2011. It has 3 compression levels (c0…c2). The program automatically detects the number of cores (2, note 26) and uses the same number of threads. However compression in mode c0 and all decompression modes are I/O bound, using about 20% of available CPU. For these modes, process time is reported. Compression modes c1 and c2 are real times with both cores fully utilized.

### .3802 lzss

lzss 0.01 (withdrawn) is a free, experimental file compressor by Ilia Muravyov, Aug. 1, 2008. It uses LZSS, a byte aligned LZ77 variant with matches encoded with an 18 bit pointer and 6 bit length field, and 1 bit flags to distinguish matches from literals. It is discussed here. Compression options are e (fast) or ex (smaller). The program is designed for fast decompression. The program uses 625 MB for compression and 33 MB for decompression.
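The 18 bit pointer and 6 bit length fit exactly in 3 bytes. A hypothetical packing (the actual bit order used by lzss is not documented here) might look like this:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: low 6 bits = match length, upper 18 bits =
   offset, stored as 3 bytes LSB first. Assumes offset < 2^18, len < 64. */
void pack_match(uint32_t offset, uint32_t len, uint8_t out[3]) {
    uint32_t v = (offset << 6) | (len & 63);
    out[0] = (uint8_t)v;
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v >> 16);
}

void unpack_match(const uint8_t in[3], uint32_t *offset, uint32_t *len) {
    uint32_t v = in[0] | ((uint32_t)in[1] << 8) | ((uint32_t)in[2] << 16);
    *offset = v >> 6;
    *len = v & 63;
}
```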

lzss 0.02 (discussion) was released Feb. 7, 2014. Options cf, c, cx select fast, medium, and best compression.

### .3894 xdelta

xdelta 3.0u is a free, open source command line file compressor by Joshua McDonald, Oct. 12, 2008. It uses LZ77. The program is a delta coder, meaning it will output the compressed difference between two files, and then decompress the second file when given the first file uncompressed. It allows the first file to be omitted, in which case it simply compresses. This is how the test was done. -9 specifies maximum compression.

### .3901 BriefLZ

BriefLZ 1.05 is a free, open source (C and MASM) file compressor by Joergen Ibsen, Jan. 15, 2005. It uses LZ77. It takes no options. It uses about 2 MB memory for compression and about 900 KB for decompression.

brieflz 1.1.0 was last updated Sept. 23, 2015. To test, it was compiled (as blzpack) using the supplied Makefile in the example subdirectory of the GitHub distribution, using gcc 4.8.1 in Windows (note 26) and gcc 4.8.4 in Linux (note 48).

### .3972 mtari

mtari 0.2 is a free, open source (GPL v3) file compressor by David Werecat, Dec. 10, 2013. It is a multi-threaded bitwise order 17 context model with arithmetic coding. To test, it was compiled with MinGW gcc 4.8.0 with options -O2 -fopenmp.

### .4068 lzf

lzf v1.00 (discussion) is a free, experimental file compressor by Ilya Muravyov, Oct. 29, 2013. It uses byte aligned LZ77 with an 8 KB window. Commands c and cx give faster or better compression, respectively.

lzf 1.01, Oct. 29, 2013, is a performance optimization with no change in compression.

lzf 1.02 (discussion) was released Oct. 2, 2014.

### .4092 srank

srank 1.1 is a free, open source file compressor by P. M. Fenwick, originally written Sept. 5, 1996 and last updated Apr. 10, 1997. It uses symbol ranking, like MTF (move to front) in BWT, but in order 3 contexts without a BWT transform. When a symbol is encountered it is encoded with 1, 3, or 4 bits according to its position in a queue of length 3, then moved to the front. Long runs of first place symbols are run length encoded using 12 bits to encode the length of the run. A miss is coded using pseudo-MTF in an order-0 context using 7 bits for the first 32 symbols and 12 bits for the rest. It is pseudo-MTF because after a symbol is found it is swapped with another symbol about half way to the front, with some dithering. The algorithm is designed for speed rather than good compression.

The -C8 option selects the maximum number of contexts, 2^18. For this test, the C source code was compiled with MinGW 3.4.5:

### .4106 QuickLZ

QuickLZ v0.1 is an open source (GPL) compression library designed for high speed by Lasse Mikkel Reinhold, Sept. 24, 2006. Tests were performed with demo.exe. Speed is I/O bound. Times shown are process times, but wall times can be 2-4 times greater. On enwik9 compression, the program reports “file too big”.

Version 0.9 (Oct. 22, 2006) is a faster version (quick.exe) which handles large (64 bit) files.

Version 1.20 (Mar. 15, 2007) is an archiver rather than a file compressor.

Version 1.30 beta (Apr. 16, 2007) has 4 modes (0-3) with 4 separate executables. Only version 3 (quick3.exe, max compression) was tested.

Version 1.30 (Aug. 14, 2007) modes 0, 1, and 2 are compatible with version 1.20, but mode 3 (best compression) is new.

Version 1.40 (Nov. 13, 2007) is an experimental version designed for better speed. It has only one mode.

### .4165 stz

stz 0.7.2 is a free, experimental file compressor by Bruno Wyttenbach, Feb. 15, 2011. It uses LZ77. It has 4 compression modes as shown in the table below. Times are process times. Real times are closer to 40-45 seconds. Memory is 3.3 MB for all compression modes and the same for decompression. Most of the memory is for I/O buffers (2 MB each). The actual algorithm uses 48 KB. Modes -c and -c3 compress to the same size but the archives differ by 1 byte in the header. stz.exe zip size is 40,425.

stz 0.8, Mar. 4, 2011, improves compression and adds two new experimental modes. Compression and decompression process times in ns/byte are given below for both enwik8 and enwik9. Wall times are slower due to disk I/O. Modes -c, -c1, and -c2 select best compression speed, best uncompression speed, and best size respectively, but this appears only to hold for enwik8, probably because of disk I/O interference. Modes -c3, -c4, and -c5 produce identical archives. Additional changes are a Drag’n’drop interface, a CRC check (adds 2% to time), and more flexible command line interface. 5313_stz.zip size is 41,941.

### .4246 compress

compress 4.3d is the Windows version of the UNIX compress command, released Jan. 18, 1990. It uses LZW and has no compression options.

### .4382 lzrw3-a

lzrw3-a is one of a series of public domain (open source) memory to memory compressors by Ross Williams in 1991. The programs were implemented as file compressors by Matt Mahoney on Feb. 14, 2008. The programs are as follows:

lzrw1 (Mar. 31, 1991) is byte-aligned LZ77 with a 12 bit offset and 4 bit length field allowing lengths 3-16. Each group of 16 phrases (pointers or literals) is preceded by 2 flag bytes to distinguish pointers from literals. Matches are found using a 4K hash table without confirmation which is updated after each phrase. It uses 16K of memory plus the input and output buffers.

lzrw1-a (June 25, 1991) is lzrw1 except that the length field represents values 3-18.

lzrw2 (June 29, 1991) replaces the offset with a 12 bit index into a rotating table of offsets, allowing the last 4K phrases (rather than 4K bytes) to be reached. The decompresser must reconstruct the phrase table (but not the hash table). It uses 24K memory plus buffers.

lzrw3 (June 30, 1991) replaces the 12 bit length field with a 12 bit index into the hash table. The decompresser must reconstruct the hash table. It uses 16K memory plus buffers.

lzrw3-a (July 15, 1991) uses a deep hash table (8 offsets per hash) with LRU replacement. It uses 16K memory plus buffers.

lzrw5 (July 17, 1991) uses LZW. The dictionary is implemented as a tree. It uses up to 384K memory plus buffers.

There is an experimental lzrw4, but it was never fully implemented.

All of the compression algorithms were originally implemented as memory to memory compression functions in C, not as complete programs. I wrote a driver program which divides the input into 1 MB blocks (except lzrw5), compresses them independently by calling the provided functions, and writes the compressed size as a 4 byte number followed by the compressed data. However, compression could be improved by using larger blocks at the cost of more memory. For lzrw5 the block size is 64K because the program is not guaranteed to work correctly for larger blocks. It did work on this benchmark for a 192K block size, but not for 256K. The distribution linked above uses a 64K block size.
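A minimal sketch of such a driver (mine, not the distributed code) might look as follows, where compress_fn stands in for any of the lzrw* memory to memory routines and store_block is a trivial "store" codec for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

#define BLOCK (1 << 20)  /* 1 MB input blocks */

/* stands in for any lzrw* memory to memory compression routine */
typedef size_t (*compress_fn)(const uint8_t *in, size_t n, uint8_t *out);

/* A trivial "store" codec, for illustration and testing only. */
size_t store_block(const uint8_t *in, size_t n, uint8_t *out) {
    memcpy(out, in, n);
    return n;
}

/* Write each block as a 4 byte little-endian compressed size followed
   by the compressed data, as in the driver described above. */
void pack_stream(FILE *in, FILE *out, compress_fn compress) {
    static uint8_t ibuf[BLOCK], obuf[BLOCK + BLOCK / 8];  /* slack for expansion */
    size_t n;
    while ((n = fread(ibuf, 1, BLOCK, in)) > 0) {
        size_t c = compress(ibuf, n, obuf);
        uint8_t hdr[4] = { (uint8_t)c, (uint8_t)(c >> 8),
                           (uint8_t)(c >> 16), (uint8_t)(c >> 24) };
        fwrite(hdr, 1, 4, out);
        fwrite(obuf, 1, c, out);
    }
}
```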

### .4473 fcm1

fcm1 is a free, open source file compressor by Ilia Muraviev, May 23, 2008. It mixes order 0 and order 1 models and uses bitwise arithmetic coding as in fpaq0 and paq. The bit predictions are combined by weighted averaging, with the order 1 model weighted 15/16 unless the model is in its initial state, in which case the order 0 model prediction is used. Each context is mapped to 2 16-bit counters in initial state 1/2. One counter is updated by 1/8 of the prediction error and the other by 1/32. The model prediction is the average of these two values. The compressed file has a 4 byte header containing the file size.
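The counter scheme is simple to sketch in C (names are mine, not fcm1's source): each counter is a 16-bit probability scaled so that 65536 represents certainty of a 1 bit, with initial state 1/2 = 32768, adjusted toward the last bit by 1/8 or 1/32 of the error:

```c
#include <assert.h>
#include <stdint.h>

/* Two probability counters per context: one fast-adapting, one slow. */
typedef struct { uint16_t fast, slow; } Counter;

/* Prediction is the average of the two counters, P(bit=1) * 65536. */
unsigned counter_predict(const Counter *c) {
    return (c->fast + c->slow) / 2;
}

/* Move each counter toward the observed bit by a fraction of the error. */
void counter_update(Counter *c, int bit) {
    int t = bit << 16;                                       /* target: 0 or 65536 */
    c->fast = (uint16_t)(c->fast + ((t - c->fast) >> 3));    /* 1/8 of error  */
    c->slow = (uint16_t)(c->slow + ((t - c->slow) >> 5));    /* 1/32 of error */
}
```

After a run of identical bits the fast counter converges quickly while the slow counter remembers the longer history; averaging them trades off the two adaptation rates.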

.4581 runcoder1

runcoder1 is a free, open source (GPL) file compressor by Andrew Polar, Mar. 30, 2009. It uses an order 1 model with arithmetic coding. It takes no options. The program is available as source code (C++) only. For this test it was compiled with MinGW g++ 3.4.2 with options -O2 -march=pentiumpro -fomit-frame-pointer -s for 32-bit Vista as noted in note 26.

### .4598 data-shrinker

data-shrinker is a free, open source file compressor by Siyuan Fu, Mar. 23, 2012. It uses an LZ77 format similar to LZ4 for high speed. It takes no options. No executable was provided. To test, the source code was compiled with g++ 4.5.1 -O3 -s under 32 bit Windows and process times measured with output to nul:

### .4638 lzwc

lzwc 0.3 is a free, open source (GPL) file compressor by David Catt, Jan. 15, 2013. It uses LZW with dictionary entries coded using 2 bytes. There is also a version 0.1 which produces identical compressed files but is not as fast. The program takes no options.

lzwc v0.7 fixes a bug in decompression of binary files, but does not change compressed size or speed. lzwc_bitwise is a version that uses less than 16 bits to encode symbols when the dictionary is small.

### .4798 exdupe

exdupe v0.3.3 beta is a deduplicating archiver supporting full and incremental backups, under development by Lasse Reinhold, Oct. 20, 2011. When the beta phase ends, it will be a commercial program with source code available under restricted and non-permissive terms. Only 64 bit systems are supported. Partial source code is available for this version, although not for the compression and decompression code, which is derived from QuickLZ (LZ77). It was tested in Linux. A later version, 0.3.6 beta, was available only for 64 bit Windows on Oct. 30, 2012, and was not tested.

### .4884 lzv

lzv 0.1.0 is a free, experimental file compressor for Windows by Valéry Croizier, Jan. 1, 2014. It takes no options.

### .4930 FastLZ

FastLZ is a free, open source compression library and file compressor by Ariya Hidayat, announced June 12, 2007 with no date or version number, and downloaded and tested on June 16, 2007. It uses byte-aligned LZ77. The software was released as source code only (in C). For this test it was compiled with MinGW gcc 3.4.5 as suggested by README.TXT (plus -s to strip debugging info):

6pack and 6unpack are the compressor and decompresser, respectively. They take no options. The compressed file name is stored without a path in the archive.

### .4945 sharc

sharc 0.9.6 beta is a free, open source (GPL v3) file compressor by Guillaume Voirin, Aug. 1, 2013. It uses dictionary coding. Option -c0 uses 1 pass and -c1 uses 2 passes for better compression.

sharc 0.9.10 was released Dec. 12, 2013.

sharc 0.9.11b, Dec. 14, 2013 has compression levels -c1 and -c2. -c0 selects no compression. -c1 selects dictionary encoding. -c2 selects LZP preprocessing followed by dictionary coding. The program uses the Density 0.9.12b compression library which is now a separate component.

### .4975 flzp

flzp v1 is a free, open source file compressor by Matt Mahoney, June 18, 2008. It uses byte-oriented LZP. The input is divided into blocks such that at least 33 byte values never occur, or 64KB, whichever is smaller, then uses those bytes to code an end of block symbol plus match lengths from 2 up to the number of unused bytes - 1. A match length is decoded by finding the most recent context hash match in a 4 MB rotating buffer and outputting the bytes that follow. It uses a 1M hash table and an order 4 context hash. Each block begins with a 32 byte bitmap to distinguish symbols for matches from literals. flzp can be used as a preprocessor to a low order compressor like fpaq0 or ppmd -o3 to improve compression and speed.

### .5157 alba

alba 0.1 is a free, open source, experimental file compressor by xezz, Feb. 4, 2014, updated Feb. 5, 2014 to fix a bug in the “C” option. It uses byte pair encoding. The option c32768 selects the maximum block size. The default is 4096. It has an “optimal” compression mode “C”. It was tested in Linux by compiling with gcc 4.8.1 -O3.

alba 0.2, Feb. 6, 2014, adds extreme (e) mode. Modes c and C are unchanged.

alba 0.5.1, Feb. 18, 2014, adds dynamic block sizing (cd).

### .5277 snappy

snappy 1.0.1 is a free, open source (Apache) compression library for Linux from Google, Mar. 25, 2011. It uses byte aligned LZ77, and is intended for high speed rather than good compression. Google uses snappy internally to compress its data structures for its search engine.

The compressed data contains tag bytes such that the low 2 bits indicate literals and matches as follows:

A literal of length 1 to 60 is encoded by storing the length - 1 in the upper 6 bits. Longer literals are coded by storing 60..63 in the upper 6 bits to indicate that the length is encoded in the next 1 to 4 bytes in little-endian (LSB first) format. This is followed by the uncompressed literals.

Matches of length 4 to 11 with offsets of 1 to 2047 are encoded using a 1 byte match. The match length - 4 is stored in the middle 3 bits of the tag byte. The most significant 3 bits of the offset are stored in the most significant 3 bits of the tag byte. The lower 8 bits of the offset are stored in the next byte. A match may overlap the area to be copied. Thus, the string “abababa” could be written using a literal “ab” and a match with an offset of 2 and length of 5. This would be encoded as the hex bytes 04 61 62 05 02: tag byte 04 (literal, length 2), the literals 61 62 (“ab”), then tag byte 05 (1 byte match, length 5, offset high bits 0) followed by the low offset byte 02.

Matches of length 1 to 64 with offsets of 1 to 65535 are encoded using a 2 byte match. The length - 1 is encoded in the high 6 bits of the tag byte. The offset is stored in the next 2 bytes with the least significant byte first. Longer matches are encoded as a series of 64 byte matches with a final shorter match of 4 to 63. If the final part of the match is less than 4 then it is encoded as a 60 byte match plus a 4 to 7 byte match.

A 4 byte match allows offsets up to 2^32 - 1 to be encoded as with a 2 byte match. The decompresser will decode them but the compressor does not produce them because the input is compressed in 32K blocks such that a match does not span a block boundary.

The entire sequence of matches and literals is preceded by the uncompressed length up to 2^32 - 1 written in base 128, LSB first, using 1 to 5 digits in the low 7 bits. The high bit is 1 to indicate that more digits follow.
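This length preamble is an ordinary little-endian base-128 varint, which can be coded as follows (a sketch, not Snappy's source):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Write v as 1 to 5 base-128 digits, LSB first; the high bit of each
   byte is set when more digits follow. Returns the byte count. */
size_t varint_encode(uint8_t *out, uint32_t v) {
    size_t n = 0;
    while (v >= 0x80) { out[n++] = (uint8_t)(v | 0x80); v >>= 7; }
    out[n++] = (uint8_t)v;
    return n;
}

size_t varint_decode(const uint8_t *in, uint32_t *v) {
    size_t n = 0;
    int shift = 0;
    *v = 0;
    do {
        *v |= (uint32_t)(in[n] & 0x7f) << shift;
        shift += 7;
    } while (in[n++] & 0x80);
    return n;
}
```

Values below 128 take one byte; the full enwik9 length of 1,000,000,000 takes the maximum 5 bytes.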

Compression searches for matches by comparing a hash of the 4 current bytes with previous occurrences of the same hash earlier in the 32K block. The hash function interprets the 4 bytes as a 32 bit value, LSB first, multiplies by 0x1e35a7bd, and shifts out the low bits. The hash table size is the smallest power of 2 in the range 256 to 16384 that is at least as large as the input string. As an optimization for hard to compress data, after 32 failures to find a match, the compressor checks only every second location in the input for the next 32 tests, then every third for the next 32 tests, and so on. When it finds a match, it goes back to testing every location.
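The hash just described can be sketched as follows (an illustration based on the description above, not Snappy's source; table_bits is assumed to be log2 of the hash table size):

```c
#include <assert.h>
#include <stdint.h>

/* Interpret 4 bytes as a little-endian 32-bit value, multiply by
   0x1e35a7bd, and keep the top table_bits bits as the table index. */
uint32_t snappy_hash(const uint8_t *p, int table_bits) {
    uint32_t v = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    return (v * 0x1e35a7bdu) >> (32 - table_bits);
}
```

The multiply by an odd constant mixes all input bits into the high bits, which the shift then extracts, so the result always fits the table.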

As another optimization for the x86-64 architecture, copies of 16 bytes or less are done using two 64-bit assignments rather than memcpy(). To support this, if 15 or fewer bytes remain after a match then they are encoded as literals with no further search.

Snappy compresses from memory to memory rather than from file to file, so it was necessary to write a small test program (below), which was not included in the compressed size. The program loads the input into a string, compresses or decompresses it to a new string, and writes it to output. It gives the best possible compression but is not optimal for speed or memory. With this test, speed is 25 ns/byte for compression and 12 ns/byte for decompression (under 64 bit Linux). In a separate test (not shown), compressing in 32K chunks takes 9 ns/byte with very slightly larger size due to storing the size in each chunk. Decompression was not tested in this mode, but should be twice as fast. Memory usage for the test program is 2 GB to store the input and output, but actual memory usage by the library is at most 32K for the hash table.

The test program was compiled with g++ 4.4.5 -O3 in 64 bit Ubuntu Linux and linked to Snappy after running “./configure; make”. Use -DMODE=Compress or -DMODE=Uncompress to create a compressor or decompresser respectively.

### .5322 bpe

bpe is a free, experimental file compressor by Philip Gage. It was published as source code only in “The C Users Journal” in Feb. 1994. It uses byte pair encoding. The input is divided into blocks which are iteratively compressed by finding the most frequent byte pair and replacing it with another byte value that never occurs in the block, until all of the unused bytes are used up or no pair occurs more than a minimum number of times.

For testing, I compiled with gcc 4.4.0 -s -O2 -march=pentiumpro -fomit-frame-pointer. I used the recommended compression options “5000 4096 200 3” and did not try to find a better combination. The options say to use a maximum block size of 5000, a hash table size of 4096 (it is recommended to be 5% to 20% smaller than the block size), a maximum of 200 different byte values per block, and do not replace pairs that occur less than 3 times.
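One substitution step of this kind can be sketched in C (a simplified illustration, not Gage's code; pair counts here are overlapping, which slightly overcounts runs like “aaa”):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* One byte-pair-encoding step over a block: find the most frequent
   adjacent pair and a byte value unused in the block, then replace
   every occurrence of the pair with that byte. Returns the new block
   length, or 0 if no pair occurs at least min_count times or no byte
   value is free. (A real compressor also records the substitution.) */
size_t bpe_step(uint8_t *blk, size_t n, int min_count) {
    static int count[65536];
    int used[256] = {0};
    if (n < 2) return 0;
    memset(count, 0, sizeof count);
    for (size_t i = 0; i < n; i++) used[blk[i]] = 1;
    for (size_t i = 0; i + 1 < n; i++) count[blk[i] << 8 | blk[i + 1]]++;
    int best = 0, bestpair = 0, sub = -1;
    for (int p = 0; p < 65536; p++)
        if (count[p] > best) { best = count[p]; bestpair = p; }
    for (int b = 0; b < 256; b++) if (!used[b]) { sub = b; break; }
    if (best < min_count || sub < 0) return 0;
    uint8_t hi = (uint8_t)(bestpair >> 8), lo = (uint8_t)(bestpair & 0xff);
    size_t j = 0;
    for (size_t i = 0; i < n; j++) {
        if (i + 1 < n && blk[i] == hi && blk[i + 1] == lo) {
            blk[j] = (uint8_t)sub;  /* replace the pair */
            i += 2;
        } else {
            blk[j] = blk[i++];      /* keep the literal */
        }
    }
    return j;
}
```

For example, one step over the 8 byte block "abababab" replaces each "ab" with a single unused byte value, halving the block to 4 bytes.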

### .5326 kwc

kwc (discussion) is a free GUI file compressor by sportman, Jan. 18, 2010. The input is divided into strings of 6 bytes each, and each value is replaced with a dictionary code. The dictionary size is not bounded, so usage increases with the size and randomness of the input. enwik9 uses 668 MB for compression and 333 MB for decompression.

### .5427 bpe2

bpe2 v1 is a free, experimental, open source (public domain) file compressor by Will, Jan. 15, 2010. It uses byte pair encoding. It divides the input into blocks of 8192 bytes which are compressed independently. A block is compressed by finding the byte pair which occurs most frequently and a byte value which never occurs in the block, and then substituting that byte value for each occurrence of the pair. The byte pair and its replacement are appended to the block as a 3 byte header. The process is repeated until either there are no unused byte values left, or there is no pair that occurs at least 4 times. The block is output with an additional 2 byte header to indicate its size.

bpe2 v2, Jan. 15, 2010, uses a faster algorithm to find the most frequent byte pair during compression.

bpe2 v3, Feb. 12, 2010, has some optimizations. (discussion)

The programs were tested by compiling with g++ 4.4.0 -O2 -s -march=pentiumpro -fomit-frame-pointer under Windows Vista on a 2.0 GHz T3200.

### .5586 fpaq0f2

fpaq is a free, experimental command line file compressor with source code (in assembler) by Nikolay Petrov, Feb. 20, 2006. It is a faster implementation of fpaq0 by Matt Mahoney (Sept. 3, 2004) maintaining archive compatibility. fpaq is an order-0 arithmetic coder which models independent, identically distributed (i.i.d.) characters, and is not intended as a general purpose compressor. Its purpose is to test the efficiency of different arithmetic coding algorithms. There are several variants.

fpaq0 uses a 32-bit carryless arithmetic coder to code binary decisions and output one byte at a time. fpaq1 uses a 64 bit coder. fpaq0b uses a 32 bit coder but counts carries and outputs a bit at a time to achieve greater internal precision. fpaq0s improves on fpaq0b by using the compressed EOF to encode the uncompressed EOF, unlike the other models which code an extra bit for each byte to indicate the end. fpaq02 extends this idea to 64 bits. All programs except fpaq are C++ source code and compiled as follows with MinGW 3.4.2 (where %1 is the program name):

fpaq0p by Ilia Muraviev, Apr. 15, 2007, uses an adaptive order 0 model. Instead of keeping a 0,1 count for each context, it keeps a probability and updates it by adjusting by 1/32 of the error. This is faster because it avoids a division instruction.

fpaqa by Matt Mahoney, Dec. 15, 2007, is the first implementation of Jarek Duda’s asymmetric binary coder, described in section 3 of Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms, 2007.

The model is based on fpaq0p (adaptive order 0), but with probabilities modeled with 16 bits resolution (instead of 12) to improve compression. The source (GPL) can be compiled with -DARITH to substitute the arithmetic coder from fpaq0 and fpaq0p for the asymmetric coder.

An asymmetric coder has a single N-bit integer state variable x, as opposed to two variables (low and high) in an arithmetic coder, which allows a lookup table implementation. In fpaqa, N=10. A bit d (0 or 1) with probability q = P(d = 1) (0 < q < 1, a multiple of 2^-N) is coded:

  if d = 0 then x := ceil((x+1)/(1-q)) - 1
  if d = 1 then x := floor(x/q)

To decode, given x and q:

  d := ceil((x+1)*q) - ceil(x*q)  (which is 0 or 1)
  if d = 0 then x := x - ceil(x*q)
  if d = 1 then x := ceil(x*q)

x is maintained in the range 2^N to 2^(N+1)-1 by writing the low bits of x prior to encoding d and reading into the low bits of x after decoding. Because compression and decompression are reverse operations of each other, they must be performed in reverse order. The encoder divides the input into blocks of size B=500K bits, saves the predictions (q) in a stack, then encodes the bits in reverse order to a second stack. The block size and final state x are then written, followed by the compressed bits in the second stack in the reverse of the order in which they were coded. The decompresser runs everything in the forward direction, reading the saved x at the beginning of each block.

To reduce the size of the coding tables, q is quantized to R=7 bits on a nonlinear scale with closer spacing near 0 and 1. The quantization is such that ln(q/(1-q)) is a multiple of 1/8 between -8 and 8.

In the source, N, R, and B are adjustable parameters up to N=12, R=7. Larger values improve compression at the expense of speed and memory. fpaqa uses 2^(N+R+2) + 5*B/4 bytes for compression and 2^(N+R+1) bytes for decompression.
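The coder can be tried end to end with exact integer arithmetic (direct calculation, as fpaqb later does, rather than fpaqa's lookup tables). This sketch (mine, not the fpaq sources) takes q = p1/M with M = 2^12, lets the state grow without renormalization, and encodes bits in reverse so that they decode in forward order:

```c
#include <assert.h>
#include <stdint.h>

#define M 4096  /* probability precision: q = p1/M */

/* Encode bit d with P(d=1) = p1/M into state x:
   d=1: x := floor(x/q);  d=0: x := ceil((x+1)/(1-q)) - 1 */
uint64_t abc_encode(uint64_t x, int d, uint32_t p1) {
    if (d) return x * M / p1;
    return ((x + 1) * M + (M - p1) - 1) / (M - p1) - 1;
}

/* Decode one bit and restore the previous state:
   d := ceil((x+1)*q) - ceil(x*q) */
int abc_decode(uint64_t *x, uint32_t p1) {
    uint64_t cx  = (*x * p1 + M - 1) / M;        /* ceil(x*q)     */
    uint64_t cx1 = ((*x + 1) * p1 + M - 1) / M;  /* ceil((x+1)*q) */
    int d = (int)(cx1 - cx);
    *x = d ? cx : *x - cx;
    return d;
}
```

Encoding a bit string in reverse from an initial state and decoding forward recovers the bits exactly, returning the state to its initial value.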

fpaqb (Matt Mahoney, Dec. 17, 2007, updated to ver 2 on Dec. 20, 2007) is a revision of fpaqa, using the same model, but using an asymmetric coder that uses direct calculations in place of lookup tables to update the state. This allows higher precision to improve compression (eliminating a 0.03% penalty), saving memory, and allowing bytewise I/O (x in range 2^N to 2^(N+8)-1 for N=12). Compression is about the same speed as fpaqa but decompression is 28% faster. Ver. 2 is faster but maintains archive compatibility with ver. 1.

fpaq0m by Ilia Muraviev, Dec. 20, 2007, uses arithmetic coding and 2 order 0 models averaged together, one with fast update (rate 1/16) and one slow (1/64).

fpaq0mw by Eugene Shelwien, Dec. 21, 2007, modifies fpaq0m by using a weighted mix of a fast (1/16) and slow (1/256) adapting order 0 model, where the weight is adjusted dynamically to favor the better model.

fpaqc (Matt Mahoney, Dec. 24, 2007) is fpaqb with some optimizations to the asymmetric coder.

fpaq0pv2 (Ilia Muraviev, Dec. 26, 2007) is a speed optimized version of fpaq0p with arithmetic coding.

fpaq0r by Alexander Rhatushnyak, Jan. 9, 2008, is an order 0 model with arithmetic coding. The model is tuned for better text compression. When compiled with -DSLOWER (fpaq0rs.exe), the arithmetic coder uses higher precision for better compression with a small speed penalty.

fpaq0f by Matt Mahoney, Jan. 28, 2008, uses an adaptive order 0 model which includes the bit history (as an 8 bit state) in each context. (It is controversial whether this is really “order 0”). It uses arithmetic coding with 16 bit probabilities (rather than 12 bits).

fpaq0f2 by Matt Mahoney, Jan. 30, 2008, uses a simplified bit history consisting of just the last 8 bits, plus some minor improvements.

fpaq0pv3 by Nania Francesco Antonio, Apr 04, 2008, is compatible with fpaq0p but 20-30% faster.

fpaq0pv4 including fpaq0pv4nc and fpaq0pv4nc0, are speed optimizations by Eugene Shelwien, Apr. 6, 2008, as discussed here. fpaq0pv4 is compatible with fpaq0p but faster. The nc and nc0 variants dispense with the extra EOF flags in each byte.

fpaq0pv5 by Nania Francesco Antonio, Apr 6, 2008, is a modification to fpaq0pv4.

fpaq0pv4a including fpaq0pv4anc and fpaq0pv4anc0 are bug fixes to fpaq0pv4 by Eugene Shelwien, Apr. 7, 2008, as discussed above.

fpaq0pv4b by Eugene Shelwien, Apr. 18, 2008, replaces the arithmetic coder with sh_v1m port (uses carries), Windows I/O, and other optimizations as discussed here. The Intel-compiled .exe only runs on Intel machines. I tested fpaq0pv4b1 which was patched on May 19, 2008 to run on AMD machines.

### .5793 ppp

ppp is the public domain file compressor specified in RFC 1978 for datagram compression using the Point-to-Point Protocol. The RFC includes an implementation in C written by Dave Rand with modifications by Ian Donaldson and Carsten Bormann, published in Aug. 1996. The program uses order-4 symbol ranking with a queue length of 1 with a 64K hash table without collision detection. Match flags are packed 8 to a byte, followed by up to 8 literals for each incorrect guess. The 16 bit context hash is updated by shifting left 4 bits and XORing with the current byte. The program reads from a file and outputs to stdout like this:

The original code opens both files in text mode, which does not work in Windows. For testing, I modified 3 lines of code to open the input and output files in binary mode as follows:

I compiled using gcc 3.4.2 -O3 -fomit-frame-pointer -march=pentiumpro and packed with UPX (linked above, Feb. 11 2008). Times are wall times. I did not use timer 3.01 because its output would be redirected to the output file. Process times are about 50% of wall time based on watching Task Manager.

### .5805 ksc

ksc (keyword shuffle compressor) is a free, experimental file compressor for Windows by Sportman, Feb. 13, 2014. It uses symbol ranking of 1, 2, 3, or 4 byte fixed length strings (user selected) encoded from a move-to-front queue with dictionary entries near the front encoded with half the bits of the maximum pointer length. Decoding is in reverse order and therefore requires reading the whole file into memory. Thus, decompression requires more memory than compression, depending on the file size. The option 1..4 selects the string length.

The program uses a Windows GUI when run with no arguments. It was tested with command line arguments under Wine 1.6 in Ubuntu.

### .5902 lzbw1

lzbw1 0.8 is a free, command line file compressor by Bruno Wyttenbach, Apr. 26, 2009. It uses LZP and is derived from LZP2. It takes no options.

### .5981 lzp2

lzp2 0.1 is a free file compressor by Yann Collet, Apr. 17, 2009. It uses LZP. There are no compression options. There is a smaller, separate program (unlzp2) that only decompresses.

lzp2 0.7c was released Oct. 10, 2009. Run times are dominated by disk access, not included below.

### .6368 NTFS

NTFS disk compression is used in Microsoft Windows when the “compress files to save disk space” checkbox is checked in the folder properties dialog box. Disk compression was introduced in NTFS v1.2 in mid 1995 according to Wikipedia. The compression format is called LZNT1. The algorithm is proprietary. However, it was reverse engineered (in Russian, see also here). The algorithm is LZSS (similar to lzrw1). The format consists of groups of 8 symbols each preceded by 8 flag bits packed into a byte. A 0 bit indicates a literal symbol, which is decoded by copying it. A 1 bit indicates a 2 byte offset-length pair which is decoded by going back ‘offset’ bytes in the output and copying the next ‘length’+3 bytes. An offset-length pair uses a variable number of bits allocated to the offset (from 4 to 12) depending on the position in the file, and any remaining bits allocated to the length of the match. A 12 bit offset would correspond to a 4 KB block on disk.
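Decoding one flag group of this format can be sketched as follows (an illustration based on the description above, assuming a fixed 12 bit offset and 4 bit length split; the real format varies the offset width from 4 to 12 bits with position):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Decode one group: a flag byte then 8 symbols. A 0 bit is a literal;
   a 1 bit is a 16-bit little-endian pair, taken here as a 12-bit
   offset (high bits) and 4-bit length (length + 3 bytes are copied).
   Returns the number of input bytes consumed; *outpos advances. */
size_t lznt1_group(const uint8_t *in, uint8_t *out, size_t *outpos) {
    uint8_t flags = in[0];
    size_t i = 1;
    for (int b = 0; b < 8; b++) {
        if (flags >> b & 1) {
            uint16_t v = (uint16_t)(in[i] | (in[i + 1] << 8));
            i += 2;
            size_t off = v >> 4, len = (v & 15u) + 3;
            for (size_t k = 0; k < len; k++, (*outpos)++)
                out[*outpos] = out[*outpos - off];  /* copies may overlap */
        } else {
            out[(*outpos)++] = in[i++];
        }
    }
    return i;
}
```

As with other LZ77 decoders, the byte-by-byte copy lets a match overlap its own output, so two literals "ab" followed by a pair (offset 2, stored length 2, i.e. 5 bytes) expand to "abababa".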

I tested by copying enwik9 between folders with the compression turned on in one folder, and compared with times to copy between two folders both with compression turned off. I tried each copy twice and took the second time, which was at most 1 second faster than the first copy. I used the test machine in note 26 running Windows Vista Home Premium SP1 32 bit with 3 GB memory and a 200 GB disk between folders on the same partition. Copying between two uncompressed folders takes 41 seconds. Copying to a compressed folder takes 51 seconds, or a difference of 10 seconds. Copying from a compressed folder takes 35 seconds. I estimated 9 seconds for decompression by assuming that copying the compressed file directly would take 26 seconds based on its size of 636 MB. (This is probably wrong because the file would be cached in memory uncompressed, but the alternative is a negative time for decompression. Copying either the compressed or uncompressed file to NUL: takes 2 seconds on the second try).

Times were recorded with a watch because timer 3.01 will not time built-in commands like ‘copy’. Task Manager does not show any processes consuming CPU time or memory during copying. However, memory use should be insignificant (under 16 KB) for LZSS with 4 KB blocks. Sizes are as reported by right clicking on the compressed file in Explorer as ‘size on disk’. The size of the decompression program is not known.

### .6373 shindlet

shindlet (mirror) is a series of 3 free command line file compressors by Piotr Tarsa. All are order-0 arithmetic coders with identical models written in assembler (included). The three variants are fs (frequency sorting), bt (binary tree), and sl (linear search). All three produce identical sized compressed files. In addition, the compressed output of bt and sl are identical. Results for all 3 variations are below. Comp and Decomp show global times including disk I/O in ns/byte, with CPU (process) times in parentheses. Date is the latest program timestamp in the distribution, not the release date.

### .6445 arb255

arb255 is a free, experimental command line file compressor with source code available by David A. Scott, July 28, 2004. It is a bijective order-0 arithmetic coder, best suited for i.i.d. bytes (like fpaq). It takes no arguments except the input and output filenames. The decompresser is unarb255.exe.

### .6483 compact

compact (man page) is a file compressor by Colin L. Mc Master, Feb. 28, 1979. It was written in K&R C for VAX/PDP11 and SUN under Berkeley UNIX. It uses adaptive order-0 Huffman coding. The (separate) decompression program rebuilds the Huffman tree, so it need not be transmitted.

Neither program takes options. compact deletes the input file and creates an output file with a .C extension. uncompact deletes the compressed file and restores the original. compact was later superseded by compress, which gives better compression.

For this test, compact was compiled using the provided Makefile and tested under Ubuntu Linux. Minor source code corrections were needed to compile under gcc. However, the decompresser size is based on the original code. A port to Windows would be possible but would require more source code changes.

### .6942 TinyLZP

TinyLZP is a free, open source (GPL v3) file compressor by David Werecat, Oct. 12, 2012. It uses LZP and takes no options. The first entry is compiled from source using “cl /O2 tinylzp.c /I.” using Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.30319.01 for 80x86 and tested on a 2.0 GHz T3200 under 32 bit Vista. The second entry, TinyLZP-x86-SSE2.exe, is supplied and requires MSVCR110.dll (Visual Studio 2012 C++ runtime) to run.

### .6955 smile

smile (Nov. 5, 2004) and smile256 (Dec. 5, 2004) (discussion) are free, open source file compressors by Andrei Frolov. These programs are unique for their small executable size. smile consists of two programs: a 250 byte compressor, smile_e.com and a 207 byte decompresser, smile_d.com. smile256 is both a compressor and a decompresser in 256 bytes. This includes code to parse the command line and open the input and output files. Source code is in 16 bit assembler for DOS. Program size is given for the uncompressed .com files because zip makes them larger.

Both programs use a move-to-front algorithm with the queue position encoded using an interleaved Elias Gamma code. The position of the current byte in the queue (1..256) is encoded by dropping the leading 1 bit, preceding each of the remaining bits with a 0 bit, then terminating with a 1 bit. After encoding, the byte value is moved to the front of the queue. smile256 also encodes EOF as 257, resulting in a file that is usually 1 byte larger than smile_e.
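The queue-position code described above can be sketched as follows (bits are kept as a string for clarity; the real programs pack them, and smile256's EOF symbol is omitted):

```python
# Sketch of smile's move-to-front coder with the interleaved Elias gamma
# code described above (bit packing and smile256's EOF symbol omitted).

def mtf_gamma_encode(data: bytes) -> str:
    queue = list(range(256))          # MTF queue, most recent byte in front
    bits = []
    for b in data:
        pos = queue.index(b) + 1      # queue position in 1..256
        for bit in bin(pos)[3:]:      # drop the leading 1 bit...
            bits.append('0' + bit)    # ...prefix each remaining bit with 0
        bits.append('1')              # ...and terminate with a 1 bit
        queue.insert(0, queue.pop(pos - 1))
    return ''.join(bits)

def mtf_gamma_decode(bits: str) -> bytes:
    queue = list(range(256))
    out, i = bytearray(), 0
    while i < len(bits):
        pos = 1
        while bits[i] == '0':         # each '0 b' pair appends bit b
            pos = pos * 2 + int(bits[i + 1])
            i += 2
        i += 1                        # consume the terminating 1 bit
        out.append(queue[pos - 1])
        queue.insert(0, queue.pop(pos - 1))
    return bytes(out)
```

A byte already at the front of the queue costs a single bit, which is why move-to-front pairs well with repetitive data.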

### .7594 barf

barf is a free, open source file compressor by Matt Mahoney, Sept. 21, 2003. It was written as a joke to debunk claims of recursive compression. The algorithm is as follows:

1. If the input is one of the 14 files of the Calgary corpus, the output is coded as 1 byte to indicate which file.
2. If not, then the input is compressed with a byte oriented LZ77 code, in which bytes 0-31 code a literal of that length, and 32-255 code a match of length 2 and offset 0-223.
3. If step 2 does not compress, then the first byte is removed and a filename extension is added to encode that byte.
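A decoder for the step-2 code might look like the sketch below. One detail is my assumption, not taken from barf's source: that the offset counts back from the end of the output to the start of the match, so offset 0 repeats the last two bytes.

```python
# Hypothetical decoder for barf's byte-oriented LZ77 code (step 2).
# Assumption: the offset measures back from the end of the output to the
# match start, so offset 0 repeats the last two bytes written.

def barf_lz77_decode(code: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(code):
        b = code[i]
        i += 1
        if b <= 31:                 # bytes 0-31: literal run of length b
            out += code[i:i + b]
            i += b
        else:                       # bytes 32-255: match of length 2
            start = len(out) - 2 - (b - 32)   # offset in 0..223
            out.append(out[start])
            out.append(out[start + 1])
    return bytes(out)
```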

The main table shows the size and total process time after 2 compression passes. Each further pass will “compress” the file by one byte. The decompresser source code size includes the Calgary corpus, which is needed to build the executable. (barf.exe is 1,009,274 bytes after packing with UPX and zip.) Results by pass are shown below. Times are process times (Timer 3.01) with actual wall times in parentheses.

A similar program, barfest.exe, compresses the million random digits file to 1 byte, rather than the Calgary corpus. The decompresser size is 455,755 bytes (zipped).

### .9956 arb2x

arb2x v20060602 is a free, experimental command line file compressor with source code available by David A. Scott, updated June 2, 2006. It is a bitwise bijective order-0 arithmetic coder, best suited for i.i.d. bits. It takes no arguments except the input and output filenames. The decompresser is unarb2x.exe.

## Failed and Pending Tests

### hipp

hipp v0.5819 is an experimental command line file compressor with source code available by Bogatov Roman, Aug. 19, 2005. It uses context mixing with ordinary and optionally sparse (fixed gap) contexts, using a suffix tree with path compression to store statistics. The options are:

• /m: memory limit in MB (default /m2048).

• /o: primary context order, i.e. the depth of the suffix tree with path compression (default /o256).

• /do: maximum deterministic order, i.e. the actual order with path compression (default /do256, do >= o).

• /so: number of sparse contexts (default /so0). Sparse contexts are useful for binary data but generally not text.

Memory usage increases with the size of the file and with /o and /so (but not /do), and if the memory limit is exceeded then an error occurs. Unfortunately enwik9 cannot be compressed at all because initialization requires more than 800 MB. Some results for enwik8:

Zipped size: C++ source (commented in Russian) = 98,765, exe = 36,724.

### ppmz2

ppmz2 v0.81 is a free, experimental, open source file compressor by Charles Bloom, May 9, 2004. It uses PPM. It takes several compression options but only the defaults were tested. Memory usage grows as the program runs. On enwik9 it runs out of memory.

### XMill

XMill 0.8 is an open source command line XML preprocessor/compressor by AT&T, written by Dan Suciu, Hartmut Liefke, and Hedzer Westra in March, 2003. It works by sorting by XML tags to bring similar content together, then compressing with gzip, bzip2, or ppmd. Optionally it can (in theory) output the preprocessed data as input to another compressor.

Unfortunately, the compressor will not accept truncated XML files such as the test files in this benchmark. It can be made to work by appending the following 38 bytes to enwik8 or enwik9 to create a well-formed XML file (a trailing newline is optional but was not used):

However, decompression succeeds for enwik8 but fails for enwik9; the decompresser (xdemill) reports “corrupt file”. (Failed values in parentheses, timed for enwik8.)

The -w option preserves whitespace. Otherwise compression is lossy. -P selects ppmdi compression (bzip2, gzip and no compression are also available). -9 selects maximum compression. -m800 allows 800 MB of memory.

In theory, using no compression (-N) would allow XMill to be used as a preprocessor to other compressors. However, the decompresser will not accept either enwik8 or enwik9 (with closing tags appended) if processed with -N (reports “corrupt file”).

xmill 0.9.1 (Mar. 15, 2004) also fails to decompress enwik9 and fails to decompress either file with -N.

### lzp3o2

lzp3o2 (LZP 3 with order 2 literal coding) is one of a family of open source file compressors by Charles Bloom, originally written in 1995. The algorithm is described in a paper submitted to DCC’96. lzp3o2 uses LZP compression with order 2 modeling of literals and arithmetic coding. The tested version of the source code is dated Aug. 25, 1996 and compiled for Windows Oct. 10, 1998. The compiled distribution from here was tested.
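The LZP idea can be sketched as follows. This minimal byte-oriented variant (order-3 contexts, a single last-position table, plain length bytes) is my own simplification; lzp3o2 itself uses a hashed context table and arithmetic-codes literals with an order-2 model.

```python
# Minimal byte-oriented LZP sketch (a simplification, not Bloom's code):
# the last 3 bytes predict a position; if that context was seen before,
# emit a match length against the predicted text, else a plain literal.

def lzp_compress(data: bytes) -> bytes:
    table, out, i = {}, bytearray(), 0
    while i < len(data):
        ctx = data[max(0, i - 3):i]
        p = table.get(ctx)                # predicted match position
        table[ctx] = i
        if p is None:
            out.append(data[i]); i += 1   # context never seen: literal
        else:
            n = 0                         # match length at predicted position
            while n < 255 and i + n < len(data) and data[p + n] == data[i + n]:
                n += 1
            out.append(n)
            if n == 0:
                out.append(data[i]); i += 1
            else:
                i += n
    return bytes(out)

def lzp_decompress(code: bytes) -> bytes:
    table, out, j = {}, bytearray(), 0
    while j < len(code):
        ctx = bytes(out[max(0, len(out) - 3):])
        p = table.get(ctx)
        table[ctx] = len(out)
        if p is None:
            out.append(code[j]); j += 1
        else:
            n = code[j]; j += 1
            if n == 0:
                out.append(code[j]); j += 1
            else:
                for k in range(n):        # overlapping copies work byte by byte
                    out.append(out[p + k])
    return bytes(out)
```

Because the predicted position comes from the context table rather than the stream, no match offsets are ever transmitted, which is the source of LZP's low overhead.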

All programs report “malloc failed” on enwik9. The LZP algorithms use very little memory themselves, but these implementations allocate input and output buffers all at once. This fails for enwik9 because of the 2 GB process limit in Windows.

lzp1 is both a compressor and decompresser. To decompress, use -d as the third argument. lzp2 is a compressor only. There is a source code decompresser “lzp2d” but I was unsuccessful in compiling it. It allows an unexplained option “HuffType” which I did not experiment with. lzp3o2 has a separate decompresser “lzp3o2d.exe” included in the distribution.

## History

This page is maintained by Matt Mahoney, mattmahoneyfl (at) gmail (dot) com.
