User:HTinC23/Universal code
This user page is currently being translated from the content at en:Universal code. (27 January 2020)
In data compression theory, a universal code is a kind of prefix code for the positive integers. Such a code maps the positive integers to binary strings, subject to an additional requirement: whatever probability distribution p the integer source has, as long as p is monotonically decreasing (i.e., p(i) ≥ p(i + 1) for every i), the expected length of the codewords (the binary strings produced by the code) is within a constant factor of the expected length under an optimal code for p. A universal code is called asymptotically optimal if the ratio of its expected codeword length to the optimal expected length is a bounded function f(H) of the entropy H, with f(H) tending to 1 as H tends to infinity.
In general, a prefix code for the positive integers assigns longer codewords to larger integers. If the list of messages that might be sent is known in advance, such a code supports efficient communication: sort the messages in the list by decreasing probability, and then transmit only the integer index of the intended message. If the source's probability distribution is known exactly, one may forgo a universal code and instead choose a code better suited to that particular distribution.
Examples of universal and non-universal codes
These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:
- Elias gamma coding *
- Elias delta coding * ‡
- Elias omega coding * ‡
- Exponential-Golomb coding *; the order-0 Exp-Golomb code of a non-negative integer n is the Elias gamma code of n + 1 (used in H.264/MPEG-4 AVC)
- Fibonacci coding
- Levenshtein coding * ‡, the original universal coding technique [1]
- Byte coding where a special bit pattern (with at least two bits) is used to mark the end of the code — for example, if an integer is encoded as a sequence of nibbles representing digits in base 15 instead of the more natural base 16, then the highest nibble value (i.e., a sequence of four ones in binary) can be used to indicate the end of the integer.
- Variable-length quantity
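The relationship between Elias gamma codes and order-0 exponential-Golomb codes noted in the list above can be sketched in Python (the function names are illustrative, not from any library):

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code of a positive integer n: the binary
    representation of n, preceded by floor(log2 n) zeros."""
    if n < 1:
        raise ValueError("Elias gamma is defined for positive integers")
    b = bin(n)[2:]                    # binary representation, MSB first
    return "0" * (len(b) - 1) + b

def exp_golomb0(n: int) -> str:
    """Order-0 exponential-Golomb code of a non-negative integer n,
    which is simply the Elias gamma code of n + 1."""
    return elias_gamma(n + 1)

# Codeword lengths grow as 2*floor(log2 n) + 1, so larger integers
# get longer codewords and the code is prefix-free.
for n in range(1, 9):
    print(n, elias_gamma(n))
```

Because each codeword announces its own length through the run of leading zeros, a decoder knows exactly how many payload bits follow, which is what makes the code self-delimiting.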
These are non-universal codes:
- Unary coding (the codeword for n has length n)
- Rice coding with a fixed parameter
- Golomb coding with a fixed parameter
Their non-universality can be observed by noticing that, if any of these is used to code the Gauss–Kuzmin distribution or the Zeta distribution with parameter s = 2, the expected codeword length is infinite. For example, using unary coding on the Zeta distribution yields an expected length of E(l) = (6/π²) ∑_{k=1}^∞ k · (1/k²) = (6/π²) ∑_{k=1}^∞ 1/k = ∞.
On the other hand, using the universal Elias gamma coding for the Gauss–Kuzmin distribution results in an expected codeword length (about 3.51 bits) near entropy (about 3.43 bits)[2].
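This comparison can be checked numerically. The sketch below assumes the Gauss–Kuzmin probabilities p(k) = −log₂(1 − 1/(k+1)²) and the Elias gamma codeword length 2⌊log₂ k⌋ + 1, truncating the infinite sums:

```python
import math

def gauss_kuzmin(k: int) -> float:
    """Gauss–Kuzmin probability of the integer k >= 1."""
    return -math.log2(1.0 - 1.0 / (k + 1) ** 2)

def gamma_len(k: int) -> int:
    """Length in bits of the Elias gamma codeword for k."""
    return 2 * k.bit_length() - 1      # bit_length() = floor(log2 k) + 1

# Truncate the infinite sums; the tail of the distribution decays
# like 1/k^2, so the error beyond 2**20 terms is negligible here.
K = 2 ** 20
expected_len = sum(gauss_kuzmin(k) * gamma_len(k) for k in range(1, K))
entropy = -sum(p * math.log2(p)
               for p in (gauss_kuzmin(k) for k in range(1, K)))
print(f"expected gamma length ~ {expected_len:.2f} bits")   # about 3.51
print(f"entropy               ~ {entropy:.2f} bits")        # about 3.43
```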
Relationship to practical compression
Huffman coding and arithmetic coding (when they can be used) give compression at least as good as, and often better than, that of any universal code.
However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.
Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver. Using a universal code does not have that overhead.
Each universal code, like each other self-delimiting (prefix) binary code, has its own "implied probability distribution" given by p(i) = 2^(−l(i)), where l(i) is the length of the ith codeword and p(i) is the corresponding symbol's probability. If the actual message probabilities are q(i) and the Kullback–Leibler divergence D_KL(q||p) is minimized by the code with lengths l(i), then the optimal Huffman code for that set of messages will be equivalent to that code. Likewise, how close a code is to optimal can be measured by this divergence. Since universal codes are simpler and faster to encode and decode than Huffman codes (which are, in turn, simpler and faster than arithmetic encoding), the universal code is preferable in cases where D_KL(q||p) is sufficiently small.[3]
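A small worked example of the implied distribution and the divergence it induces; the code lengths and message probabilities below are hypothetical:

```python
import math

# A complete prefix code for four messages,
# e.g. {"a": "0", "b": "10", "c": "110", "d": "111"}.
lengths = {"a": 1, "b": 2, "c": 3, "d": 3}
implied = {m: 2.0 ** -l for m, l in lengths.items()}   # p(i) = 2^(-l(i))

# Hypothetical actual message probabilities q(i).
actual = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}

kl = sum(q * math.log2(q / implied[m]) for m, q in actual.items())
expected_len = sum(actual[m] * l for m, l in lengths.items())
entropy = -sum(q * math.log2(q) for q in actual.values())

# For a complete code (implied probabilities summing to 1), the expected
# codeword length exceeds the entropy by exactly D_KL(q||p).
print(f"D_KL(q||p)    = {kl:.4f} bits")
print(f"excess length = {expected_len - entropy:.4f} bits")
```

Here the divergence is small (about 0.05 bits per message), so this fixed code loses very little relative to a Huffman code built for q.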
For any geometric distribution (an exponential distribution on integers), a Golomb code is optimal. With universal codes, the implicit distribution is approximately a power law such as p(i) ∝ 1/i² (more precisely, a Zipf distribution). For the Fibonacci code, the implicit distribution is approximately p(i) ∝ i^(−q), with q = 1/log₂(φ) ≈ 1.44, where φ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with q = 2/log₂(3) ≈ 1.26. These distributions thus have near-optimal codes with their respective power laws.
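The Fibonacci-code power law can be checked numerically. The sketch below uses an illustrative length function derived from the Zeckendorf representation and compares the implied probability 2^(−l(n)) against n^(−q):

```python
import math

def fib_code_len(n: int) -> int:
    """Length in bits of the Fibonacci (Zeckendorf) codeword for n >= 1:
    one bit per Fibonacci number F_2..F_m, where F_m is the largest
    Fibonacci number <= n, plus the terminating '1' bit."""
    a, b, m = 1, 2, 2          # a = F_2, b = F_3, m = index of a
    while b <= n:
        a, b, m = b, a + b, m + 1
    return m                    # (m - 1) data bits + 1 terminator bit

phi = (1 + math.sqrt(5)) / 2
q = 1 / math.log2(phi)          # ~1.44, exponent of the implied power law

# The implied probability 2^(-l(n)) tracks n^(-q) up to a bounded
# constant factor (the staircase in l(n) makes the ratio oscillate):
for n in (10, 100, 1000, 10000):
    ratio = 2.0 ** -fib_code_len(n) / n ** -q
    print(n, fib_code_len(n), round(ratio, 2))
```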
External links
- Data Compression, by Debra A. Lelewer and Daniel S. Hirschberg (University of California, Irvine)
- Information Theory, Inference, and Learning Algorithms, by David MacKay, has a chapter on codes for integers, including an introduction to Elias codes.
- Кодирование целых чисел has mostly English-language papers on universal and other integer codes.
{{Compression Methods}}
[[Category:Data compression]]
[[Category:Lossless compression algorithms]]