Universal code

[Figure: Fibonacci, Elias gamma, and Elias delta coding versus binary coding]
[Figure: Rice coding with k = 2, 3, 4, 5, 8, and 16 versus binary coding]

In data compression theory, a universal code is a prefix code for the positive integers. Such a code maps the positive integers onto binary codewords, with the additional property that whatever the probability distribution p of the integer source, as long as p is monotonically non-increasing (i.e., p(i) ≥ p(i + 1) for all i), the expected length of the codewords is within a constant factor of the expected length under an optimal code for p. A universal code is called asymptotically optimal if the ratio between its expected codeword length and the optimal expected length is a bounded function f(H) of the entropy H, with f(H) approaching 1 as H approaches infinity.
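In symbols, the two conditions above read as follows (a restatement of the definition; the notation here is ours, writing $\ell(i)$ for the length of the codeword assigned to $i$ and $\ell_p^{*}$ for the codeword lengths of an optimal code for $p$):

    \exists K \;\; \forall p \text{ with } p(i) \ge p(i+1): \qquad
        \mathbb{E}_p[\ell] \;\le\; K \cdot \mathbb{E}_p[\ell_p^{*}],

and the code is asymptotically optimal when

    \frac{\mathbb{E}_p[\ell]}{\mathbb{E}_p[\ell_p^{*}]} \;\le\; f\!\left(H(p)\right),
    \qquad f \text{ bounded}, \quad \lim_{H \to \infty} f(H) = 1.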

In general, a prefix code for the integers assigns longer codewords to larger integers. Such a code can be used to communicate efficiently whenever the set of possible messages is known in advance: list the messages in order of decreasing probability, and then send only the integer index of the intended message. If the source's probability distribution is known exactly, a non-universal code tailored to that distribution may be a better choice than a universal one.
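As a minimal Python sketch of this scheme (the messages and probabilities are made up for illustration; encode_integer stands in for any universal code, such as the Elias gamma coder sketched in the next section):

    # Order messages by decreasing probability, then transmit integer ranks.
    def build_rank_table(message_probs):
        """Map each message to a positive integer rank; 1 = most probable."""
        ordered = sorted(message_probs, key=message_probs.get, reverse=True)
        return {msg: rank for rank, msg in enumerate(ordered, start=1)}

    def transmit(message, rank_table, encode_integer):
        """Send a message as the universal encoding of its rank."""
        return encode_integer(rank_table[message])

    ranks = build_rank_table({"the": 0.5, "cat": 0.3, "sat": 0.2})
    # ranks == {"the": 1, "cat": 2, "sat": 3}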

Universal and non-universal codes

These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:

- Elias gamma coding *
- Elias delta coding * ‡
- Elias omega coding * ‡
- Exp-Golomb coding *, which has Elias gamma coding as a special case
- Fibonacci coding
- Levenshtein coding * ‡
- Byte coding (variable-length quantity), where a reserved bit pattern marks the end of the codeword
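For concreteness, here is a sketch of the first of these, Elias gamma coding, in Python (the standard construction: $\lfloor \log_2 n \rfloor$ zeros followed by the binary form of n; the string-of-bits representation is for illustration only):

    def elias_gamma_encode(n: int) -> str:
        """Elias gamma: floor(log2 n) zeros, then the binary form of n."""
        assert n >= 1
        binary = bin(n)[2:]          # binary form of n, always starts with '1'
        return "0" * (len(binary) - 1) + binary

    def elias_gamma_decode(bits: str) -> int:
        """Inverse of the encoder; reads one codeword from the front."""
        zeros = 0
        while bits[zeros] == "0":
            zeros += 1
        return int(bits[zeros:2 * zeros + 1], 2)

    assert elias_gamma_encode(10) == "0001010"
    assert elias_gamma_decode("0001010") == 10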

These are some non-universal ones:

- unary coding, which is used in the Elias codes
- Rice coding, which is used in the FLAC audio codec and has unary coding as a special case
- Golomb coding, which has Rice coding and unary coding as special cases

Their non-universality can be observed by noticing that, if any of these is used to code the Gauss–Kuzmin distribution or the Zeta distribution with parameter s = 2, the expected codeword length is infinite. For example, using unary coding (in which the codeword for n has length n) on the Zeta distribution with s = 2, where $p(n) = \frac{6}{\pi^2 n^2}$, yields an expected length of

    \mathbb{E}[\ell] \;=\; \sum_{n=1}^{\infty} n \cdot \frac{6}{\pi^2 n^2}
    \;=\; \frac{6}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n} \;=\; \infty.
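This divergence, and the contrast with a universal code, can be checked numerically; a sketch (the truncation points N are arbitrary):

    import math

    def zeta2(n):                      # Zeta distribution, s = 2
        return 6 / (math.pi ** 2 * n ** 2)

    def unary_len(n):                  # unary codeword for n: n bits
        return n

    def gamma_len(n):                  # Elias gamma: 2*floor(log2 n) + 1 bits
        return 2 * (n.bit_length() - 1) + 1

    for N in (10 ** 2, 10 ** 4, 10 ** 6):
        unary = sum(unary_len(n) * zeta2(n) for n in range(1, N))
        gamma = sum(gamma_len(n) * zeta2(n) for n in range(1, N))
        print(N, round(unary, 2), round(gamma, 2))
    # The unary column grows like (6 / pi^2) * ln N, without bound;
    # the Elias gamma column converges to a finite expected length.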
On the other hand, using the universal Elias gamma coding for the Gauss–Kuzmin distribution results in an expected codeword length (about 3.51 bits) near entropy (about 3.43 bits).[2]
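Those two figures can be reproduced numerically; a sketch (truncating the infinite sums at an arbitrary N, beyond which the tail contribution is negligible):

    import math

    def gauss_kuzmin(k):               # P(k) = -log2(1 - 1/(k+1)^2)
        return -math.log2(1 - 1 / (k + 1) ** 2)

    def gamma_len(k):                  # Elias gamma codeword length in bits
        return 2 * (k.bit_length() - 1) + 1

    N = 10 ** 6
    probs = [gauss_kuzmin(k) for k in range(1, N)]
    entropy = -sum(p * math.log2(p) for p in probs)
    expected = sum(p * gamma_len(k) for k, p in enumerate(probs, start=1))
    print(round(entropy, 2), round(expected, 2))   # about 3.43 and 3.51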

Relationship to practical compression

Huffman coding and arithmetic coding (when they can be used) compress at least as well as, and often better than, any universal code.

However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.

Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver. Using a universal code does not have that overhead.

Each universal code, like every other self-delimiting (prefix) binary code, has its own "implied probability distribution" given by $p(i) = 2^{-\ell(i)}$, where $\ell(i)$ is the length of the ith codeword and p(i) is the corresponding symbol's probability. If the actual message probabilities are q(i) and the Kullback–Leibler divergence $D_{\mathrm{KL}}(q \| p)$ is minimized by the code with lengths $\ell(i)$, then the optimal Huffman code for that set of messages will be equivalent to that code. Likewise, how close a code is to optimal can be measured by this divergence. Since universal codes are simpler and faster to encode and decode than Huffman codes (which are, in turn, simpler and faster than arithmetic encoding), a universal code is preferable in cases where $D_{\mathrm{KL}}(q \| p)$ is sufficiently small.[3]
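A sketch of both quantities in Python (the source distribution q below is made up for illustration; gamma_len is the Elias gamma codeword length from above):

    import math

    def gamma_len(i):                  # Elias gamma codeword length in bits
        return 2 * (i.bit_length() - 1) + 1

    # Implied probabilities p(i) = 2^(-l(i)); over all integers these sum to 1
    # for Elias gamma. Truncated to the first few integers here.
    p = {i: 2.0 ** -gamma_len(i) for i in range(1, 8)}
    # p == {1: 0.5, 2: 0.125, 3: 0.125, 4: 0.03125, ...}

    def dkl(q, p):
        """Kullback-Leibler divergence D(q || p), in bits."""
        return sum(qi * math.log2(qi / p[i]) for i, qi in q.items() if qi > 0)

    q = {1: 0.6, 2: 0.2, 3: 0.1, 4: 0.05, 5: 0.025, 6: 0.0125, 7: 0.0125}
    print(round(dkl(q, p), 3))   # small divergence -> gamma code is near-optimal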

For any geometric distribution (an exponential distribution on integers), a Golomb code is optimal. With universal codes, the implicit distribution is approximately a power law such as $1/n^2$ (more precisely, a Zipf distribution). For the Fibonacci code, the implicit distribution is approximately $n^{-q}$, with

    q = 1/\log_2 \varphi \approx 1.44,

where $\varphi$ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with $q = 1 + 1/\log_2 3 \approx 1.63$. These distributions thus have near-optimal codes with their respective power laws.
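The Fibonacci exponent can be read off the codeword lengths (a one-line check, using the fact that the Fibonacci codeword for n has length approximately $\log_\varphi n$):

    2^{-\ell(n)} \;\approx\; 2^{-\log_\varphi n}
    \;=\; 2^{-\log_2 n / \log_2 \varphi}
    \;=\; n^{-1/\log_2 \varphi}
    \;\approx\; n^{-1.44}.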


