cbtpc - Convert PGM or PPM file to compressed BTPC.
dbtpc - Convert compressed BTPC to PGM or PPM.
Their usage is:
cbtpc input_pic output_file [quality]
dbtpc input_file output_pic
The optional quality parameter is a number between 0 and 100. Default is
Quality values roughly approximate JPEG quality values, but 100 is lossless.
This version only reads and writes PGM and PPM raw format files (magic number
"P5" or "P6").
The encoder and decoder share a common data structure for the pyramid, a lookup-table implementation of the predictor, and a compactor data type for the Huffman coders. They also share a large amount of code for traversing the pyramid.
The image pyramid is not separated into bands. Rather the values for each point in the pyramid (either reconstructed pel values, or prediction errors) are kept at the appropriate coordinates in a picture-sized array. Encoder and decoder traverse the pyramid by subsampling this array with the appropriate mesh of samples for the current level. BTPC 2 does not process bands strictly from the coarsest to the finest. Rather it handles bands in pairs, coding two siblings from a "square-sampled" band followed immediately by their four children in the next finer "diamond-sampled" band. The reason for this is simply time efficiency, to reduce the number of accesses to the pyramid array.
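The subsampling traversal can be sketched as follows. This is a minimal illustration, not the BTPC source: it assumes a row-major picture-sized array, with level k's square mesh at coordinates that are multiples of 2^k and its diamond mesh at the quincunx offsets of 2^(k-1); the type and function names are invented for the sketch.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// The whole pyramid lives in one picture-sized array; a band is visited
// by stepping through that array with the band's sampling mesh.
struct Pyramid {
    int width, height;
    std::vector<int> data;                       // row-major, picture-sized
    int& at(int x, int y) { return data[(std::size_t)y * width + x]; }
};

// Visit every pel of level k's "square-sampled" band.
void visitSquare(Pyramid& p, int level, const std::function<void(int,int)>& f) {
    int step = 1 << level;
    for (int y = 0; y < p.height; y += step)
        for (int x = 0; x < p.width; x += step)
            f(x, y);
}

// Visit every pel of level k's "diamond-sampled" band: the quincunx
// lattice offset by half a step in both x and y.
void visitDiamond(Pyramid& p, int level, const std::function<void(int,int)>& f) {
    int step = 1 << level, half = step >> 1;
    for (int y = half; y < p.height; y += step)
        for (int x = half; x < p.width; x += step)
            f(x, y);
}
```

Pairing a square-band pel with its diamond-band children, as BTPC 2 does, then amounts to interleaving these two loops at the same level rather than running them as separate passes.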
The predictor is implemented as a lookup table which converts the signs of pel differences around the predictor square into shape indices. These are interpreted to form shape-based predictions as described earlier.
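The sign-to-index mechanism can be sketched like this. The bit layout, the number of shapes, and the fallback rules below are illustrative assumptions only; the real BTPC tables distinguish many more shapes (edges, corners, ridges) than this toy version.

```cpp
#include <algorithm>

inline int signBit(int diff) { return diff > 0 ? 1 : 0; }

// Pack the signs of the six pairwise differences between the four pels of
// the predictor square into a 6-bit shape index (64 possible shapes).
int shapeIndex(int a, int b, int c, int d) {
    return (signBit(a - b) << 0) | (signBit(a - c) << 1) |
           (signBit(a - d) << 2) | (signBit(b - c) << 3) |
           (signBit(b - d) << 4) | (signBit(c - d) << 5);
}

// Interpret the index as a prediction rule. This toy rule only separates
// monotone neighbourhoods (predict the mean) from the rest (median
// fallback); it is not the BTPC shape interpretation.
int predict(int a, int b, int c, int d) {
    int idx = shapeIndex(a, b, c, d);
    if (idx == 0 || idx == 63)             // monotone sign pattern
        return (a + b + c + d + 2) / 4;    // rounded mean
    int v[3] = {a, b, c};
    std::sort(v, v + 3);
    return v[1];                           // median of three neighbours
}
```

The point of the table form is that all the sign tests collapse into one index computation and one lookup per pel, rather than a cascade of conditionals.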
The compactor data type is a translation of code from Unix compact into a C++ class. Several coders are used simultaneously; these are independent objects of type compactor. In the evaluation code, the statistics are not updated for every input sample: only every fourth sample is added to the symbol counts. Updating the Huffman tree is relatively time consuming, and experiments have shown that one-in-four sampling gives the best trade-off between reduced computation time and compression efficiency.
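The sampling logic can be sketched as below. This class is a stand-in for the real compactor, with the coding and tree-rebalancing steps elided; only the one-in-four counting is the point, and all names are assumptions.

```cpp
#include <vector>

// Stand-in for the adaptive Huffman compactor: counts are bumped for only
// one input symbol in four, because rebalancing the Huffman tree on every
// symbol is relatively expensive for little compression gain.
class Compactor {
public:
    explicit Compactor(int nSymbols) : count_(nSymbols, 0), tick_(0) {}

    void encode(int symbol) {
        // ... emit the current code for `symbol` (omitted) ...
        if (++tick_ == 4) {              // update statistics every 4th sample
            tick_ = 0;
            ++count_[symbol];
            // ... rebalance the adaptive Huffman tree here (omitted) ...
        }
    }

    long count(int symbol) const { return count_[symbol]; }

private:
    std::vector<long> count_;
    int tick_;
};
```

Because each coder is an independent object, each band (or pair of bands) can keep its own adapting statistics without the coders interfering with one another.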
Data modeling in the encoder takes place in three phases. First, starting from the coarsest subsampled band, L8, the encoder generates the prediction difference pyramid from H8 down to H1. Next, beginning at the finest difference band, H1, it propagates zeros backwards through the binary tree to place the leaf codewords. Finally, it DPCM-codes L8 and writes it, followed by H8 to H1, via the entropy coders, to the compressed file. Flags are used to ensure that none of the descendants of a leaf pel are written out: they are implicitly zero in the coded output.
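The zero-propagation phase can be sketched as a post-order walk over the binary tree. Modelling each pyramid pel as an explicit node is an illustrative assumption (the real coder works in the pyramid array); the recursion visits the finest band first and decides leaf status on the way back up.

```cpp
// Each pyramid pel is modelled as one node of the binary tree. A node
// becomes a LEAF codeword when its prediction difference and everything
// below it are zero, so none of its descendants need to be written.
struct Node {
    int diff = 0;                       // prediction difference at this pel
    bool leaf = false;                  // set when the subtree is all zero
    Node* child[2] = {nullptr, nullptr};
};

// Returns true when the whole subtree is zero. Children are examined
// before the node itself (post-order), i.e. finest band first, so leaf
// status propagates backwards from H1 toward H8.
bool propagateZeros(Node* n) {
    if (!n) return true;
    bool allZero = (n->diff == 0);
    for (Node* c : n->child)
        allZero = propagateZeros(c) && allZero;   // recurse unconditionally
    n->leaf = allZero;
    return allZero;
}
```

During the final write phase, a pel marked as a leaf is emitted as a single leaf codeword and its entire subtree is skipped.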
The decompression of binary tree coded images requires only one pass. Following entropy decoding, the coarsest band is recovered first and written immediately to the appropriate pels of the output image. Each successive band is then received, and the corresponding grid of pels is recovered and inserted into the image. Receipt of a leaf codeword causes the current pel to be set to zero and its children pels to be set to a flag value. When the children's band is subsequently processed, the flagged pels are reset to zero and their own children are marked with flags; no input data is read for the flagged pels. In this way leaf codewords propagate zeros from each level to the next without requiring any additional memory.
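The flag mechanism can be sketched on a simplified 1-D model, where band k holds 2^k pels and pel i's two children are pels 2i and 2i+1 of the next band. The constants, names, and flat-array layout are assumptions for the sketch; the real decoder applies the same rule in the 2-D pyramid array.

```cpp
#include <cstddef>
#include <vector>

const int LEAF = -1000;   // codeword meaning "this pel and its subtree are zero"
const int FLAG = -2000;   // internal marker only; never read from the input

// Decode one band in place. Input values are consumed only for pels that
// are not flagged; a LEAF (explicit or implied by a flag) zeroes the pel
// and flags its two children in the next finer band.
void decodeBand(std::vector<int>& band, std::vector<int>& next,
                const std::vector<int>& in) {
    std::size_t pos = 0;
    for (std::size_t i = 0; i < band.size(); ++i) {
        int v;
        if (band[i] == FLAG)
            v = LEAF;                    // implicitly zero: read no input
        else
            v = in[pos++];               // entropy-decoded value
        if (v == LEAF) {
            band[i] = 0;                 // this pel is zero...
            if (2 * i + 1 < next.size()) {
                next[2 * i]     = FLAG;  // ...and so are its children
                next[2 * i + 1] = FLAG;
            }
        } else {
            band[i] = v;
        }
    }
}
```

Because the flags live in the picture-sized array itself, an entire zero subtree costs one codeword in the stream and no extra storage in the decoder.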
No effort has been made to optimize the unshared code of the encoder, but decoder optimization has been given some attention. Both encoding and decoding take longer than the Independent JPEG Group's version 5 software at comparable compression levels, though the decoder difference is quite small at low data rates. The BTPC bottleneck is the adaptive Huffman decoding, which must walk a tree for every input symbol. Work is in progress on an alternative approach (perhaps per-band static Huffman coding) that will allow very fast decoding.