Context Modeling is a divide-and-conquer method based on separating the data into
subsequences by context, so that each subsequence can be approximated by
a simple (usually memoryless) model while still providing good overall precision.
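As a minimal illustration (a Python sketch, not any particular compressor's code),
an order-1 model lets the previous byte select the context, and each context keeps
its own memoryless frequency model:

    from collections import defaultdict
    from math import log2

    def order1_code_length(data: bytes) -> float:
        models = defaultdict(lambda: defaultdict(int))  # context byte -> symbol counts
        bits = 0.0
        prev = 0                                        # assumed initial context
        for sym in data:
            ctx = models[prev]
            total = sum(ctx.values())
            p = (ctx[sym] + 1) / (total + 256)          # Laplace-smoothed estimate
            bits += -log2(p)                            # ideal arithmetic-coding cost
            ctx[sym] += 1                               # adaptive update
            prev = sym
        return bits

    print(order1_code_length(b"abracadabra" * 20))      # well below 8 bits/byte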
The most widely used CM subclass is PPM, which concentrates on a single context
model for performance reasons and switches to other contexts only when the main
model fails. This lets PPM compressors maintain competitive speed at
the cost of some prediction imprecision that shows up as redundancy.
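A rough sketch of the escape mechanism (simplified: PPMA-style fixed escape
counts and no exclusions; real PPM variants differ in these details):

    from collections import defaultdict

    class TinyPPM:
        def __init__(self, max_order=2):
            self.max_order = max_order
            # one table of contexts per order; each context holds symbol counts
            self.models = [defaultdict(lambda: defaultdict(int))
                           for _ in range(max_order + 1)]

        def _key(self, history, order):
            return history[len(history) - order:] if order <= len(history) else history

        def prob(self, history: bytes, sym: int) -> float:
            p_escape = 1.0
            for order in range(self.max_order, -1, -1):
                ctx = self.models[order][self._key(history, order)]
                total = sum(ctx.values())
                if total == 0:
                    continue                  # empty context: skip (simplification)
                if ctx[sym] > 0:
                    return p_escape * ctx[sym] / (total + 1)
                p_escape *= 1.0 / (total + 1) # escape to a shorter context
            return p_escape / 256             # uniform fallback over the alphabet

        def update(self, history: bytes, sym: int):
            for order in range(self.max_order + 1):
                self.models[order][self._key(history, order)][sym] += 1

    ppm = TinyPPM()
    history = b"abrac"
    p = ppm.prob(history, ord("a")); ppm.update(history, ord("a"))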
Another known subclass is Context Mixing, which linearly combines the predictions
of several submodels.
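A minimal sketch of such a mixer for a binary alphabet, with an assumed
gradient-style weight update (actual mixers vary in their learning rules):

    class LinearMixer:
        def __init__(self, n_models, lr=0.02):
            self.w = [1.0 / n_models] * n_models   # start as a plain average
            self.lr = lr

        def mix(self, probs):
            p = sum(w * q for w, q in zip(self.w, probs))
            return min(max(p, 1e-6), 1.0 - 1e-6)   # keep p strictly inside (0,1)

        def update(self, probs, bit):
            err = bit - self.mix(probs)
            # nudge each weight toward the submodels that predicted better
            self.w = [w + self.lr * err * q for w, q in zip(self.w, probs)]

    mixer = LinearMixer(2)
    p1 = mixer.mix([0.9, 0.4])     # combined P(bit = 1)
    mixer.update([0.9, 0.4], 1)    # adapt after observing the actual bit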
More complex schemes, in which secondary models use the primary predictions
as context, seem to remain nameless.
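Still, the idea is simple enough to sketch: the quantized primary prediction
indexes a small table of adaptive probabilities (practical implementations
typically interpolate between adjacent table entries):

    class SecondaryModel:
        def __init__(self, buckets=33, rate=0.05):
            self.n = buckets
            # start as the identity mapping: output = input prediction
            self.table = [i / (buckets - 1) for i in range(buckets)]
            self.rate = rate

        def refine(self, p_primary):
            return self.table[round(p_primary * (self.n - 1))]

        def update(self, p_primary, bit):
            i = round(p_primary * (self.n - 1))
            self.table[i] += self.rate * (bit - self.table[i])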
Then there is yet another approach, which also approximates complex data with
a simple model, but via a static data transformation (LZ, Block Sorting, Symbol Ranking).
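For example, the move-to-front transform (one simple rank transform, used here
purely as an illustration) maps recently seen symbols to small rank values,
producing output that suits a plain memoryless model:

    def mtf_encode(data: bytes):
        table = list(range(256))       # current symbol ranking
        out = []
        for sym in data:
            r = table.index(sym)       # rank of the symbol = output value
            out.append(r)
            table.pop(r)
            table.insert(0, sym)       # most recent symbol gets rank 0
        return out

    print(mtf_encode(b"aaabbbccc"))    # runs collapse to zeros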
Strange as it may seem, CM too is only a speed/redundancy tradeoff stage,
since the ultimate modelling method is to find a function which generates the given data.
There are even some practical applications of this in cases with a known source model;
the model's parameters can then be determined by maximum likelihood.
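For example, for an i.i.d. binary source the ML parameter estimate is just the
observed bit frequency (a sketch; the cost of transmitting the parameter itself
is ignored here):

    from math import log2

    def bernoulli_ml_bits(bits_seq):
        n1 = sum(bits_seq)
        n = len(bits_seq)
        p = n1 / n                     # ML estimate: observed frequency of ones
        # ideal code length under the fitted model; degenerate all-0/all-1
        # inputs would need smoothing to avoid log2(0)
        return -(n1 * log2(p) + (n - n1) * log2(1 - p))

    print(bernoulli_ml_bits([1, 0, 1, 1, 0, 1, 1, 1]))   # about 6.49 bits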