ctxmodel.net

> But when I changed the bit model, compression became worse or better.
> That's why I'm looking for new ideas about counters.

Just think about it. A counter is a model for some subsequence of data bits. So you can extract that sequence and build a separate model for it; then you'd know the performance limit for a counter working on that sequence.
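The idea above can be sketched directly: take the bit sequence a counter would see, measure the code length of a simple linear counter on it, and compare against the static order-0 entropy of the same sequence. This is a minimal illustration, not any particular coder; the counter here is the usual `p += rate * (bit - p)` update, and `rate` is an assumed parameter.

```python
import math

def counter_codelength(bits, rate=0.05):
    # code length (in bits) of an adaptive linear counter on the sequence;
    # p is the predicted probability of a 1-bit, updated after each bit
    p, total = 0.5, 0.0
    for b in bits:
        q = p if b == 1 else 1.0 - p
        total += -math.log2(q)
        p += rate * (b - p)
    return total

def entropy_limit(bits):
    # static order-0 entropy of the same sequence: the performance limit
    # for any memoryless model of it (ignoring parameter cost)
    n1, n = sum(bits), len(bits)
    if n1 in (0, n):
        return 0.0
    p = n1 / n
    return -(n1 * math.log2(p) + (n - n1) * math.log2(1 - p))

bits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
# the counter pays an adaptation cost above the static limit
print(counter_codelength(bits), entropy_limit(bits))
```

The gap between the two numbers is the counter's adaptation overhead on that particular subsequence, which is exactly the kind of per-sequence limit the text is talking about.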

> I think SSE needs a completely different bit model for its entries.
> Maybe a different model for each entry (e.g. different adaptation speeds).

SSE has two basic applications:

1. Probability stabilization. Obviously, any single context sees fewer statistics than the whole data, so it may assign too small probabilities to some rare symbols. The improvement from fixing that is usually insignificant, though it can become significant (as in paq) if we knowingly use a rough model for some reason.
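A minimal sketch of this stabilization use of SSE: quantize the model's input probability into a few buckets, and let each bucket adapt toward the bit frequency actually observed when the model predicted that probability. Bucket count and adaptation rate are assumed parameters, and real implementations typically interpolate between buckets; this version just uses the nearest one.

```python
class SSE:
    # maps a model probability through a small adaptive table; each bucket
    # learns the empirical bit frequency seen at that input probability,
    # pulling over-confident predictions back toward reality
    def __init__(self, buckets=16, rate=0.02):
        self.n = buckets
        self.rate = rate
        # initialize each bucket to the center of its probability range
        self.t = [(i + 0.5) / buckets for i in range(buckets)]

    def _index(self, p_in):
        return min(int(p_in * self.n), self.n - 1)

    def p(self, p_in):
        # stabilized probability used for actual coding
        return self.t[self._index(p_in)]

    def update(self, p_in, bit):
        # move the bucket toward the observed bit (0 or 1)
        i = self._index(p_in)
        self.t[i] += self.rate * (bit - self.t[i])
```

Usage: code each bit with `sse.p(p_model)` instead of `p_model`, then call `sse.update(p_model, bit)`. If the model keeps saying 0.97 while the data keeps producing zeros, that bucket drifts down and the rare symbol stops being nearly free to mispredict.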

2. Context clustering. There are usually too many relevant contexts, so we cannot really collect the full dependencies. Instead, we can build a few models with different context subsets and combine their predictions. But there is another way too: we can merge the statistics of some contexts if we know they're similar. The best way to measure the actual similarity of two contexts is to compare their histories: contexts with similar histories make similar predictions, so they can be merged. Now, a probability is a function of the context history, so it can serve as a context similarity measure; that's why SSE effectively expands contexts and significantly improves compression. But there's no reason to think that a better method doesn't exist, e.g. one using the context histories directly.
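The "direct" alternative hinted at above can be sketched as follows: instead of clustering contexts by their quantized probability (what SSE does), key the shared statistics by each context's recent bit history itself, so all contexts that have recently behaved identically are merged into one statistics slot. The class, the history length, and the Laplace-style initial counts are all assumptions for illustration, not any existing coder.

```python
from collections import defaultdict

class HistoryClustered:
    # merges statistics of contexts by their recent bit histories:
    # contexts whose last hist_len bits match share one counts slot
    def __init__(self, hist_len=8):
        self.hist_len = hist_len
        self.hist = defaultdict(int)               # context -> recent bits
        self.stats = defaultdict(lambda: [1, 1])   # history -> [n0, n1]

    def p1(self, ctx):
        # probability of a 1-bit, from the counts shared by every context
        # that currently has the same recent history
        n0, n1 = self.stats[self.hist[ctx]]
        return n1 / (n0 + n1)

    def update(self, ctx, bit):
        h = self.hist[ctx]
        self.stats[h][bit] += 1
        # shift the new bit into this context's history window
        mask = (1 << self.hist_len) - 1
        self.hist[ctx] = ((h << 1) | bit) & mask
```

Two contexts that were fed the same recent bits end up reading from the same slot, which is exactly the merging-by-similarity idea, just measured on histories directly rather than through a probability.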
