
> I think authors of LZ-based programs aim at fast decompression,
> or, at least, want to keep the asymmetric nature of their programs.

And I think they're just too lazy to learn something new, and keep themselves busy with already-known things, which is always possible because perfection is unreachable.
I mean that LZ decoding wins in speed only up to a certain level (which is around the decoding speed of rar), and beyond that the statistical approach allows for better results, of course at the cost of extra programming.

> For example, I can add a stronger and more advanced LZ-output
> encoding, but at the cost of 4x-5x slower decompression...

Would you bet that nobody is able to add a stronger model to your compressor, while keeping the same (or better) speed?
Also, as I said, LZ has some drawbacks which are very troublesome to work around, like redundancy (the possibility of decoding different match sequences into the same data) and hidden data correlations, so it's inefficient to keep improving LZ's compression after some point. Btw, that applies to BWT as well; at least it doesn't have the alternative-coding redundancy, but imho it has an even smaller area of effective application than LZ.
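Just to illustrate what that parsing redundancy means in practice, here's a minimal sketch; the Token format and decode() helper are invented purely for illustration and aren't taken from any actual codec. Two different token sequences decode to the same string, so the coder wastes code space distinguishing parsings the decoder can't tell apart.

#include <stdio.h>
#include <string.h>

/* Hypothetical LZ77-style token: len==0 means a literal byte,
   otherwise copy `len` bytes from `dist` positions back. */
typedef struct { int len, dist; char lit; } Token;

static int decode(const Token *t, int n, char *out)
{
    int pos = 0;
    for (int i = 0; i < n; i++) {
        if (t[i].len == 0)
            out[pos++] = t[i].lit;
        else
            for (int j = 0; j < t[i].len; j++, pos++)
                out[pos] = out[pos - t[i].dist];   /* byte-wise overlapping copy */
    }
    out[pos] = 0;
    return pos;
}

int main(void)
{
    char a[32], b[32];
    /* Two different parsings of "ababab": */
    Token t1[] = { {0,0,'a'}, {0,0,'b'}, {4,2,0} };           /* a, b, one long match  */
    Token t2[] = { {0,0,'a'}, {0,0,'b'}, {2,2,0}, {2,2,0} };  /* a, b, two short matches */
    decode(t1, 3, a);
    decode(t2, 4, b);
    printf("%s\n%s\n", a, b);   /* both print "ababab" */
    return strcmp(a, b);        /* 0: same data, different code sequences */
}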

> Not really worth it in my opinion. At the same time
> I consider the current coding scheme an advanced one...

It is only advanced compared to LZH, but not so much considering even already-known techniques. And anyway, I'd advise concentrating on speed optimizations if you want to keep it LZ. There are usually a lot of algorithmic optimizations applicable in arithmetic coding, like removing multiplications by using logarithmic counters (though that might be patented).
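For reference, a minimal sketch of a conventional binary range-coder encode step with shift-based counters (LZMA-style); carry propagation and flushing are omitted, so this shows the control flow only and is not taken from the compressor being discussed. The interval-split multiplication marked below is the operation that logarithmic-counter / Q-coder-style schemes try to replace with additions or table lookups.

#include <stdio.h>
#include <stdint.h>

static uint32_t low = 0, range = 0xFFFFFFFFu;
static uint16_t p0 = 2048;                 /* 12-bit probability of bit == 0 */

/* Simplified sketch: carry propagation into already-written bytes is ignored. */
static void encode_bit(int bit)
{
    uint32_t bound = (range >> 12) * p0;   /* <-- the multiplication to remove */
    if (bit == 0) {
        range = bound;
        p0 += (4096 - p0) >> 5;            /* shift-based counter update */
    } else {
        low  += bound;
        range -= bound;
        p0 -= p0 >> 5;
    }
    while (range < (1u << 24)) {           /* renormalize, emit the top byte */
        putchar((int)(low >> 24));
        low <<= 8;
        range <<= 8;
    }
}

int main(void)
{
    for (int i = 0; i < 64; i++)           /* toy input: alternating bits */
        encode_bit(i & 1);
    return 0;
}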

