Well, so here's the result: Mix v0 and the detailed benchmark is there.
It's basically just another mixer demo, starting with
mix(mix(mix(o0,o0'), mix(o1,o1')), mix(o2,o2')), and lamely hashed higher orders (24M per order) are mixed over that. These are not exactly the traditional CM orders though, because the optimizer masked out half of the bits.
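As a rough sketch of that mixing tree (illustrative only: the learning rate, weight clipping, and floating-point mixers here are made up for the sketch, not mix_v0's actual counters):

```python
import math

def stretch(p):
    # logit: probability -> logistic domain
    return math.log(p / (1.0 - p))

def squash(x):
    # inverse of stretch
    return 1.0 / (1.0 + math.exp(-x))

class Mix2:
    """Adaptive 2-input mixer in the logistic domain.
    The 0.02 learning rate is an arbitrary choice for this sketch."""
    def __init__(self, rate=0.02):
        self.w = 0.5
        self.rate = rate

    def mix(self, p1, p2):
        self.x1, self.x2 = stretch(p1), stretch(p2)
        self.p = squash(self.w * self.x1 + (1.0 - self.w) * self.x2)
        return self.p

    def update(self, bit):
        # gradient step toward the observed bit, weight kept in [0,1]
        err = bit - self.p
        self.w = min(max(self.w + self.rate * err * (self.x1 - self.x2), 0.0), 1.0)

# one mixer per node of the tree above
mixers = [Mix2() for _ in range(5)]

def predict(p):
    """p = (o0, o0', o1, o1', o2, o2') submodel probabilities of bit==1."""
    a = mixers[0].mix(p[0], p[1])   # mix(o0, o0')
    b = mixers[1].mix(p[2], p[3])   # mix(o1, o1')
    c = mixers[2].mix(p[4], p[5])   # mix(o2, o2')
    d = mixers[3].mix(a, b)         # mix(mix(o0,o0'), mix(o1,o1'))
    return mixers[4].mix(d, c)      # top-level mix with mix(o2,o2')
```

Each mixer's update() would be called with the actual bit after coding, so the tree adapts per-node.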
> But even before doing it I know that both its speed and quality
> would be nothing special.
...And now we've got a coder comparable to rar and ppmd in compression, but 10x slower (that can be somewhat fixed though): as expected.
In conclusion, here's what has to be done to reach the compression level of CCM/LPAQ/PAQ9.
2008-07-05 11:54:26 osmanturan >
> Match model is a must, it seems
I don't think so, at least not a PAQish match model. toffer has a good example of with vs. without a match model. Look at this:
Surely, a better model can be made. Maybe we can ask toffer for its details. He really likes to share his experience with other people. Not like the others :-)
2008-07-05 12:08:26 toffer >
By accident I found that you've also posted things which didn't appear at encode.ru (maybe private messaging, etc.). Would you mind posting this stuff in the forum, and maybe creating a thread? It would be better if things didn't spread around.
2008-07-05 12:44:32 inikep >
> at 10x slower speed (that can be somewhat fixed though)
How much improvement do you think is possible?
> only o2mix would be able to compete with CCM
So we still don't know where CCM's speed comes from.
> But further development of this coder is not really reasonable.
I think that now is a good time to start discussion about a CCM/LPAQ.
Thanks for accepting my "challenge".
2008-07-05 21:29:31 Shelwien >
> > at 10x slower speed (that can be somewhat fixed though)
> How much improvement do you think is possible?
1. A match model would make it possible to avoid all this
mixing for 50-70% of the data. (2x)
2. Dual o0,o1,o2 may not be that useful with higher orders
3. A single o2 instance takes 16M*(4+2)=96M. It should be
hashed instead (1.2-1.5x)
4. All counter arrays can be reduced by half by removing
the access counters (along with the dynamic update speed,
which has an effect only at the start of coding) (1.2-1.5x)
5. File i/o and streamed E8 (1.1x)
6. Formal speed optimizations (eg. using the template
scripts which I made for fpaq0pv4B). (1.1x)
7. Division in the mixer can be replaced with a table lookup.
8. "Precise" rc with 64bit multiplication is slow too.
9. A compressor can store counter predictions for the
block instead of mixing them right away, so an asymmetric
implementation is possible.
> > only o2mix would be able to compete with CCM
> So we still don't know where CCM's speed comes from.
I'm still not really impressed, in fact.
PPMd has the same speed, and no filters.
Guess I should benchmark durilca'light.
> > But further development of this coder
> > is not really reasonable.
> I think that now is a good time to start discussion
> about a CCM/LPAQ.
Maybe. I'm especially interested in the paq9 results with
my finnish dictionary -
http://ctxmodel.net/files/MIX/mix_v0.htm ... and overall
results distribution for that file.
> Thanks for accepting my "challenge".
The actual programming took half an hour at most;
it's just the (automatic) parameter optimization that
needed a day.
Btw, I made a new optimization target - concatenated
parts of each SFC file - and started o5 and o6 tuning
again. It might be interesting, because the previous
optimization was incremental (eg. o3mix used optimized
parameters from o2mix and only the o3 parameters were
actually tuned in it... and so on), and this time _all_
the parameters are optimized by the same metric.
> By accident i found that you've also posted things, which
> didn't appear at encode.ru, maybe private messaging,
> etc... Would you mind to post this stuff in the forum and
> maybe create a thread? It would be better if things don't
> spread around.
Well, I want the experience of maintaining my own site,
and also I'm trying to make a blog engine to my liking,
so it's reasonable to take advantage of the content
I'm writing.
But of course you can cross-post anything to the forum
if you'd like to discuss it there for some reason;
anyway, there're only a few people who actually discuss
things.
Btw, mainly I don't like how troublesome it is to make
my posts in the forums look the way I intend, and also
Ilia keeps his unstable hosting, where the DB server
is down 10% of the time.
> > Match model is a must, it seems
> I don't think so for PAQish match model. toffer has good
> example of about with match model vs without match model.
Well, I'm talking about a bytewise match model which
would encode a flag for rank-0 byte values, and only
proceed with mixing and updating counters when it's
not a rank-0 symbol. It's the first step of unary
coding, actually, and taking into account that rank-0
hits for more than 50% of bytes...
So it's not a model specific to very long matches,
but it would probably work well enough for long matches.
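A minimal sketch of that flag-plus-escape idea (hypothetical, not Shelwien's actual design; the dict-based match table and fixed match order are simplifications):

```python
class MatchModel:
    """Bytewise match model: a table maps the last `min_order`
    bytes to the position that followed them last time, which
    gives the rank-0 prediction for the next byte."""
    def __init__(self, min_order=4):
        self.table = {}              # context bytes -> position in history
        self.history = bytearray()
        self.min_order = min_order

    def predicted(self):
        """Return the rank-0 byte, or None if there is no match."""
        if len(self.history) < self.min_order:
            return None
        ctx = bytes(self.history[-self.min_order:])
        pos = self.table.get(ctx)
        return None if pos is None else self.history[pos]

    def update(self, byte):
        if len(self.history) >= self.min_order:
            ctx = bytes(self.history[-self.min_order:])
            self.table[ctx] = len(self.history)
        self.history.append(byte)

def code_byte(mm, byte, slow_model_coder):
    """Code a hit/miss flag first; only fall through to the full
    mixing path (slow_model_coder) on a rank-0 miss."""
    pred = mm.predicted()
    if pred is not None and byte == pred:
        # in a real coder this flag is coded with one adaptive bit
        mm.update(byte)
        return "flag"                # mixing skipped entirely
    slow_model_coder(byte)           # full CM path (and the miss flag)
    mm.update(byte)
    return "full"
```

On repetitive data the "flag" path dominates after warm-up, which is where the 2x speedup estimate above would come from.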
2008-07-07 11:26:05 inikep >
> A match model would allow to avoid all this mixing
> for 50-70% of data. (2x)
Nice, so altogether it's possible to get 6-10x.
> I'm still not really impressed, in fact.
> PPMd has the same speed, and no filters.
I don't believe that filters matter for CCM's speed. MFC contains over 400 files in 30+ formats.
> Guess I should benchmark durilca'light.
According to MaximumCompression, durilca'light has almost the same decompression speed (173 s) as PPMd, but much slower compression (366 s).
2008-07-07 11:36:05 inikep >
> I'm especially interested in the paq9 results with
> my finnish dictionary
It seems that this file suits PAQ. Are the results with lpaq8 similar?