> According to the available paper (quite interesting), it seems that VQF is
> the 2nd release of TwinVQ.

Possible.  I lost direct touch with its development in '96.

> note: vector quantization, in pictures or music, is good for low bitrates, as
> it produces some good results. However, using only some pre-built codebooks
> (as with TwinVQ), it is impossible to reconstruct a high-accuracy
> reproduction of the original data.

...and the informal proof is even easy: large codebooks, which are necessary
for precision, become too difficult to deal with practically.  This is why
Vorbis uses 'whatever' (where the rest of Vorbis is similar in many ways to
twinvq).
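That trade-off is easy to demonstrate with a toy sketch (plain Python, purely
illustrative; the tiny hand-built codebook and exhaustive nearest-neighbor
search below are my assumptions, not TwinVQ's actual trained codebooks):

```python
import math
import random

random.seed(0)

# Toy fixed codebook: 8 two-dimensional codewords on the unit circle.
# A real codec's codebook is trained and far larger, but the principle
# holds: a finite, fixed codebook bounds reconstruction accuracy, and
# exhaustive search cost grows linearly with codebook size.
CODEBOOK = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
            for k in range(8)]

def quantize(vec):
    """Return the codeword nearest to vec (exhaustive search)."""
    return min(CODEBOOK,
               key=lambda c: (c[0] - vec[0]) ** 2 + (c[1] - vec[1]) ** 2)

def mean_sq_error(n=10000):
    """Average squared reconstruction error over random unit vectors."""
    total = 0.0
    for _ in range(n):
        theta = random.uniform(0, 2 * math.pi)
        v = (math.cos(theta), math.sin(theta))
        q = quantize(v)
        total += (q[0] - v[0]) ** 2 + (q[1] - v[1]) ** 2
    return total / n

print(mean_sq_error())  # small, but irreducibly nonzero with a fixed codebook
```

Shrinking that residual error means growing the codebook, and both the search
and the storage grow with it, which is the practical wall mentioned above.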

> I'm not so sure that it has been poorly marketed.

For the first several years after it appeared, it was only being used for
'remote karaoke' (at the point when mp3 was beginning to conquer the net :-)
My statement about poor marketing actually came indirectly by way of NTT people
who feel that NTT has done a poor job of promoting the technology (and has that
problem with new technology it develops in general).

> The goal of VQF was
> perhaps only to show validity of TwinVQ, and as it's now incorporated in
> mpeg-4 natural audio, it can be considered as a good (future) commercial
> result.

Developing a technology, or a release of that technology, solely as a proof of
concept is going to doom the release as a result of zero perceived
follow-through. "Why should I use this? It's only a proof of concept."  Now the public
opinion of TwinVQ is that it's an unsupported, ineffective technology and that
will continue to follow it despite inclusion in MPEG 4.

All the MPEG-4 news I ever see has to do with AAC (here we have a practically
observable benchmark for 'poorly marketed' :-) Most of the spin around TwinVQ
seems to have an air of "please take us seriously." It's a pity really; twinvq
is a pretty clever encoding.

Monty


--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )
