Data compression won't solve AGI. It's just a useful tool for evaluating language models. It demonstrated the advantage of neural models over other approaches, of modeling semantics before grammar, and, unintuitively, of massive computing power.
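To make the evaluation idea concrete: a model that assigns probabilities to each next symbol can be scored by its ideal compressed size, -sum(log2 p(symbol)), since arithmetic coding achieves essentially that length. A minimal sketch, assuming a toy unigram character model and a made-up test string (neither is from any actual benchmark):

```python
import math
from collections import Counter

def ideal_compressed_bits(text, prob):
    # Ideal code length under the model: -sum of log2 p(c) over symbols.
    # An arithmetic coder comes within about 2 bits of this total.
    return -sum(math.log2(prob[c]) for c in text)

text = "the quick brown fox jumps over the lazy dog"
n = len(text)
counts = Counter(text)
# Toy unigram model, fit on the text itself purely for illustration.
unigram = {c: counts[c] / n for c in counts}

bits = ideal_compressed_bits(text, unigram)
print(f"unigram model: {bits / n:.2f} bits/char vs 8.00 for raw bytes")
```

A better model assigns higher probability to the text, so its compressed size is smaller; ranking models by this number is exactly what a compression benchmark automates.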
Compression is not so useful for evaluating image, video, or audio models. Lossless algorithms are overwhelmed by incompressible noise, and lossy models have to be evaluated subjectively by humans for quality; you can't automate lossy evaluation without a model at least as good as the one you are testing. Video has a theoretical information content of a few bits per second: that is the rate at which a human, watching two videos consecutively, can accurately decide whether they are identical. Compressed video runs to a few megabits per second, so we are about six orders of magnitude away from solving it.

On Sat, Mar 21, 2020, 11:24 AM Alan Grimes via AGI <[email protected]> wrote:

> I had been holding back on flaming the compression threads because there
> is some level of validity to the approach.
>
> I mostly come from an engineering mindset, as in taking a robot or
> avatar and implementing all the capabilities of the human baseline.
>
> Another approach is the theoretical model and then applying it to the
> specific cases such as robotics, etc.
>
> Now this second approach is a bit beyond what I am comfortable thinking
> about, but I can't deny it's a valid approach.
>
> Ok, compression. What links compression to AGI?
>
> Compression can be considered a valid approach to AGI to the extent that
> the compression achieved reflects the creation of a functional
> conceptualization, that is, a conceptualization that can be removed from
> the mechanics of the compression and the decompression and be used in
> cognitive processes in general. In these threads, I'm just seeing a
> sprint for compression ratio without focusing on extracting or using
> concepts that are actually useful to AGI research.
>
> Second, there is the focus on text compression. Text is an example of
> low-medium information density structured data. Audio and video
> recordings also have these properties and are vastly more interesting as
> research targets, as there are none of the stupid hacks that could be
> implemented with something as limited as text, such as hard-coded
> knowledge. So that's my second gripe: there is no focus on generalizable
> principles of data preparation and pipelining for data in general that
> would plausibly lead to AGI. =|
>
> --
> Clowns feed off of funny money;
> Funny money comes from the FED
> so NO FED -> NO CLOWNS!!!
>
> Powers are not rights.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T2a0cd9d392f9ff94-Md5dd1c0a94cdeaf890ff12d3
Delivery options: https://agi.topicbox.com/groups/agi/subscription
