Netizens who pre-virus were too busy to grok Symbolic AI now have an excess
of time for delving into the previously undelvable.
http://ai.neocities.org/AiSteps.html
--
Artificial General Intelligence List: AGI
Permalink:
I had been holding back on flaming the compression threads because there
is some level of validity to the approach.
I mostly come from an engineering mindset, as in taking a robot or
avatar and implementing all the capabilities of the human baseline.
Another approach is the theoretical model and
On Saturday, March 21, 2020, at 6:28 PM, James Bowery wrote:
> Science, as usual, is left sucking hind tit.
Oh! I'm seeing something!
https://ibb.co/KN2N540
For the first 1,000,000 bytes of enwik8:
Green gets it to 253,800 bytes
Mine gets it to 251,235 bytes
For the first 10,000,000 bytes of enwik8:
Green gets it to 2,331,508 bytes
Mine gets it to 2,319,337 bytes
Note my parameters can still be tweaked to get that a bit lower.
7zip can't even get
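For comparison against figures like the ones above, a standard-library compressor gives a quick baseline. This is a minimal sketch, assuming enwik8 (the Hutter Prize benchmark file) is present locally; zlib stands in for the custom compressors being compared and will not match their ratios.

```python
import zlib

def compressed_size(path: str, n_bytes: int, level: int = 9) -> int:
    """Compressed size in bytes of the first n_bytes of a file under zlib."""
    with open(path, "rb") as f:
        data = f.read(n_bytes)
    return len(zlib.compress(data, level))

# e.g. compressed_size("enwik8", 1_000_000) for the 1 MB figure quoted above
```

The same measurement on the same prefix is what makes the byte counts in the thread directly comparable.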
Look up AIXI for the relevance of compression to AGI. You are rather
obligated to do this since "AGI" -- the term -- originated with AIXI. If
anyone is going to get "banned" from an "AGI" group, it should be those who
are attempting to appropriate the term "AGI" for vague definitions more
Data compression won't solve AGI. It's just a useful tool for evaluating
language models. It proved the usefulness of neural models over other
approaches and of modeling semantics before grammar. It proved the
unintuitive usefulness of massive computing power.
Compression is not so useful for
On Sat, Mar 21, 2020 at 12:00 PM Matt Mahoney
wrote:
> Data compression won't solve AGI. It's just a useful tool for evaluating
> language models...
>
I'd put it more like "Data compression _alone_ won't solve AGI. It's just
the gold standard for model selection. Model selection _alone_ won't
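The "compression as model selection" point can be made concrete: under an ideal arithmetic coder, a model that assigns probability p to the observed symbol pays -log2(p) bits, so the better predictor yields the shorter code. The two toy models below (uniform bytes vs. a fitted unigram model) are illustrative assumptions, not anyone's actual compressor.

```python
import math
from collections import Counter

def code_length_bits(text: str, prob) -> float:
    """Total ideal code length of text under a symbol-probability function."""
    return sum(-math.log2(prob(c)) for c in text)

text = "abracadabra"
counts = Counter(text)
uniform = lambda c: 1 / 256                  # model that knows nothing
empirical = lambda c: counts[c] / len(text)  # fitted unigram model

# The model with the shorter code length is the one selection favors.
assert code_length_bits(text, empirical) < code_length_bits(text, uniform)
```

Selecting the model with the shorter total code length is exactly the criterion a lossless compression benchmark enforces.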
A lossless compression contest on video would result in contestants
spending 99% of their efforts on compressing data that the eye and
brain throw away, assuming the payoff is the same for both types. Noise is
not just white noise, but all the details in the scene that don't increase
your
The best text compressors are neural networks that (my own does half of these!)
group words (dog/cat/animal), count frequencies, learn segmentation/byte-pair
encoding, learn online by talking to themselves and storing what they say to
themselves, and have weighting functions to mix activations, activation
To sum up: Prediction is your Future. Get it :)? That's how matter works, and you
will need Prediction to make AGI.
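The "weighting functions to mix activations" idea above can be sketched as a logistic mixer in the style of context-mixing compressors (the PAQ family): each model's probability for the next bit is stretched into the logistic domain, weighted, summed, and squashed back. The weights and inputs here are toy values, not a real implementation.

```python
import math

def mix(probs, weights):
    """Combine per-model probabilities that the next bit is 1, using the
    stretch/weight/squash trick standard in context-mixing compressors."""
    stretch = lambda p: math.log(p / (1 - p))
    squash = lambda x: 1 / (1 + math.exp(-x))
    return squash(sum(w * stretch(p) for p, w in zip(probs, weights)))

# Two models disagree; the mixer lands between them, pulled toward the
# more heavily weighted one.
p = mix([0.9, 0.6], [0.7, 0.3])
```

In a real compressor the weights would themselves be learned online from each model's recent prediction error.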
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T2a0cd9d392f9ff94-M7598a7c44c707609a32e5571
On Sat, Mar 21, 2020 at 2:16 PM Matt Mahoney
wrote:
> A lossless compression contest on video would result in contestants
> spending 99% of their efforts on compressing data that the eye and
> brain throw away, assuming the payoff is the same for both types.
>
"payoff" is relative to the