Cross-posting this:

I personally use lossless compression as a true evaluation for finding AGI, 
which is the goal behind the Hutter Prize. Of course it doesn't cover 
everything; AI minds still need bodies/nanobots, rewards, to surf the web, etc. 
Yet so far the best compressors talk to themselves, do Online Learning, 
rearrange articles more neatly, group similar words, and so on. So it seems 
like a million-dollar idea. It makes you wonder: is there any other evaluation 
we need, if my goal is to massively lower the probability of death/pain on 
Earth? (That's where Earth is going with evolution and the Hutter Prize: it 
repairs missing data using prediction to make Earth a neater, more fractal 
pattern and save on wasted energy, adds data using Online Learning, then uses 
its larger context tree to add new data even faster, growing exponentially in 
scale.) If I invent true AGI, it uses more intelligent means (faster than 
brute force) to make/predict the future and form the settled state ('utopia') 
that physics will settle us into. So does it need a body (do we need another 
evaluation) if it knows enough about the real world that it can extract any 
unseen text/image it would have seen through a body? It has less need to 
gather real data. So in a sense, to invent AGI is just to predict well and 
smash the Hutter Prize.

As for the rest of the evaluation for achieving immortality: we still need 
bodies to carry out tasks, intelligence doesn't 'do it all', but we already 
have humanoid bodies with no real brain (and humans, etc.), so yes, all we 
need to focus on is AGI. And big data. Yes, the other part of the intelligence 
evaluation is data/brain size: scale. Scale and prediction are the only things 
we need to work on. Note that your prediction can be good but slow, or need a 
lot of RAM; that matters less, though we can feel the effect, i.e. it tells us 
the cure for cancer but we had to wait a year = still a big thank you. As with 
scale, the more workers on Earth, the faster we advance as well.
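
To make the 'predict well = compress well' point concrete, here is a minimal 
sketch of the idea (my own toy illustration, not any actual Hutter Prize 
entrant's code; the order-1 context and the 256-symbol smoothing are arbitrary 
assumptions): an online-learning character model assigns a probability p to 
each next character, and an ideal arithmetic coder would spend -log2(p) bits 
on it, so better prediction directly means a smaller file.

# Minimal sketch: compression as prediction. An online (adaptively
# updated) order-1 character model assigns a probability to each next
# character; an ideal arithmetic coder would spend -log2(p) bits on it.
# Better prediction = fewer bits = a better Hutter Prize score.
# All names here are illustrative, not any particular entrant's code.
import math
from collections import defaultdict

def ideal_code_length_bits(text: str) -> float:
    counts = defaultdict(lambda: defaultdict(int))  # context -> next-char counts
    totals = defaultdict(int)                       # context -> total count
    bits = 0.0
    ctx = ""                                        # previous character as context
    for ch in text:
        # Laplace-smoothed probability of ch given the 1-char context.
        p = (counts[ctx][ch] + 1) / (totals[ctx] + 256)
        bits += -math.log2(p)
        # Online Learning: update the model with the character just seen.
        counts[ctx][ch] += 1
        totals[ctx] += 1
        ctx = ch
    return bits

sample = "the cat sat on the mat. the cat sat on the mat."
print(f"{ideal_code_length_bits(sample):.1f} bits vs {8 * len(sample)} raw bits")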

The first thing you notice is that it is easy to get the enwik8 100MB down to 
25MB, but exponentially harder the lower you go. So is intelligence solved? 
No. But it might look like people are already getting the compression they 
need and won't benefit much more, so it seems! If a company compresses DNA 
data down from 100MB to 15MB, why would they/we work on the Hutter Prize so 
hard just to reach 13MB? Here's why: what if they had not 100MB, but 100GB? 
The amount cut off is now not 2MB, but 2GB! Also, the more data fed in, the 
more patterns there are and the more it can compress; so not 2GB but 4GB, and 
100GB becomes 11GB. Mind you, it is funny that our AGI prize is being used to 
compress data as a tool, though AI can think up any invention itself, so it 
doesn't seem odd exactly. Now, seeing that we get a better compression ratio 
the more data we feed in, this means that if we make the AI predictor more 
intelligent at finding/creating patterns, it will result in a huge 
improvement, not 15MB>14MB>13.5MB>13.45MB>13.44MB. However, I'm unsure that 
makes sense; maybe it is indeed harder the lower you go, if we look at, say, a 
100TB input where the limit is, say, 2TB (and currently we'd only reach, say, 
4TB).
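
A quick toy demonstration of that scaling claim (zlib is a weak stand-in for 
a real Hutter Prize model, and the repeated sentence and printed numbers are 
illustrative only, not the 15MB/13MB figures above): on redundant data, the 
compression ratio improves as the input grows, because later parts reuse 
patterns learned from earlier parts.

# Minimal sketch of the scaling claim: feeding a compressor more
# redundant input improves the ratio, since later data reuses patterns
# from earlier data.
import zlib

base = b"the quick brown fox jumps over the lazy dog. "
for repeats in (10, 100, 1000, 10000):
    data = base * repeats
    out = zlib.compress(data, level=9)
    print(f"{len(data):>7} bytes -> {len(out):>5} bytes "
          f"(ratio {len(data) / len(out):.1f}x)")
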
Perhaps it is harder the lower you go because there are fewer hints and higher 
uncertainty in some problems that require lots of knowledge. So instead, it is 
more of an honor to lower the compressed size by just a little bit once it is 
already low. Of course, the probability gets worse for problems that are hard 
to predict well on; consider the question below and try predicting the likely 
answer:

"Those witches who were spotted on the house left in a hurry to see the monk in 
the cave near the canyon and there was the pot of gold they left and when they 
returned back they knew where to go if they wanted it back. They knew the 
keeper now owned it and if they waited too long then he would forever own it 
for now on."
> Who owns what?
> Possible Answers: Witches own monk/witches own canyon/monk owns gold/monk 
> owns house/monk owns cave/cave owns pot/there was pot/he owns it

You can't help that hard questions have a lower probability of being 
predicted. But you can still improve it. These questions are hard because they 
haven't been seen much; there is no match. Yes, simple entailment and 
translation are used to estimate the frequency of the answer! How do I know 
this ;-; :D? Think about it: if real-world data humans wrote says the turtle 
sat in water and yelped, you can be sure it will yelp; you can be sure a 
rolling ball will fall off a table, that a molecule x will twist like x, etc. 
And if the words are unseen but similar, e.g. 'the cat ate the ?' when you 
have seen lots of 'this dog digested our bread', then you know the probability 
of what follows (and what is what; cat=dog, using contexts seen prior). This 
works for rearranged words and letters, typos, missing or added words... So 
simply doing prediction will discover the true answers with the highest 
probabilities. Of course, it may not be exactly as simple as 'a match is all 
that's needed and then you get the entailing answer (or translation)', e.g. 
'the cure to cancer is '. At least it doesn't seem that simple, but it is. It 
requires you to take the word cancer, look at what entails IT (pain, rashes), 
what those are similar to, what entails THOSE, and repeat; commonsense 
reasoning like BERT-style translation here, yes, giving you more virtual 
generated data. And so you are finding, yes, the entailment to 'the cure to 
cancer is ', except the prediction is not directly tied to those very words, 
if you know what I mean.
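
Here is a minimal sketch of that matching-by-similarity idea (the tiny corpus, 
the equal borrowing weight, and the cat=dog pairing are all my illustrative 
assumptions, not a real system): the predictor answers an unseen context by 
borrowing next-word counts from words that appeared in similar contexts.

# Minimal sketch: predict the next word for an unseen context
# ('the cat ate the ?') by substituting words that appeared in similar
# contexts ('dog', seen in 'the dog ate the bread').
from collections import Counter, defaultdict

corpus = ("the dog ate the bread . the dog chased a ball . "
          "the cat chased a ball . the cat sat on the mat .").split()

# Distributional profile: which words appear right after each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def similarity(a: str, b: str) -> float:
    # Overlap of next-word profiles; 'cat' and 'dog' both precede 'chased'.
    shared = set(follows[a]) & set(follows[b])
    return sum(min(follows[a][w], follows[b][w]) for w in shared)

def predict_after(context: str) -> Counter:
    # Direct counts, plus counts borrowed from distributionally similar words.
    scores = Counter(follows[context])
    for other in follows:
        if other != context and similarity(context, other) > 0:
            for w, c in follows[other].items():
                scores[w] += c  # borrowed evidence, weight 1 for simplicity
    return scores

print(predict_after("cat").most_common(3))  # 'ate' borrowed from 'dog'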

Now if our predictor were an image predictor or movie generator, and also used 
text context (a multi-sensory context predictor), we would have AGI 'talking' 
to itself, learning new data online. It could share its visual thoughts too, 
talking to us using vision. We also use Byte Pair Encoded segmentation, like 
'sounds' (words...), to activate certain images; it is more efficient. Images 
aren't as abstract in that way, but can be much more detailed at the low 
level! We will need both later to get better prediction. And yes, images are 
words; words describe image objects, and they are inversely the same thing.
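
Since Byte Pair Encoding came up, here is a minimal sketch of that 
segmentation step (the corpus and the merge count are illustrative): 
repeatedly merge the most frequent adjacent pair of symbols, so frequent 
chunks like 'th' and 'the' become single tokens that could then activate 
images or other modalities.

# Minimal sketch of Byte Pair Encoding segmentation: merge the most
# frequent adjacent pair of symbols, repeat, so common chunks become
# single tokens.
from collections import Counter

def bpe_merges(text: str, num_merges: int = 10) -> list[str]:
    tokens = list(text)                      # start from single characters
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)         # merge the winning pair
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

print(bpe_merges("the theme of the thesis", 6))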