http://agi-conf.org/2010/wp-content/uploads/2009/06/paper_6.pdf
I refuse to call papers papers anymore, they are just text.


Let me override this discussion:
There are only 3 ways to test for AGI:

1) The algorithm/body works like us. This won't tell us much about whether it 
works well, though.

2) It scores/compresses well on a large, diverse set of tasks. This does tell us.

3) It generates useful outputs on a large, diverse set of tasks, e.g. it 
generates good-looking text/image/tool/job completions. This tells us too, 
sometimes just as much.
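Way #2) can be made concrete with a standard information-theory fact: a predictor that assigns probability p to the next symbol needs -log2(p) bits to encode it losslessly (via arithmetic coding), so total code length equals how well the predictor has found the patterns. A minimal Python sketch, where `uniform_model` and `pattern_model` are hypothetical toy predictors invented purely for illustration:

```python
import math

def code_length_bits(text, model):
    """Total bits a lossless coder driven by `model` would need for `text`."""
    bits = 0.0
    context = ""
    for ch in text:
        p = model(context, ch)       # model's probability for this next char
        bits += -math.log2(p)        # ideal code length for that prediction
        context = ch
    return bits

def uniform_model(context, ch):
    # Knows nothing: every byte equally likely, so 8 bits per character.
    return 1 / 256

def pattern_model(context, ch):
    # Toy predictor that has found the a/b alternation in the data below.
    expected = "b" if context == "a" else "a"
    return 0.98 if ch == expected else 0.02 / 255

data = "abababababababab"
print(code_length_bits(data, uniform_model))   # 128.0 bits: no patterns used
print(code_length_bits(data, pattern_model))   # under 1 bit: patterns found
```

The point of the sketch is that the compression score is objective: the better the model's predictions, the shorter the code, with no human needed to judge each output.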


Wang seems to conflate 2 things as the same thing, and tries to combine them 
too:

"Now we see that the empirical approach and theoretical approach of evaluation 
actually depend on each
other for the meta-evaluation."

No. Generating solutions and carrying out the actual procedure in real life 
(collect data / find a cure / give the needle / etc.) are the same thing; we 
recognize both as good. Theories and working cures for humans made by AI are 
BOTH as useful. A cure for cancer from an AI may seem like untested crap until 
proven by a working invention, but that isn't the case: ideas can be really 
accurate if you try. Ideas and inventions are the same thing. This evaluation 
is the #3) I listed above! It's subjective: it sounds great as a completion, 
but YOU - not Lossless Compression - have to check it, very finely, to make 
sure it is the best solution, and you must check thousands of tasks (building 
taller towers, solving cancer, inventing better storage devices). This is very 
wobbly, especially if the AI has no body to actually implement a full working 
idea to solve cancer and can only make semi-discoveries. That's why we need to 
test for SOME accuracy, not that it can solve cancer fully. Way #3) is very 
wobbly; it's good, but it doesn't actually tell you the AI is finding patterns 
in data.

*We recognize and predict the same way. Humans wrote the text, so it is 
extremely **patterny**, as humans are: we predict text, and the generated text 
we wrote is what we recognize/predict. It is the inner patterns of a mind 
written out, or drawn if we use images made by us. The world does this too; 
physics makes patterns. Our data is actually more **advanced** in evolution, 
though.*

"NARS is based on the belief that “intelligence” is
the capability of adaptation with insufficient knowledge
and resources. This belief itself is justified empirically
— the human mind does have such capability"

This is just pattern finding. You take only some snapshots of the world, using 
only some diverse sensors and so on, and get something far more approximate 
than a full atom-by-atom brute-force simulation.

Intelligence is using patterns/experiences and making your homeworld into a 
pattern fractal to increase prediction accuracy and become a pattern 
(immortality). Things that die, die; things that are smart clone themselves, 
last longer, and outlive others. The future world will be a cooled-down, dark, 
less-dense, airless, gravityless metal place that only acts when there is 
danger from external inputs. Life is whatever acts like a statue/metal; rocks 
are Life, not only aliens. Women and donkeys and rocks are not sexy; we are 
born with the reward to predict/see/be near them because we need clones to help 
us with work and to survive deaths of "employees" in the homeworld. Parent DNAs 
combine to try likely-good traits, e.g. man+wings, instead of just random 
mutations.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T466c51e997fec20a-Md79f88089c97c918b639a8fe
Delivery options: https://agi.topicbox.com/groups/agi/subscription
