I remember this guy; is he on this list? Haha, I'm reading his research notes 
and I like them! In 1969 he wrote exactly what I said when I first started AI 
research:
http://www.adaptroninc.com/BookPage/1969-and-1970

"98.  Sight and attention: I can pay attention to a spot on the wall and my 
attention is on a specific very small area of the retina. I also can pay 
attention to something out of the corner (side) of my eye. I can stare at one 
thing but not see it but see something out of the corner of my eye but not in 
so much detail as if I looked straight at it. So my attention can switch not 
only to sight sound and feel but to a specific area of sight or even just the 
general picture of what I’m looking at. Now when you imagine something you 
combine the small specific sight areas into a general picture. Like a man with 
green togs a straw hat walking on a beach, each specific thing is seen in 
memory and then combined to form a general picture."

He sounds very precise so far, having first written about how the body has 
wires, I/O, electricity (the "blood"), and so on. That's something I do too.


I read Ben's two 2014 books on CogPrime in two days. I skimmed the second, 
since it mostly just names and catalogues things (I assume it's only useful if 
you want to look at his code). They were interesting. Has there been any recent 
change to the design, or is it pretty similar? Ben, do you have an evaluation 
of your AI on the Hutter Prize test? If not, please try it! BTW, how do Truth 
Values improve prediction on text/images? And does CogPrime use backprop?
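For context on the Hutter Prize question above: the prize scores lossless compression of a Wikipedia dump, and compressed size is essentially a model's cross-entropy in bits, so any predictive model can be scored that way. Here is a minimal sketch of that scoring idea using a trivial unigram character model of my own (this is just an illustration of the metric, not the official benchmark code):

```python
import math
from collections import Counter

def bits_per_char(text: str) -> float:
    """Bits per character under a unigram (character-frequency) model.

    Compressed size ~= cross-entropy in bits, so a better predictor
    yields a smaller file. A unigram model is the simplest baseline;
    a real Hutter Prize entry models long-range context instead.
    """
    counts = Counter(text)
    n = len(text)
    # Entropy of the empirical character distribution, in bits.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "the cat sat on the mat"
print(round(bits_per_char(sample), 3))
```

A stronger predictor (e.g. one conditioned on preceding characters) would score fewer bits per character on the same text, which is exactly what a lower compressed size on enwik means.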

My design for AGI is actually all synergy; it merges many parts very tightly. I 
bet Ben didn't expect me to be doing well in the synergy department, and I also 
didn't expect to see him, in his book, talk about what he had called the wrong 
way to talk about AGI (it can be a slippery topic in what one means), i.e. 
synergy rather than mechanisms, yet here he says it like I do lol!:

"Why might a solid, objective empirical test for intermediate progress toward 
AGI be an infeasible notion? One possible reason, we suggest, is precisely 
cognitive synergy, as discussed above. The cognitive synergy hypothesis, in its 
simplest form, states that human-level AGI intrinsically depends on the 
synergetic interaction of multiple components (for instance, as in CogPrime, 
multiple memory systems each supplied with its own learning process). In this 
hypothesis, for instance, it might be that there are 10 critical components 
required for a human-level AGI system. Having all 10 of them in place results 
in human-level AGI, but having only 8 of them in place results in having a 
dramatically impaired system—and maybe having only 6 or 7 of them in place 
results in a system that can hardly do anything at all. Of course, the reality 
is almost surely not as strict as the simplified example in the above paragraph 
suggests. No AGI theorist has really posited a list of 10 crisply defined 
subsystems and claimed them necessary and sufficient for AGI. We suspect there 
are many different routes to AGI, involving integration of different sorts of 
subsystems. However, if the cognitive synergy hypothesis is correct, then 
human-level AGI behaves roughly like the simplistic example in the prior 
paragraph suggests. Perhaps instead of using the 10 components, you could 
achieve human-level AGI with 7 components, but having only 5 of these 7 would 
yield drastically impaired functionality—etc. Or the point could be made 
without any decomposition into a finite set of components, using continuous 
probability distributions. To mathematically formalize the cognitive synergy 
hypothesis becomes complex, but here we’re only aiming for a qualitative 
argument. So for illustrative purposes, we’ll stick with the “10 components” 
example, just for communicative simplicity."
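The "10 components" thought experiment in the quote can be sketched as a toy model (my own illustration, not anything from the book): if component contributions multiply rather than add, capability collapses as soon as even a couple of critical components are missing, matching the "8 of 10 is dramatically impaired" claim.

```python
def capability(present: int, total: int = 10) -> float:
    """Fraction of full capability with `present` of `total` components.

    Each missing component multiplies capability by a small penalty,
    so losses compound instead of adding up. The penalty value 0.1 is
    an arbitrary choice for illustration.
    """
    missing = total - present
    penalty = 0.1  # each missing component keeps only 10% of capability
    return penalty ** missing

for n in (10, 8, 6):
    print(f"{n} components -> capability {capability(n):g}")
```

With all 10 components capability is 1.0; with 8 it is already down to about 1%, and with 6 it is near zero, which is the qualitative shape of the hypothesis.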

I liked the part where Ben talks about the car-man! And that patterns create 
new patterns - it's true. And how to know when we have understood 50% or 75% of 
AGI. And weighing whether all of Earth or just a few people should be allowed 
to have AGI, and realizing it's out of our control and already all over Earth, 
and that good usually wins. I have thought about all of these, too.

Ben says the intelligence of kids is a useful post-goal to help us design AGI 
and teach it. Yes, kids do grow new intelligence functions as they age, even 
though their native intelligence is hardwired. It may take them a year to build 
semantics and especially categories, so kids actually show less utilization, or 
appearance, of all their native hardwiring at first. Our post-goals should also 
include, e.g., prehistoric intelligence emulation: monkeys, lizards, cells, 
molecules, particles.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0d219e496eb3dd03-M9cd247a865dad611d56343e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription