Greetings Jim,
Have some questions and comments about the recent email...
(thread used to be: Scholarpedia article on AGI published)
On generating possibilities...
Complexity seems to be the issue that lies underneath much of your
effort. And there is no doubt that life has its complexity. I don't
recall (I didn't search past emails) any explanation from you that ties
this complexity to an AGI implementation.
I personally have an issue with complexity, and that is because it is
overrated. In many writings there is an implication that AGI-type
intelligence will emerge through the construction of a unit that tries
lots of things, gets feedback, and thereby learns what produces the
best reward. Sort of an evolutionary intelligence. Do we really
believe that? I don't; do you?
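Just to make that trial-and-error picture concrete, here is a rough
Python sketch of what I understand the "try things, get feedback, keep
what is rewarded" view to be. The actions and the reward function are
invented for illustration; they aren't anyone's actual proposal.

import random

# The "unit" can only try actions and track which ones get rewarded.
actions = ["babble", "cry", "smile", "wave"]
scores = {a: 0.0 for a in actions}   # running estimate of each action's reward
counts = {a: 0 for a in actions}

def reward(action):
    # Stand-in for the environment's feedback; a real system would observe this.
    return 1.0 if action == "smile" else 0.0

for trial in range(1000):
    if random.random() < 0.1:                        # occasionally explore at random
        action = random.choice(actions)
    else:                                            # otherwise repeat the best so far
        action = max(actions, key=lambda a: scores[a])
    r = reward(action)
    counts[action] += 1
    scores[action] += (r - scores[action]) / counts[action]   # running average

print(scores)   # after enough trials, "smile" comes out on top

It works here only because the action set is four items wide; nothing
about the scheme suggests how it scales to anything resembling a mind.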
I've seen references to a "search space," and the implication is that the
intelligent unit is going to find an efficient way to navigate this huge
space of possible "stuff." Am I off track in assuming that this is the
connection with trying to settle the P=NP question, so that the search
space could be traversed more efficiently? Is this how you see a better
intelligence being created?
Perhaps a simple example will illustrate my contrasting view of the
development of intelligence. Consider a newborn baby, or imagine you
create a computer baby that has a sound-generating capability. One
could program the computer to generate random sounds and then watch for
feedback at the keyboard or through a listening device. Good luck with
that. There are several ways one might alter the experiment and produce
suggestive results. But let's save time, because we know that a human
intelligence, the baby, does not "simply" try lots of sounds and thereby
become a speaking person. Instead, the baby learns from lots of feedback
and, importantly, by mimicking the sounds that it hears. What does this
prove?
First, it shows that the baby isn't developing an intelligence by
randomly trying every combination. Rather, the baby is "adopting"
behaviors that produce positive feedback from the adult who is nurturing
the baby. Enough about babies...
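If it helps to see the contrast in code, here is a toy Python sketch:
random generation has to cope with a space of sound sequences that grows
exponentially, while imitation starts from what the caregiver actually
said and only has to correct its copy. The phoneme set, the target word,
and the correction rule are all made up for illustration.

import itertools
import random

phonemes = list("abdgkmnpst")         # a made-up ten-phoneme inventory
target = "mama"                       # the word the caregiver actually says

# 1) Random search: the space of 4-phoneme strings is already 10^4 here,
#    and it grows exponentially with word length.
space = ["".join(p) for p in itertools.product(phonemes, repeat=len(target))]
print("search space size:", len(space))

# 2) Mimicry: the baby starts from an imperfect copy of what it heard and
#    only needs feedback to fix the mismatched position.
attempt = list(target)
attempt[random.randrange(len(attempt))] = random.choice(phonemes)  # imperfect copy
while "".join(attempt) != target:
    i = random.randrange(len(attempt))
    attempt[i] = target[i]            # caregiver feedback corrects that position
print("learned by imitation:", "".join(attempt))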
The computer intelligence will not evolve from nothing; far from it. It
will need a core set of behaviors to work with. "Thousands of different
approaches" could refer to the thousands of "cores" that are eventually
developed by different researchers. The core will be a big factor in
how the intelligent unit goes about acquiring behaviors to add to its
core. And, as you might suppose, the unit will be influenced by its
surroundings; the core isn't everything.
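Here is roughly what I have in mind, as a toy Python sketch. The class,
the core behavior, and the caregiver's behaviors are all hypothetical
names, not a design; the point is only that the unit starts with a core
and grows by adopting what a good source demonstrates.

# Toy sketch: an agent starts with a built-in core of behaviors and adds to it
# by adopting behaviors demonstrated by a source, not by blind experiment.
class Agent:
    def __init__(self, core):
        self.behaviors = dict(core)            # the built-in core

    def adopt(self, source):
        # Copy any behavior the source demonstrates that the agent lacks.
        for name, behavior in source.items():
            self.behaviors.setdefault(name, behavior)

core = {"orient_to_sound": lambda: "turns toward a voice"}
caregiver = {"say_mama": lambda: "says 'mama'",
             "wave": lambda: "waves back"}

baby = Agent(core)
baby.adopt(caregiver)
print(sorted(baby.behaviors))    # the core behavior plus the adopted ones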
To net it out... It isn't the "learning" by trial and error that is
significant; it is the adoption process that dwarfs the "scientific"
gathering of truth. The baby AGI doesn't need lots of experiments; it
needs good sources to guide its adoption process.
Stan
On 12/02/2015 07:36 AM, Jim Bromer wrote:
...
It was useful to me only because it showed me how I might generate
variations on the abstractions underlying some purpose. So my
methodology might be used in an actual AI program to generate
possibilities, or more precisely, generate the bases for the
possibilities. It would not stand as a basis for AI itself.
Solomonoff's methods do not stand as a basis for AI. (Years from now,
when there are thousands of different approaches to AI and AGI, we
might still disagree about this.) But it should be clear that I am
saying that Solomonoff's universal prediction methods are not actually
standards for viable AI. Compression methods are not a viable basis
for AI either. They could be used as part of the process but it is
pretty far-fetched to say that any computational technique that could
be used in an AI or AGI program could be used as a basis for AI.
I just wanted to follow through since you wrote a reasonable reply to
my first reply.