Pei,

> This just shows the complexity of "the usual meaning of the word
> intelligence" --- many people do associate it with the ability to solve
> hard problems, but at the same time, many people (often the same
> people!) don't think a brute-force solution shows any intelligence.


I think this comes from the idea people have that things like intelligence
and creativity must derive from some very clever process.  A relatively
dumb process implemented on a mind-blowingly vast scale intuitively
doesn't seem like it could be sufficient.

I think the intelligent design movement gets its strength from this
intuition.
People think, "How could something as complex and amazing as the human
body and brain come out of not much more than random coin flips?!?!?"
They figure that the algorithm of evolution is just too simple and therefore
dumb to do something as amazing as coming up with the human brain.
Only something with superhuman intelligence could achieve such a thing.

The solution I'm proposing is that we consider that relatively simple rules,
when implemented on sufficiently vast scales, can be very intelligent.  From
this perspective, humans are indeed the product of intelligence, but the
intelligence isn't God's, it's a 4-billion-year, global-scale evolutionary
process.
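This intuition can be made concrete with a toy sketch (entirely my own illustration, not anything from Pei's system or anyone else's): about the dumbest search rule imaginable --- flip random bits, keep the result if it's no worse --- reliably solves a 100-bit optimization problem, and the same loop only becomes more capable as the scale grows:

```python
import random

# A deliberately dumb process: random mutation plus keep-if-no-worse
# selection (a "(1+1) evolutionary algorithm"), applied to a toy
# 100-bit problem: maximize the number of 1-bits in the genome.
random.seed(42)
N = 100
genome = [random.randint(0, 1) for _ in range(N)]

def fitness(g):
    return sum(g)  # count the 1-bits

steps = 0
while fitness(genome) < N:
    # Flip each bit independently with probability 1/N.
    child = [1 - b if random.random() < 1 / N else b for b in genome]
    if fitness(child) >= fitness(genome):
        genome = child  # keep the child if it's no worse
    steps += 1

print("solved in", steps, "steps")
```

Nothing in the loop is clever; all of the apparent problem-solving ability comes from blind variation and selection repeated many times, which is exactly the sense in which a simple rule at vast scale can look intelligent.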


> When "intelligence" is applied to humans, there is no problem, since few
> hard problems can be solved by the human mind by brute force.


Maybe humans are a kind of brute-force algorithm?  Perhaps the
important information processing that takes place in neurons etc.
is not all that complex, and the amazing power of the system largely
comes from its gigantic scale?



> At this point, you see "capability" as more essential, while
> I see "adaptivity" as more essential.


Yes, I take capability as primary.  However, adaptivity is implied
by the fact that being adaptable makes a system more capable.


> today, conventional computers
> solve many problems better than the human mind, but I don't take that
> as a reason to consider them more intelligent.


The reason for that, I believe, is that the set of problems that they
can solve is far too narrow.  If they were able to solve a very wide range
of problems, through brute force or otherwise, I would be happy to call
them intelligent.  I suspect that most people, when faced with a machine
that could solve amazingly difficult problems, pass a Turing test, etc.,
would refer to the machine as being intelligent.  They wouldn't really care
if internally it was brute-forcing things by running some weird quantum XYZ
system that was doing 10^10^1000000 calculations per second.  They
would simply see that the machine seemed to be much smarter than
themselves and thus would say it was intelligent.



> for most people, that will
> happen only when my system is producing results that they consider
> impressive, which will not happen soon.


Speaking of which, you've been working on NARS for 15 years!
As the theory of NARS is not all that complex (at least that was my
impression after reading your PhD thesis and a few other papers),
what's the hold-up?  Even working part-time, I would have thought
that 15 years would have been enough to complete the system
and demonstrate its performance.

In Ben's case I understand that psynet/webmind/novamente have
all been fairly different from each other, and complex.  So I understand
why it takes so long.  But NARS seems to be much simpler, and
the design seems more stable over time?


> > It seems to me that what you are defining would be better termed
> > "intelligence efficiency" rather than "intelligence".
>
> What if I suggest renaming your notion "universal problem solver"?  ;-)


To tell the truth, I wouldn't really mind too much!  After all, once a
sufficiently powerful all-purpose problem solver exists, I'll simply ask
it to work out what the best way to define intelligence is and then
ask it to build a machine according to this definition.

See, even if my definition is wrong, a solution to my definition would
still succeed in solving the problem.  :-)


> but I really don't see how you can put the current AGI projects, which
> are as diverse as one can imagine, into the framework you are proposing.  If
> you simply say that the ones that don't fit are uninteresting to
> you, the others can say the same about your framework, right?


Sure, they might not want to build something that is able to achieve an
extremely wide range of goals in an extremely wide range of environments.

All I'm saying is that this is something that is very interesting to me,
and that it also seems like a pretty good definition of "intelligence".

Shane

-----
This list is sponsored by AGIRI: http://www.agiri.org/email