Mike Tintner wrote:

And that's the same mistake people are making with AGI generally - no one
has a model of what general intelligence involves, or of the kind of
problems it must solve - what it actually DOES - and everyone has left that
till later, and is instead busy with all the technical programming that they
find exciting - with the "how it works" side - without knowing whether
anything they're doing is really necessary or relevant.

-------------------------------------------

Some people do have models, but it is not clear whether those models are
right or what their computational costs are. In such cases it is useful to
write the code and see what it can do and where its limits lie.

Intelligence is a very special problem: there is no well-defined
input-output relation. For any problem that can be specified by a table
mapping inputs to outputs, there is a trivial program which solves it: the
program looks up the input in the table and returns the corresponding
output. In this sense, every well-defined problem can be solved by a
program that is not intelligent.
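The trivial table-lookup program described above can be sketched in a few lines. The particular problem (squaring small integers) is an arbitrary illustration, not from the original post; the point is that the program solves its fully specified problem perfectly without any computation that deserves the name "intelligence":

```python
# Complete specification of a toy problem as an input-output table
# (squaring small integers -- an arbitrary example for illustration).
table = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

def trivial_solver(x):
    """Solve the problem by pure lookup: no computation, no understanding."""
    return table[x]

print(trivial_solver(3))  # -> 9
```

For every input in the specification the program is correct by construction, which is exactly why correctness on a finite table cannot be a criterion of intelligence.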

If we accept that intelligence can never be specified by a complete,
well-defined input-output relation, then intelligence must be a PROPERTY of
the algorithm that behaves intelligently. GENERAL intelligence, especially,
cannot be defined by black-box behavior (i.e., a complete input-output
relation); it is a white-box problem. The Turing test is a weak test: if I
ask n questions and obtain n answers that seem human-like, then a table of
these questions and answers would do the same. After the Turing test I can
never be sure that the human-like behavior holds for questions n+1, n+2,
and so on. Therefore we must know what is going on inside the machine in
order to be sure that it acts intelligently across many different
situations. The Turing test was invented because we still have no complete
model of the necessary and sufficient conditions for intelligence.
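The table objection to the Turing test can be made concrete with a canned-answer "chatbot". The questions and answers below are hypothetical examples, not from the original post; for the n questions actually stored it is indistinguishable from a considered answer, but question n+1 immediately exposes the table:

```python
# A "chatbot" that is nothing but a question-answer table
# (hypothetical Q/A pairs, chosen only for illustration).
canned = {
    "How are you?": "Fine, thanks. And you?",
    "What is 2+2?": "Four, of course.",
    "Do you like music?": "Yes, especially jazz.",
}

def chatbot(question):
    # Human-like only inside the table; outside it, the table is silent.
    return canned.get(question)

assert chatbot("What is 2+2?") == "Four, of course."   # passes question n
assert chatbot("What is 3+3?") is None                 # fails question n+1
```

Any finite transcript of a Turing test can be reproduced by such a table, which is why the transcript alone can never certify intelligence.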


If you define the universe as a set of objects with relations among them
and dynamic laws, then an important condition for a generally intelligent
system is the ability to create representations of all kinds of objects,
all kinds of relations, and all kinds of dynamic laws that can be inferred
from the sensory inputs the AGI system perceives. You can see that we
cannot give a table of input-output pairs for this problem. We must define
a general mechanism which extracts the patterns from the input stream and
creates the representations. This is already a white-box problem, but it is
a problem that can be solved, and algorithms can, I suppose, be proven to
solve it.
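A minimal sketch of what "extracting a dynamic law from the input stream" could mean, under a deliberately toy assumption: the stream is a sequence of observed positions governed by a constant-velocity law x[t+1] = x[t] + v. The law and the data are illustrative assumptions, not a mechanism proposed in the original post:

```python
# Sensory input stream: observed positions of a moving object
# (toy data, assumed to follow x[t+1] = x[t] + v for some unknown v).
observations = [2.0, 5.0, 8.0, 11.0]

# Infer the law's single parameter from pairwise differences.
diffs = [b - a for a, b in zip(observations, observations[1:])]
v = sum(diffs) / len(diffs)

def predict(x):
    """Apply the learned dynamic law to a state never seen before."""
    return x + v

print(predict(11.0))  # predicts 14.0 under the inferred law
```

The contrast with the lookup-table program is the point: here the system represents a law that generalizes beyond its finite observations, rather than memorizing input-output pairs.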

The problem of consciousness is hard not only because of unknown mechanisms
in the brain, but also because it requires finding a DEFINITION of the
necessary conditions for consciousness.
I think consciousness without intelligence is not possible. Intelligence
without consciousness is possible. But I am not sure whether GENERAL
intelligence without consciousness is possible. In any case, consciousness
is even more of a white-box problem than intelligence.



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/