AW: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
The limitations that follow from Gödel's completeness/incompleteness results are a subset of the
much stronger limitations of finite automata.

If you want to build a spaceship to go to Mars, it is of no practical
relevance to ask whether it is theoretically possible to travel through
wormholes.

I think this comparison is adequate for evaluating the role of Gödel's theorem
for AGI.

- Matthias




Abram Demski [mailto:[EMAIL PROTECTED]] wrote:


I agree with your point in this context, but I think you also mean to
imply that Gödel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Gödel's incompleteness theorem tells us about important
limitations of the logical approach to AI (and, indeed, of any approach
that can be implemented on normal computers). It *has*, however, been
overused and abused throughout the years... which is one reason I
jumped on Mark...
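
For reference, one standard formulation of the theorem being invoked (the statement below is the textbook version, not Abram's own wording):

For any consistent, effectively axiomatizable theory $T$ that interprets basic arithmetic, there is a sentence $G_T$ such that
\[ T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T . \]

So a reasoner whose beliefs are exactly the theorems of one such fixed $T$ must leave some true arithmetic sentences undecided, which is the kind of limitation at issue for purely logical approaches.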

--Abram






AW: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
Mark Waser wrote:

Can we get a listing of what you believe these limitations are and whether 
or not you believe that they apply to humans?

I believe that humans are constrained by *all* the limits of finite automata
yet are general intelligences, so I'm not sure of your point.


It is also my opinion that humans are constrained by *all* the limits of
finite automata.
But I do not agree that most humans can be scientists. If being a scientist is
necessary for general intelligence, then most humans are not general intelligences.

It depends on your definition of general intelligence.

Surely there are rules (i.e., algorithms) for being a scientist. If there were
not, AGI would not be possible and there would be no scientists at all.

But you cannot separate the rules (the algorithm) from the evaluation of whether
a human or a machine is intelligent. Intelligence comes essentially from these
rules and from a lot of data.

The mere ability to execute arbitrary rules does not imply general intelligence.
Your computer has this ability, but without the rules it is not intelligent
at all.

- Matthias







AW: [agi] constructivist issues

2008-10-24 Thread Eric Baum

 You have not convinced me that you can do anything a computer can't do.
 And, using language or math, you never will -- because any finite set of
 symbols you can utter could also be uttered by some computational system.
 -- Ben G

I have the sense that this argument is not airtight, because I can
imagine a zero-knowledge proof that you can do something a computer
can't do.

Any finite set of symbols you utter *could*, of course, be uttered by
some computational system, but if the symbols are generated in response to
queries that are not known in advance, it might be arbitrarily unlikely
that they *would* be uttered by any particular computational system.

For example, to make this concrete and airtight, I can add a time element.
Say I compute offline the answers to a large number of problems that, if one
were to solve them by computation, provably could only be solved by extremely
long sequential computations, each longer than any sequential computation that
a computer that could possibly be built out of the matter in your brain could
complete in an hour. I then present you these problems, and you answer one of
them in half an hour. At this point, I am going, I think, to be persuaded that
you are doing something that cannot be captured by a Turing machine.
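
To make the experiment concrete, here is a minimal sketch in Python of the kind of trial Eric describes. It uses iterated hashing as a stand-in for a "provably sequential" problem; that repeated hashing cannot be shortcut is an assumption made only for illustration, and the function names are hypothetical, not anything from the thread.

import hashlib
import os
import time

def make_puzzle(n_steps):
    # Offline phase: pick a random challenge and compute the answer by
    # applying SHA-256 n_steps times in sequence (no known shortcut).
    challenge = os.urandom(32)
    x = challenge
    for _ in range(n_steps):
        x = hashlib.sha256(x).digest()
    return challenge, x

def run_trial(challenge, answer, respond, time_limit_s):
    # Online phase: hand the challenge to the subject and time the reply.
    # A correct answer returned well inside the time an honest sequential
    # computation needs would be the surprising outcome Eric has in mind.
    start = time.time()
    reply = respond(challenge)
    elapsed = time.time() - start
    return reply == answer and elapsed < time_limit_s

# A toy trial; a real one would pick n_steps so that the sequential
# computation provably exceeds the response window by a wide margin.
challenge, answer = make_puzzle(n_steps=100_000)
print(run_trial(challenge, answer, respond=lambda c: b"a guess", time_limit_s=1800.0))

The force of the argument still rests on the problems being provably sequential; iterated hashing only approximates that property.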

Not that I believe, of course, that you can do anything a computer
can't do. I'm just saying, the above argument is not a proof that,
if you could, it could not be demonstrated.

