Mike Tintner wrote:
> What is totally missing is a philosophical and semiotic perspective. A
> philosopher looks at things very differently and asks essentially: how much
> information can we get about a given subject (and the world generally)? A
> semioticist asks: how much and what kinds of information about any given
> subject (or the world generally) can different forms of representation give
> us? (A verbal description, photo, movie, statue will all give us different
> forms of info and show different dimensions of a subject.)
>
> The AI-er asks how much information (about the world) can I and my machine
> handle? The philosopher: how much information about the world can we
> actually *get*? How knowable is the world? And what do we have to do to
> get and present knowledge about the world?

I think the typical AIer asks for clever algorithms and source code.
Some AIers see their problem as part of control theory; others see
their problem as the problem of emulating the biological brain.

But very few ask about the regularities of our world. Maybe that is because
they think this should be done by the intelligent software we want to build.
But I am convinced that goal-directed engineering of AGI is only possible if
we model the most basic regularities of our universe in the AGI algorithms.

Humans gain experience from single states and can generalize the new
knowledge to huge domains. Most AIers ask only: How do we generalize? But
the answer depends on a prior question: Why is it even possible that we can
generalize? And this question is rarely considered.

This was the point I wanted to make: our universe does not only help life to
evolve, it also seems to be very friendly to intelligence, because it is
full of regularities at all levels, from microcosm to macrocosm.
And useful intelligence is only possible in a world with regularities,
because only with regularities can you avoid searching through trillions of
states.
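A toy sketch of this point (my own illustration; the linear "world law" and
the numbers are invented for the example): in a world of 2**40 states, an
agent that knows the *form* of a regularity can pin it down from a couple of
observations, instead of visiting states one by one.

```python
def outcome(state: int) -> int:
    """Hidden regularity of this toy world: the outcome is linear in the state."""
    return 3 * state + 7

# Exploit the regularity: two observations determine the whole law.
s1, s2 = 10, 1000
slope = (outcome(s2) - outcome(s1)) // (s2 - s1)
intercept = outcome(s1) - slope * s1

# Now any of the 2**40 states can be predicted without ever visiting it.
assert slope * (2**39 + 123) + intercept == outcome(2**39 + 123)
print(f"learned law: outcome = {slope}*state + {intercept}")
```

Without the assumption of such a regularity, nothing short of enumerating
the state space would do.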

For example: tomorrow I can talk with a person I have never seen before,
just from my past social experiences with other people. Why is this
possible?
Another example: I see a mosquito for the first time in my life. I hear its
sound. I see how it lands on my right arm, and I see it flying away. Finally
my skin becomes red at that spot and I feel a little pain there.
Why can I conclude that the mosquito is the reason for the pain? Why can I
know that the same could happen if the mosquito landed on my shoulder? Why
do I know that the room I am in is not important for the phenomenon of the
pain?
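A minimal sketch of the kind of generalization the mosquito example asks
about (my own illustration; the attribute names, observations, and the crude
relevance test are all invented): from a few observations that vary in
incidental attributes, induce which attributes actually matter for the
effect.

```python
# Each observation: (attributes, did-the-itch-occur). Invented toy data.
observations = [
    ({"insect": "mosquito", "place": "right arm", "room": "kitchen"}, True),
    ({"insect": "mosquito", "place": "shoulder",  "room": "garden"},  True),
    ({"insect": "fly",      "place": "right arm", "room": "kitchen"}, False),
]

positives = [attrs for attrs, effect in observations if effect]
negatives = [attrs for attrs, effect in observations if not effect]

def relevant(attr: str) -> bool:
    """Crude relevance test: an attribute matters if all positive examples
    agree on its value and that value distinguishes some negative example."""
    values = {obs[attr] for obs in positives}
    if len(values) > 1:
        return False  # varies freely among positives -> treated as irrelevant
    v = values.pop()
    return any(obs[attr] != v for obs in negatives)

for attr in observations[0][0]:
    print(attr, relevant(attr))
# "insect" comes out relevant; "place" and "room" do not.
```

The point is not this particular test, which is far too crude, but that any
such inference presupposes a world regular enough that "place" and "room"
*can* safely be ignored.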

I think an AIer must ask such questions.
And he has to see them from both perspectives: on the one hand, that of the
software engineer who designs the intelligent algorithm; on the other, that
of the scientist who thinks about nature.


