On 23/01/2008, Günther Greindl <[EMAIL PROTECTED]> wrote:
> I find the theory very compelling, as I always found the functionalistic
> AI approach a bit lacking whereas I am a full endorser of a
> materialistic/monist approach (and I believe strong AI is feasible). EM
> fields arising through the organization of matter and its _dynamics_
> seems to me very plausible - at least to start significant research in
> this direction.

I'd agree that the functionalist AI approach doesn't explain
consciousness well. But with the definitions of intelligence that have
been mooted on this list, consciousness might not be needed for the
forms of AI that we are interested in. We might be happy with zombies.

My own approach to making computers more brain-like is, at the moment,
focused entirely on the sub-conscious level: trying to find a computer
system that allows changes analogous to those that occur in London
cabbies' brains* (autonomously devoting more resources to important
problems).
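To make the analogy concrete, here is a minimal, purely illustrative sketch of that kind of mechanism: a pool that shifts capacity toward whichever tasks are exercised most, the way the cabbies' hippocampi grew with navigational demand. All the names here (`AdaptivePool`, `record_use`) are hypothetical, not any real system's API.

```python
# Hypothetical sketch: a fixed resource pool that autonomously shifts
# capacity toward the tasks that are used most often. Illustrative only.

class AdaptivePool:
    def __init__(self, tasks, capacity=100):
        self.capacity = capacity
        # Start every task with an equal, non-zero weight.
        self.usage = {t: 1 for t in tasks}

    def record_use(self, task, amount=1):
        """Tasks that are exercised more accumulate weight."""
        self.usage[task] += amount

    def allocation(self):
        """Share the fixed capacity in proportion to accumulated use."""
        total = sum(self.usage.values())
        return {t: self.capacity * w / total for t, w in self.usage.items()}

pool = AdaptivePool(["navigation", "vision", "planning"])
for _ in range(8):
    pool.record_use("navigation")  # heavy demand on one task
alloc = pool.allocation()
# "navigation" now receives the largest share of the pool
```

Nothing here requires consciousness, which is the point: this sort of self-reorganisation lives entirely at the sub-conscious level.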

I suppose what I am trying to say is that there is a lot of scope for
making computers better at problem solving, reasoning and adapting
before we hit the consciousness problem. If this is correct and the
types/shapes of EM fields are important, I'm not sure we will have
much scope for creating human-like consciousnesses apart from the
old-fashioned way and biotech.

 Will Pearson

*http://news.bbc.co.uk/1/hi/sci/tech/677048.stm

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=89123294-b3e698
