On Wed, Feb 18, 2015 at 12:09 AM, Mike Archbold <[email protected]> wrote:
> I think under this approach, it "bans" work on trying to actually
> figure out the answers to these tough questions, and instead places
> emphasis on replicating the means (mechanics of the brain) that
> generates whatever it is we call consciousness.  I mean, under this
> school of thought, if we don't know how consciousness solves hard
> problems, so what?  As long as it works by copying the
> physics/mechanics/etc of the brain (that being the obviously gigantic
> challenge, of course) that is all that is required.

Consciousness is the feelings (reinforcement signals, mostly positive)
that you associate with sensory perception and thoughts (recalled
memories) as they are written into episodic memory (memory associated
with a time or place). It only seems mysterious because reinforcement
signals alter your beliefs. Your brain works this way because doing so
increases your reproductive fitness. Even though you can't objectively
believe what I just stated, you still want your consciousness to
continue by not dying.
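To make the model above concrete, here is a toy sketch (the percepts,
weights, and function names are invented for illustration, not taken
from any real system): each percept is written into episodic memory
together with the reinforcement signal active at the time, and
positively reinforced episodes dominate later recall.

```python
import random

random.seed(0)

# Episodic memory: a list of (percept, reinforcement) entries.
# Illustrative only -- the percepts and weights are made up.
episodic_memory = []

def perceive(percept, reinforcement):
    """Write a percept into episodic memory tagged with the
    reinforcement signal ("feeling") active at the time."""
    episodic_memory.append((percept, reinforcement))

def recall():
    """Recall one episode, biased toward positive reinforcement."""
    weights = [max(r, 0.01) for _, r in episodic_memory]
    return random.choices(episodic_memory, weights=weights, k=1)[0]

perceive("sunny walk", 0.9)
perceive("stubbed toe", 0.1)
perceive("good meal", 0.8)

# Sample recall many times; pleasant episodes come up far more often.
counts = {}
for _ in range(1000):
    percept, _ = recall()
    counts[percept] = counts.get(percept, 0) + 1
```

In this sketch the "feeling" is just a scalar stored alongside the
percept; the point is only that reinforcement tagging at write time
shapes what gets recalled later, which is the mechanism claimed above.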

You don't need to model consciousness to solve most AI problems like
vision, language, or robotics. You do need to model belief in
consciousness as well as other types of reinforcement learning (such
as beliefs in free will and identity) in order to model or predict
human behavior. It is not hard to do that once you understand where
these illusions come from.

It is a distraction to think that you have to replicate consciousness
to solve AI. It is like thinking that birds fly by some magic that you
have to replicate in order to build airplanes.

-- 
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now