On Tue, Jun 9, 2020 at 2:15 PM 'Brent Meeker' via Everything List <
[email protected]> wrote:

>
>
> On 6/9/2020 10:08 AM, Jason Resch wrote:
> > For the present discussion/question, I want to ignore the testable
> > implications of computationalism on physical law, and instead focus on
> > the following idea:
> >
> > "How can we know if a robot is conscious?"
> >
> > Let's say there are two brains, one biological and one an exact
> > computational emulation, meaning exact functional equivalence. Then
> > let's say we can exactly control sensory input and perfectly monitor
> > motor control outputs between the two brains.
> >
> > Given that computationalism implies functional equivalence, then
> > identical inputs yield identical internal behavior (nerve activations,
> > etc.) and outputs, in terms of muscle movement, facial expressions,
> > and speech.
> >
> > If we stimulate nerves in the person's back to cause pain, and ask
> > them both to describe the pain, both will speak identical sentences.
> > Both will say it hurts when asked, and if asked to write a paragraph
> > describing the pain, will provide identical accounts.
> >
> > Does the definition of functional equivalence mean that any scientific
> > objective third-person analysis or test is doomed to fail to find any
> > distinction in behaviors, and thus necessarily fails in its ability to
> > disprove consciousness in the functionally equivalent robot mind?
> >
> > Is computationalism as far as science can go on a theory of mind
> > before it reaches this testing roadblock?
>
> If it acts conscious, then it is conscious.
>

That is the assumption I and most others operate under.

But every now and then you encounter a biological naturalist, or someone who
holds that a brain must be made of brain cells to actually be conscious.

The real point of my e-mail is to ask the question: can any test, even in
principle, disprove computationalism as a philosophy of mind, given that it
is defined in terms of functional equivalence?



>
> But I think science/technology can go a lot further.  It can look at the
> information flow: where memory is, how it is formed, how it is
> accessed, and whether this matters in the action of the entity.  It
> can look at the decision processes.  Are there separate competing
> modules (as Dennett hypothesizes) or is there a global workspace, and
> again, does it make a difference?  What does it take to make the entity
> act happy, sad, thoughtful, bored, etc.?


I agree we can look at more than just the outputs.

Jason
