On Jul 23, 6:36 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 12:02 pm, 1Z <peterdjo...@yahoo.com> wrote:
>
> > On Jul 23, 1:27 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > > On Jul 23, 12:14 am, meekerdb <meeke...@verizon.net> wrote:
>
> > > > On 7/22/2011 8:52 PM, Craig Weinberg wrote:
>
> > > > Where does the badness come from?  The afferent neurons?
>
> > > It comes from the diminishing number of real neurons participating in
> > > the network, or, more likely, the unfavorable ratio of neurons to
> > > pegs.
>
> > I.e., the replacements are not functionally equivalent, even though
> > they are stipulated as being equivalent.
>
> No. You're equating the function of the network with the identity of
> the participants. I can have an incoherent conversation over a crystal
> clear phone system if I am trying to talk to people who are no longer
> there, but have only voicemail. Even an elaborate voicemail system
> which operates at the phonetic level to generate AI responses in any
> language is not necessarily going to be able to answer my questions:

I have no idea what any of that has to do with functional equivalence.

> 'Hey Freddie28283457701, did you get the glutamate I ordered yet?'
> 'Thank you for calling. Your call is important to us. Please stay on
> the line'. 'Wow that really sounds just like you Freddie, now where is
> the damn glutamate?'
>
> > Identical in all relevant aspects is good enough. That's a necessary
> > truth.
>
> It's not possible to know what the relevant aspects are.

Says who?

> What are the relevant aspects of yellow?
>
> > It might be the case that all relevant aspects are all aspects (IOW,
> > holism is true and functionalism is false). That isn't a necessary
> > truth either way. It needs to be argued on the basis of some sort of
> > evidence.
>
> Not necessarily all aspects, but my hypothesis is that you need
> material technologies to simulate more than the top-level semantic
> i/o. Water seems to be important in distinguishing that which can
> live and that which cannot. I might start there.

That still needs to be argued.


> > > To set the equivalence between the natural and artificial neuron in
> > > advance is to load the question.
>
> > and vice versa.
>
> The burden of proof is on the hypothetical artificial neuron to prove
> it's equivalent.

It's hypothesised as equivalent.

> The natural neuron doesn't have to prove that it's
> nothing more than the artificial one, since we know for a fact that our
> entire world is somehow produced in the brain without any external
> evidence whatsoever of that world.



> > > It's not that they have to *be* biological, it's that the simulation
> > > has to use materials which can honor the biological level of
> > > intelligence as well as the neurological.
>
> > Why? If what you have is a functional black
> > box ITFP, then it doesn't matter what is inside
> > the black box.
>
> It does if you ARE the black box.

If you can be replaced, preserving functionality, maybe
you don't matter so much.
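
To put the functionalist point in software terms, here is a minimal
sketch (Python, with made-up names; an analogy only, not a model of a
real neuron). If two implementations present the same input/output
mapping behind the same interface, nothing that only calls the
interface can tell them apart:

    # Two implementations, hypothetical names, same observable behaviour.
    class BiologicalNeuron:
        def fire(self, stimulus):
            # stands in for the wet chemistry; reduced to a threshold here
            return stimulus > 0.5

    class ArtificialNeuron:
        def fire(self, stimulus):
            # different substrate, same input/output mapping
            return stimulus > 0.5

    def network_step(neuron, stimulus):
        # the caller sees only fire(); it cannot tell which
        # implementation sits behind the interface
        return neuron.fire(stimulus)

    # identical observable behaviour for identical inputs
    assert network_step(BiologicalNeuron(), 0.7) == network_step(ArtificialNeuron(), 0.7)

Whether a neuron really is such a box, i.e. whether all its relevant
aspects are captured by the interface, is of course exactly what is in
dispute.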

> Craig
> http://s33light.org

