On Monday, October 1, 2012 1:36:24 AM UTC-4, stathisp wrote:
> On Mon, Oct 1, 2012 at 1:45 AM, Craig Weinberg <whats...@gmail.com> wrote: 
> >> I don't doubt that initial experiments would not yield ideal results. 
> >> Neural prostheses would initially be used for people with 
> >> disabilities. Cochlear implants are better than being deaf, but not as 
> >> good as normal hearing. But technology keeps getting better while the 
> >> human body stays more or less static, so at some point technology will 
> >> match and then exceed it. At the very least, there is no theoretical 
> >> reason why it should not. 
> > 
> > I'm all for neural mods and implants. Augmenting and repairing the 
> > brain = great; replacing the brain = theoretically viable only in 
> > theories rooted in blind physicalism, in which consciousness is 
> > inconceivable to begin with. 
> You're suggesting that even if one implant works as well as the 
> original, multiple implants would not. Is there a critical replacement 
> limit, 20% you feel normal but 21% you don't? How have you arrived at 
> this insight? 

If you have one brain tumor, you may still function. With multiple tumors, 
you might not fare as well. Tumors function fine on some levels (they are 
living cells successfully dividing) but not on others (they fail to stop 
dividing, perhaps because there is a diminished identification with the 
sense of the organ as a whole).

Because we are 100% ignorant of any objective ontology of consciousness, 
there is no reason to assume that an implant could possibly function well 
enough to act as a replacement on all levels, unless perhaps the implant 
were made of one's own stem cells (probably the best avenue to 

PS Someone posted a good AI-related quote today that sort of applies: 

 "I think the point at which a computer program can be considered 
intelligent is 

the point at which — given an error — you, as the programmer, can say *it*made 
a mistake."

If an implanted device doesn't make mistakes, it isn't human intelligence. 
If it does make mistakes, it has to make the kinds of mistakes that humans 
can tolerate: the mistakes have to be sourced in the same personal agendas 
of living beings.



> -- 
> Stathis Papaioannou 

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.