On 16 May 2015 at 08:56, meekerdb <[email protected]> wrote:

> On 5/14/2015 7:24 PM, Bruce Kellett wrote:
>
>> LizR wrote:
>>
>>> On 15 May 2015 at 06:34, meekerdb <[email protected]> wrote:
>>>
>>>> I'm trying to understand what "counterfactual correctness" means in
>>>> the physical thought experiments.
>>>
>>> You and me both.
>>
>> Yes. When you think about it, 'counterfactual' means that the antecedent
>> is false. So Bruno's reference to the branching 'if A then B else C'
>> construction of a program is not really a counterfactual at all, since to
>> be a counterfactual A *must* be false. So the counterfactual construction
>> is 'A then C', where A happens to be false.
>>
>> The role of this in consciousness escapes me too.
>
> It comes in at the very beginning of his argument, but it's never made
> explicit. In the beginning, when one is asked to accept a digital
> prosthesis for a brain part, Bruno says almost everyone agrees that
> consciousness is realized by a certain class of computations. The
> alternative, as suggested by Searle for example, that consciousness
> depends not only on the activity of the brain but also on what the
> physical material is, seems like invoking magic. So we agree that
> consciousness depends on the program that's running, not the hardware
> it's running on. And implicit in this is that the program implements
> intelligence, the ability to respond differently to different external
> signals/environments. Bruno says that's what is meant by "computation",
> but whether that's entailed by the word or not seems like a semantic
> quibble. Whatever you call it, it's implicit in the idea of a digital
> brain prosthesis, and in the idea of strong AI, that the program
> instantiating consciousness must be able to respond differently to
> different inputs.
>
> But it doesn't have to respond differently to every different input, or
> to all logically possible inputs.
> It only needs to be able to respond to inputs within some range, as
> might occur in its environment - whether that environment is a whole
> world or just the other parts of the brain. So the digital prosthesis
> needs to provide that same functionality over the same domain as the
> brain parts it replaced. In which case it is "counterfactually correct".
> Right? It's a concept relative to a limited domain.
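To make the quoted distinction concrete, here is a minimal sketch (the function and variable names are my own, purely illustrative) of the difference between a program that is counterfactually correct over a limited domain and a re-run that merely replays one recorded history:

```python
# Sketch only - nothing here goes beyond the quoted 'if A then B else C'
# construction; all names are hypothetical.

def prosthesis(signal):
    """A branching program: 'if A then B else C'."""
    if signal == "A":
        return "B"
    return "C"

# A re-run with the same inputs, as in the MGA, can only follow the
# same path it followed before:
recorded_inputs = ["A", "A", "A"]
replay = [prosthesis(s) for s in recorded_inputs]
assert replay == [prosthesis(s) for s in recorded_inputs]

# Counterfactual correctness is the further property that, *had* the
# input been different (within the limited domain), the response would
# have differed too - even though that branch is never taken in the
# recorded run:
assert prosthesis("X") == "C"
```

A pure recording of the outputs ["B", "B", "B"] would reproduce the replay equally well, but would fail the last assertion - which is the whole point of the concept.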
Thanks, I see the point now - that the programme must be capable of
responding to a certain range of inputs seems fair enough - consciousness
responds to its surroundings, but has difficulty with novel inputs. (I
don't see how this affects the MGA, however, which limits the computation
in question to a re-run with the same inputs. Under those very specific,
very limited circumstances, the computation can only follow the same path
that it did previously.)

-- 
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

