On 4/16/2015 2:12 AM, LizR wrote:
On 16 April 2015 at 14:23, Bruce Kellett <bhkell...@optusnet.com.au> wrote:

    LizR wrote:

        In Bruno's "COMP 2013" paper he says
             The notion of the first person, or the conscious knower, admits the
            simplest possible definition: it is provided by access to basic
            memories. Consciousness, despite its non-definability, facilitates
            the train of reasoning in humans; but we justifiably might have used
            digital machines instead.

        Given this, in my opinion there is no problem with what is meant by step 3.
        Bruno makes no attempt to define personal identity beyond the contents of
        memories. Whether one "really" survives being teleported, or falling asleep
        and waking up the next day, isn't relevant. "Moscow man" is just the guy who
        remembers being Helsinki man, then finding himself in Moscow (for example).
        Hence Helsinki man can't predict any first person experience, only what will
        happen from a 3p view. Or if he didn't know duplication was involved, he
        would assume that he had a 50-50 chance of ending up in M or W.


    But this is a rather self-serving definition -- designed to fit in with the
    conclusion he wants to draw. We are entering the realm of the Humpty-Dumpty
    dictionary -- words no longer have their ordinary, everyday meaning.


In what way is it self-serving? It seems quite reasonable to say that a person is their memories, at least in many important senses (Brent says it quite often, and he isn't a huge fan of comp).

As a side issue, I think it's the same as - or similar to - the definition that Everett used? I haven't read his paper for a while, but I seem to remember he used something like this. After all, what else can you really use apart from memory if you want to study how identity persists over time within a given theory of physics? (For contrast, consider amnesia cases or the guy in "Memento".)

That's an interesting question (although Bruno always says it's not relevant to his argument). Having a coherent, narrative memory seems like an obvious desideratum. But as you point out, the guy in "Memento" stays the same person even though he can't form new long-term memories. So another possibility is what we would call in AI "running the same program". This seems to be what is captured by "counter-factual correctness". Of course any human-level AI will learn, and so there will be divergence; but in a sense one could say two instances of a program instantiated the same "person" with different memories. It would correspond to having the same character and predilections. For example, we might build several intelligent Mars rovers that land in different places. They would start with the same AI and memories (as installed at JPL), but as they learned and adapted they would diverge.
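
To make that concrete, here is a minimal toy sketch in Python (my own illustration with hypothetical names, not anyone's actual rover software): two copies of the same program start from identical memories, receive different inputs, and end up as the same "character" with diverged memories.

import copy

class Agent:
    """Toy 'person as program': fixed code, mutable memory."""
    def __init__(self, memories):
        self.memories = list(memories)  # installed before launch

    def perceive(self, event):
        self.memories.append(event)     # learning = accumulating memories

    def decide(self):
        # Same decision rule ("character") in every copy; the output
        # differs only because the accumulated memories differ.
        return "explore" if "crater" in self.memories[-1] else "dig"

shared = ["installed at JPL"]
rover_a = Agent(shared)
rover_b = copy.deepcopy(rover_a)        # exact duplicate at launch

rover_a.perceive("landed in a crater")
rover_b.perceive("landed on a plain")

# Same program, same initial memories, now divergent histories:
print(rover_a.memories, rover_a.decide())
print(rover_b.memories, rover_b.decide())

On this view the two rovers are "the same person" in the program sense and different persons in the memory sense, which is exactly the ambiguity at issue.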

Brent
