Re: Extended Wigner’s Friend

2018-12-03 Thread Stathis Papaioannou
On Tue, 4 Dec 2018 at 12:47, Mason Green  wrote:

> Here’s a recent editorial I found in the magazine arguing against
> Many-Worlds on the grounds that it denies the reality of experience or the
> self. (
> https://www.quantamagazine.org/why-the-many-worlds-interpretation-of-quantum-mechanics-has-many-problems-20181018/
> )
>
> Well, if we don’t want many-worlds or subjectivism, then the only other
> option looks like it’d be to modify QM itself. Some form of digital physics
> might work, otherwise we could have objective collapse (either random, or
> else there’s something/someone outside the universe choosing which path the
> universe follows).
>

That article just claims, without explanation, that it would be impossible
to have consciousness if you were continually being duplicated. I don't
think you should accept this without further thought.

-- 
Stathis Papaioannou


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Why is Church's thesis a Miracle?

2018-09-23 Thread Stathis Papaioannou
On Sun, 23 Sep 2018 at 5:19 pm, John Clark  wrote:

> On Sat, Sep 22, 2018 at 3:39 AM Bruno Marchal  wrote:
>
> > *Given the definition of the first person*, [...]  *By the definition
>> of the first person notion* [...]
>>
>
> You act as if you've given a robust definition of "the first person" that
> doesn't fall apart into logical contradictions at the first use of a people
> copying machine, or even with nothing more than the passage of time. But
> you never have. For example, you'll say things like "the first person"
> means the conscious being experiencing Helsinki today and then try to
> predict what "the first person" will experience tomorrow. But even if we
> forget about people copying machines and stay put in Helsinki if that's
> your definition of "the first person" then "the first person" will not
> exist at all tomorrow because tomorrow nobody will be experiencing Helsinki
> today.
>
> And then you will say the man experiencing Moscow tomorrow could not have
> predicted that he would be doing that today, and that's true but only
> because today the man experiencing Moscow tomorrow does not exist so he's
> unable to do ANYTHING, and that includes making predictions.  I've made
> this point many times before of course and each time your only defence is
> I'm "just playing with words", an odd defence from somebody who claims to
> be a logician.
>
> I don't have the problem that Bruno has because I define "the Helsinki
> man" as anyone who remembers being the Helsinki man today, but if Bruno
> accepted my definition and followed its logical consequences he'd have to
> conclude that the Helsinki man will see 2 cities not one and saw them both
> at the exact same time. And this conclusion could be proven by interviewing
> both the Moscow man and the Washington man provided that before any copying
> was done the Helsinki man himself agreed on the definition of "the Helsinki
> man". Yes if you asked the Washington or Moscow man how many cities they
> saw they would say only one, but that is the wrong question to ask. The
> correct question to ask is "How many cities do you think the Helsinki man
> ended up seeing at the same time?". If they are logical and truthful they
> will answer "I don't have enough information to answer that but If the
> experiment went as planned and my brother really is in that other city then
> the Helsinki man ended up seeing 2 cities at exactly the same time".
>
> *>That is pseudo-religion. You talk like a member of the clergy.*
>
>
> And you talk as if you hadn't repeated verbatim that same schoolboy insult
> 6.02*10^23 times before. By your next post I wouldn't be surprised if the
> tally reached (6.02*10^23) +1
>
> > *Handwaving and insults just confirms that you have decided to not
>> understand.*
>>
>
> Speaking of hand waving, nobody can explain who exactly is supposed to
> make the prediction, or who or what the prediction is about, and even after
> the event is over there is no way even in principle to know if the
> prediction turned out to be correct or not. So it's true I am confused I
> don't understand, but anybody who thinks they understand gibberish is a
> fool.
>

To me and probably to many others it seems obvious that the Helsinki man
can expect to end up either in Moscow or Washington after the duplication.
Can you perhaps step outside of the argument and speculate as to why there
should be such disagreement: why would some people think it obvious, when
you think it not only not obvious, but ridiculous?

> --
Stathis Papaioannou



Re: The Many Incarnations of Bruno

2018-08-18 Thread Stathis Papaioannou
On Sun, 19 Aug 2018 at 3:15 am,  wrote:

> Let's say Bruno does a single slit experiment, aka diffraction, and just
> one trial. For each possible outcome, there will be a world with a copy of
> Bruno. If we assume space-time is continuous, we get an uncountable number
> of Brunos for just a single trial. More trials, more uncountable Brunos.
> But Bruno can imagine doing a double slit experiment instead. More
> uncountable Brunos. But Bruno can imagine a triple slit experiment, a
> quadruple slit experiment, and so on, each adding uncountable copies of
> Bruno for just a single trial. Is this a plausible model of the universe,
> all to get rid of collapse, which can just as well be left as an unsolved
> problem in QM? AG
>

It’s incredible that the universe is so large, or that classical physics is
wrong, or that irrational numbers exist, or millions of other facts that
might surprise an ape that starts to contemplate the nature of reality. The
argument from incredulity alone does not carry much weight.

> --
Stathis Papaioannou



Re: Many-minds interpretation?

2018-08-14 Thread Stathis Papaioannou
On Wed, 15 Aug 2018 at 3:30 am,  wrote:

>
>
> On Tuesday, August 14, 2018 at 2:02:05 AM UTC, stathisp wrote:
>
>>
>>
>> On Tue, 14 Aug 2018 at 06:58,  wrote:
>>
>>>
>>>
>>> On Monday, August 13, 2018 at 2:27:55 PM UTC, Jason wrote:
>>>>
>>>>
>>>>
>>>> On Mon, Aug 13, 2018 at 12:05 AM Bruce Kellett 
>>>> wrote:
>>>>
>>>>> From: Jason Resch 
>>>>>
>>>>>
>>>>> On Sun, Aug 12, 2018 at 5:06 AM Bruno Marchal 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> On 11 Aug 2018, at 02:29, Bruce Kellett 
>>>>>> wrote:
>>>>>>
>>>>>> They do not "belong to different branches" because they do not exist,
>>>>>> and have never existed. This notion seems to be important to your idea, 
>>>>>> and
>>>>>> I can assure you that you are wrong about this.
>>>>>>
>>>>>>
>>>>>> How could that be possible? You suppress the infinities of Alice and
>>>>>> Bob only because you know in advance what is the direction in which Alice
>>>>>> will make her measurement. What if she changes her mind?
>>>>>>
>>>>>>
>>>>> Right.
>>>>>
>>>>> I would like Bruce to consider the case Alice measures alternately x
>>>>> and z spin axes of an electron 1000 times and interprets those measurement
>>>>> results as binary digits following a decimal point to define the real
>>>>> number to which she will set her measurement angle (before she measures
>>>>> her entangled particle).
>>>>>
>>>>> Certainly in the no-collapse case there would be at least 2^1000
>>>>> Alices who perform the measurement at each of the possible measurement
>>>>> angles that can be defined by 1000 binary digits.  What I wonder is how
>>>>> many Alices Bruce would believe to exist in this scenario before she
>>>>> measures her entangled particle.
>>>>>
>>>>>
>>>>> How do 2^1000 copies of Alice make any difference? Each measures the
>>>>> entangled particles only once. Besides, this is not what is done. I see
>>>>> little point in making up alternative scenarios -- why not explain the
>>>>> straightforward original scenario? Imaginary copies are beside the point.
>>>>>
>>>>> If you cannot focus your attention on the original scenario, I see
>>>>> little point in your trying to do physics.
>>>>>
>>>>
>>>> I bring this question up because you repeatedly refer to only "one
>>>> Alice" before the measurement, and also say that Alice and Bob are "in one
>>>> and the same branch" prior to measurement.  But normal QM without collapse
>>>> would say Alice and Bob are branching all the time, even before they
>>>> measure their entangled pair.
>>>>
>>>
>>>
>>> *They're branching all the time prior to measurement, that is without
>>> collapse? Pretty fantastic. Where, how, is this affirmed by QM? AG*
>>>
>>
>> Collapse is not part of the formalism of QM,
>>
>
> *It is. The collapse postulate states that after the measurement of some
> eigenvalue, the system, originally in a superposition, evolves immediately
> into the eigenstate of the eigenvalue which has been measured. AG*
>

Perhaps this is semantics, but that is more part of the interpretation,
because removing the postulate does not change the predictions of the
theory; otherwise, we could suggest an experiment to settle the matter
rather than have these debates.
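
For concreteness, the postulate being debated can be written as a small numerical sketch (the amplitudes below are arbitrary illustrative values, not anything from the thread):

```python
import math

# Arbitrary normalized qubit state a|0> + b|1> (illustrative amplitudes).
a, b = 0.6, 0.8j
assert math.isclose(abs(a) ** 2 + abs(b) ** 2, 1.0)

# Born rule: probabilities of the two eigenvalues.
p0 = abs(a) ** 2  # ≈ 0.36
p1 = abs(b) ** 2  # ≈ 0.64

# Collapse postulate: after measuring eigenvalue 0, the superposition is
# replaced by the renormalized projection onto that eigenstate, i.e. |0>.
post_measurement = (a / abs(a), 0.0)

print(p0, p1, post_measurement)
```

Dropping the projection step and keeping both terms of the superposition is exactly the no-collapse reading; the predicted statistics p0 and p1 are the same either way, which is why no experiment settles the matter.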

> --
Stathis Papaioannou



Re: Many-minds interpretation?

2018-08-13 Thread Stathis Papaioannou
On Tue, 14 Aug 2018 at 06:58,  wrote:

>
>
> On Monday, August 13, 2018 at 2:27:55 PM UTC, Jason wrote:
>>
>>
>>
>> On Mon, Aug 13, 2018 at 12:05 AM Bruce Kellett 
>> wrote:
>>
>>> From: Jason Resch 
>>>
>>>
>>> On Sun, Aug 12, 2018 at 5:06 AM Bruno Marchal  wrote:
>>>
>>>>
>>>> On 11 Aug 2018, at 02:29, Bruce Kellett 
>>>> wrote:
>>>>
>>>> They do not "belong to different branches" because they do not exist,
>>>> and have never existed. This notion seems to be important to your idea, and
>>>> I can assure you that you are wrong about this.
>>>>
>>>>
>>>> How could that be possible? You suppress the infinities of Alice and
>>>> Bob only because you know in advance what is the direction in which Alice
>>>> will make her measurement. What if she changes her mind?
>>>>
>>>>
>>> Right.
>>>
>>> I would like Bruce to consider the case Alice measures alternately x and
>>> z spin axes of an electron 1000 times and interprets those measurement
>>> results as binary digits following a decimal point to define the real
>>> number to which she will set her measurement angle (before she measures
>>> her entangled particle).
>>>
>>> Certainly in the no-collapse case there would be at least 2^1000 Alices
>>> who perform the measurement at each of the possible measurement angles that
>>> can be defined by 1000 binary digits.  What I wonder is how many Alices
>>> Bruce would believe to exist in this scenario before she measures her
>>> entangled particle.
>>>
>>>
>>> How do 2^1000 copies of Alice make any difference? Each measures the
>>> entangled particles only once. Besides, this is not what is done. I see
>>> little point in making up alternative scenarios -- why not explain the
>>> straightforward original scenario? Imaginary copies are beside the point.
>>>
>>> If you cannot focus your attention on the original scenario, I see
>>> little point in your trying to do physics.
>>>
>>
>> I bring this question up because you repeatedly refer to only "one Alice"
>> before the measurement, and also say that Alice and Bob are "in one and the
>> same branch" prior to measurement.  But normal QM without collapse would
>> say Alice and Bob are branching all the time, even before they measure
>> their entangled pair.
>>
>
>
> *They're branching all the time prior to measurement, that is without
> collapse? Pretty fantastic. Where, how, is this affirmed by QM? AG*
>

Collapse is not part of the formalism of QM, so "branching all the time" is
what it affirms. That is the whole point of no-collapse interpretations.
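
As a toy count of the branching in Jason's scenario (nothing here is physics, just the combinatorics of n binary outcomes):

```python
# Each x/z spin measurement has two outcomes, so n successive binary
# measurements distinguish 2**n branches; Jason's 1000 digits give 2**1000.
n = 1000
branches = 2 ** n

# 2**1000 is astronomically large but perfectly finite:
print(len(str(branches)))  # number of decimal digits → 302
```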


-- 
Stathis Papaioannou



Re: My final word on the MWI --

2018-07-26 Thread Stathis Papaioannou
On Fri, 27 Jul 2018 at 2:08 am,  wrote:

>
>
> On Thursday, July 26, 2018 at 11:30:11 AM UTC, agrays...@gmail.com wrote:
>>
>>
>>
>> On Thursday, July 26, 2018 at 11:24:42 AM UTC, Quentin Anciaux wrote:
>>>
>>> I still don't get it why some people prefer insulting other people and
>>> their ideas instead of discussing or just stay with their own thoughts and
>>> just say they disagree... What do you gain by saying they are insane,
>>> stupid or whatever?
>>>
>>> It just looks to me childish. So stop doing this, stop writing in 70pt
>>> size red fonts... It's a disfavor to your arguments.
>>>
>>> Quentin
>>>
>>
>> In fact, I DO think it's a mental illness. AG
>>
>
> It's not just wrong, but a gross dysfunction of judgment. Joe the Plumber
> goes into a lab or his closet, shoots a single electron at a slit, and by
> so doing creates uncountable universes, all with copies of himself, replete
> with his memories. Sure. AG
>

You may as well protest on the same basis that the universe can’t be so
wastefully large.

> --
Stathis Papaioannou



Re: Is the "bubble multi-verse" and "qm many-worlds" the same thing?

2018-06-16 Thread Stathis Papaioannou
even wrong. AG.
>> >> >
>> >> >
>> >> > Eternal inflation and string theory imply universes created by
>> natural
>> >> > processes. The jury is out on those. OTOH, the MWI has human beings
>> >> > creating
>> >> > universes by going into a lab and doing trivial quantum experiments.
>> Of
>> >> > course they're the same (for idiots). AG
>> >>
>> >> The MWI does not propose that new universes are created specifically
>> >> by certain experiences in the lab. It proposes that this universe
>> >> branching is a fundamental natural mechanism -- that it happens for
>> >> every quantum-level event that we perceive as random from our branch.
>> >> It's an attempt to describe nature by making sense of experimental
>> >> results, the same way as string theory and other theories.
>> >
>> >
>> > Call it what you want, it comes to the same thing: universes created by
>> > trivial quantum experiments by Joe the Plumber.
>>
>> You are using emotionally-charged language to convince yourself that
>> it is absurd: "universes created" and "Joe the Plumber".
>>
>> The MWI only proposes that the universe is even bigger than we can
>> perceive.
>
>
> Incorrect. AG
>
>
>> Joe the Plumber, or Dr. Joe the Prestigious Person, or an
>> amoeba do not "create universes" in some christian god-like sense.
>> They simply find themselves in a certain place, from their first
>> person perspective.
>>
>
> Another universe comes into existence when Joe the Plumber performs, say,
> a spin measurement. If he doesn't do the experiment, that universe would
> NOT come into existence. So it is correct to say that under the MWI
> decisions by human beings create universes. If this isn't absurd hubris, I
> don't know what is. AG
>
>>
>> > This is not only
>> > patently absurd, but DIFFERENT in how they come to be compared to
>> NATURAL
>> > processes proposed by Eternal Inflation and String Theory.
>>
>> I have no idea what you mean here, admittedly by my own ignorance.
>
>
> See above comment. In those other theories, universes may come into
> existence, but the processes are independent of human decisions. The former
> I deem as natural, the latter unnatural. AG
>
>>
>> Maybe a physicist can intervene. I do know that one must be careful
>> with the naturalistic fallacy. What does "natural" mean?
>>
>
> See immediately above. AG
>
>>
>> > Sure, human intuition is often unreliable, particularly in regions far
>> > removed from where our senses operate. But nowadays crap theories
>> > are rationalized on that very basis!
>>
>> Not at all. The "crap" (more emotional language) theories are an
>> attempt to make sense of experimental results. I do not know if the
>> MWI is correct or not, but it is an attempt to explain empirical
>> observations in the simplest way possible.
>>
>> > The world has gone mad, and brilliant
>> > physicists like Susskind have succumbed to the disease. AG
>>
>
I think you have it the wrong way around with the MWI: it does NOT give
consciousness a special privilege. Every possible outcome of a quantum
event is equally real under MWI, whereas according to collapse
interpretations only the single outcome that you, with your special powers,
observe is realised.

-- 
Stathis Papaioannou



Re: Mind Uploading

2018-05-07 Thread Stathis Papaioannou
On Thu, 3 May 2018 at 10:42 pm, John Clark  wrote:

All correct arithmetical mathematical calculations may exist in some
> mystical Platonic universe and all possible books may too, but you need
> matter to sort out the correct calculations from the incorrect ones and the
> good books from the gibberish; matter in the form of a calculating machine
> in one case and the author Jorge Luis Borges in the other.
>

In general, yes: it could be said that there is a computation, but it is of
no use to us. But what if the computation is one that implements a virtual
world with conscious observers? In that case, it is still of no use to us,
but I can't see any reason why the consciousness of those observers should
be dependent on us.

>



Re: How to live forever

2018-04-04 Thread Stathis Papaioannou
On Thu, 5 Apr 2018 at 2:58 am, smitra <smi...@zonnet.nl> wrote:

> On 02-04-2018 17:27, Bruno Marchal wrote:
> >> On 1 Apr 2018, at 00:29, Lawrence Crowell
> >> <goldenfieldquaterni...@gmail.com> wrote:
> >>
> >> On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes
> >> wrote:
> >>
> >>> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
> >>> <goldenfield...@gmail.com> wrote:
> >>>> You would have to replicate then not only the dynamics of
> >>> neurons, but every
> >>>> biomolecule in the neurons, and don't forget about the
> >>> oligoastrocytes and
> >>>> other glial cells. Many enzymes for instance are multi-state
> >>> systems, say in
> >>>> a simple case where a single amino acid residue is
> >>> phosphorylated or
> >>>> unphosphorylated, and in effect are binary switching units. To
> >>> then make
> >>>> this work you now need to have the brain states mapped out down
> >>> to the
> >>>> molecular level, and further to have their combinatorial
> >>> relationships
> >>>> mapped. Biomolecules also behave in water, so you have to model
> >>> all the
> >>>> water molecules. Given the brain has around 10^{25} or a few
> >>> moles of
> >>>> molecules the number of possible combinations might be on the
> >>> order of
> >>>> 10^{10^{25}} this is a daunting task. Also your computer has to
> >>> accurately
> >>>> encode the dynamics of molecules -- down to the quantum
> >>> mechanics of their
> >>>> bonds.
> >>>>
> >>>> This is another way of saying that biological systems, even that
> >>> of a basic
> >>>> prokaryote, are beyond our current abilities to simulate. You
> >>> can't just
> >>>> hand wave away the enormous problems with just simulating a
> >>> bacillus, let
> >>>> alone something like the brain. Now of course one can do some
> >>> simulations to
> >>>> learn about the brain in a model system, but this is far from
> >>> mapping a
> >>>> brain and its conscious state into a computer.
> >>>
> >>> Well maybe, but this is just you guessing.
> >>> Nobody knows the necessary level of detail.
> >>>
> >>> Telmo.
> >>
> >> Take LSD or psilocybin mushrooms and what enters the brain are
> >> chemical compounds that interact with neural ligand gates. The
> >> effect is a change in the perception of consciousness. Then if we
> >> load coarse grained brain states into a computer that ignores lots
> >> of fine grained detail, will that result in something different?
> >> Hell yeah! The idea one could set up a computer neural network,
> >> upload some data file from a brain scan and that this would be a
> >> completely conscious person is frankly absurd.
> >
> > This means that you bet on a lower substitution level. I guess others
> > have already answered this. Note that the proof that physics is a
> > branch of arithmetic does not put any bound on the graining of the
> > substitution level. It could even be that your brain is the entire
> > universe described at the level of superstring theory, that will
> > change nothing in the conclusion of the reasoning. Yet it would be a
> > threat for evolution and biology as conceived today.
> >
> > Bruno
> >
> >> LC
> >>
>
> In experiments involving stimulation/inhibition of certain brain  parts
> using strong magnetic fields where people look for a few seconds at a
> screen with a large number of dots, it was found that significantly more
> people can correctly guess the number of dots when the field was
> switched on. The conclusion was that under normal circumstances when we
> are not aware of lower level information, such as the exact number of
> dots on the screen, that information is actually present in the brain
> but we're not consciously aware of it. Certain people who have "savant
> syndrome" can be constantly aware of such lower level information.
>
> This then suggests to me that the substitution level can be taken at a
> much higher level than the level of neurons. In the MWI we would have to
> imagine being spread out over sectors where information such as the
> number of dots on a screen is different. So, what you're not aware of
> isn't fixed for you, and therefore
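
Crowell's combinatorial estimate quoted earlier in this message can be sanity-checked without computing the number itself (treating each of the roughly 10^25 molecular units as a binary switch, which is the quoted text's own simplification):

```python
import math

N = 10 ** 25                         # rough molecule count quoted for the brain
# The number of configurations of N binary switches is 2**N; rather than
# computing that, take the base-10 logarithm, i.e. the exponent of the result.
digits_exponent = N * math.log10(2)  # ≈ 3.0e24

# So 2**N ≈ 10**(3.0e24): the exponent is within an order of magnitude of the
# quoted 10**(10**25), and daunting either way.
print(f"{digits_exponent:.2e}")
```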

Re: How to live forever

2018-04-02 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 9:15 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Apr 1, 2018 at 7:43 AM, Stathis Papaioannou <stath...@gmail.com
> > wrote:
>
> >> It's not the wind, it's diffusion that sends the signal on its way, which
>>> means exactly where the signal is sent is NOT critical and the time it
>>> takes to transmit it can't be critical either. So you think technology will
>>> find that duplicating this meager feat will be insuperably difficult. Why?
>>> Sending a signal with a tiny informational content very very slowly and
>>> successfully hitting a HUGE target seems to me to be the easiest part of
>>> the entire thing.
>>
>>
>
> *>  I don’t think it’s impossible,*
>
> Forget impossible, overall mind uploading might be difficult but the part
> of it that you're talking about would not only be possible it would be
> easy.
>
>> *> but if you want a neural implant to work like the biological
>> equivalent, it must communicate with neurones via neurotransmitters,*
>
> You think only a chemical could send that signal, and specifically only
> the particular chemical that Homo Sapiens happens to use will work? WHY?
>
>> > it must modulate its responses according to circulating hormones, it
>> must develop new connections and prune old connections, it must upregulate
>> and downregulate its responsiveness to neurotransmitters according to its
>> history
>
>
>  There are two ways to accomplish this:
>
> 1) A neural net computer could do it directly, and that’s the way it would
> probably be done.
>
> 2) A conventional computer with a Von Neumann architecture could simulate
> a neural net computer, that would slow things way down but if
> Nanotechnology was used the increase in the speed of the hardware would be
> so enormous it would still think faster than you or me.
>

The problem I was alluding to was how to interface with existing biological
systems. You could make a camera that exceeds the performance of the human
eye, but that doesn’t mean you can use it to replace damaged eyes. It would
be easier to use the camera in a robot than a cyborg.
-- 
Stathis Papaioannou



Re: How to live forever

2018-04-01 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 8:26 am, John Clark <johnkcl...@gmail.com> wrote:

> On Sat, Mar 31, 2018 at 5:50 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> *> The problem is the biological neurones only understand smoke signals.*
>
>
> Not so, we already understand that some neurotransmitters send smoke
> signals that excite neurons while others send a inhibitory signal.
>
> ​> ​
>> Not only that, but the smoke signals change depending on how the wind is
>> blowing,
>
>
> It's not the wind, it's diffusion that sends the signal on its way, which
> means exactly where the signal is sent is* NOT* critical and the time it
> takes to transmit it can't be critical either. So you think technology will
> find that duplicating this meager feat will be insuperably difficult. Why?
> Sending a signal with a tiny informational content very very slowly and
> successfully hitting a HUGE target seems to me to be the easiest part of
> the entire thing.
>

I don’t think it’s impossible, but if you want a neural implant to work
like the biological equivalent, it must communicate with neurones via
neurotransmitters, it must modulate its responses according to circulating
hormones, it must develop new connections and prune old connections, it
must upregulate and downregulate its responsiveness to neurotransmitters
according to its history and multiple local factors, and probably other
things that we don’t even know about. So what is needed is not just a
little computer, but complex nanomachinery. It might be easier to simulate
an entire brain than make an implant.

> --
Stathis Papaioannou



Re: How to live forever

2018-03-31 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 2:31 am, John Clark <johnkcl...@gmail.com> wrote:

> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell <
> goldenfieldquaterni...@gmail.com> wrote:
>
> > *Yes, and if you replace the entire brain with technology the peg leg
>> is expanded into an entire Pinocchio. Would the really be conscious? It is
>> the case as well that so much of our mental processing does involve hormone
>> reception and a range of other data inputs from other receptors and
>> ligands.*
>
> I see nothing sacred in hormones, I don't see the slightest reason why
> they or any neurotransmitter would be especially difficult to simulate
> through computation, because chemical messengers are not a sign of
> sophisticated design on nature's part, rather it's an example of
> Evolution's bungling. If you need to inhibit a nearby neuron there are
> better ways of sending that signal than launching a GABA molecule like a
> message in a bottle thrown into the sea and waiting ages for it to diffuse
> to its random target.
>
> I'm not interested in chemicals only the information they contain, I want
> the information to get transmitted from cell to cell by the best method and
> so I would not send smoke signals if I had a fiber optic cable. The
> information content in each molecular message must be tiny, just a few bits
> because only about 60 neurotransmitters such as acetylcholine,
> norepinephrine and GABA are known, even if the true number is 100 times
> greater (or a million times for that matter) the information content ofeach
> signal must be tiny. Also, for the long range stuff, exactly which neuron
> receives the signal can not be specified because it relies on a random
> process, diffusion. The fact that it's slow as molasses in February does
> not add to its charm.
>

The problem is the biological neurones only understand smoke signals. Not
only that, but the smoke signals change depending on how the wind is
blowing, and so does their meaning. So the computer and fibre optic cable
must ultimately communicate via these smoke signals, unless the entire
network is replaced.

If your job is delivering packages and all the packages are very small and
> your boss doesn't care who you give them to as long as it's on the correct
> continent and you have until the next ice age to get the work done, then
> you don't have a very difficult profession. I see no reason why simulating
> that anachronism  would present the slightest difficulty. Artificial
> neurons could be made to release neurotransmitters as inefficiently as
> natural ones if anybody really wanted to, but it would be pointless when
> there are much faster ways.
>

> Electronics is inherently fast because its electrical signals are sent by
> fast, light electrons. The brain also uses some electrical signals, but it
> doesn't use electrons, it uses ions to send signals, the most important are
> chlorine and potassium. A chlorine ion is 65 thousand times as heavy as an
> electron, a potassium ion is even heavier, if you want to talk about gap
> junctions, the ions they use are millions of times more massive than
> electrons. There is no way to get around it, according to the fundamental
> laws of physics, something that has a large mass will be slow, very, very,
> slow.
>
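[The quoted mass ratio is easy to verify. This is an editorial check, not part of the original post; atomic masses are approximate, the ion mass is taken as that of the neutral atom, and the electron mass is taken as 1/1822.89 u.]

```python
ELECTRON_MASS_U = 1 / 1822.89  # electron mass in atomic mass units (u)
CL_MASS_U = 35.45              # chlorine, u (Cl- mass ~ neutral atom mass)
K_MASS_U = 39.10               # potassium, u (K+ similarly approximated)

# Ratio of ion mass to electron mass
cl_ratio = CL_MASS_U / ELECTRON_MASS_U  # roughly 65,000
k_ratio = K_MASS_U / ELECTRON_MASS_U    # roughly 71,000, i.e. heavier still

print(f"Cl- is ~{cl_ratio:,.0f} electron masses")
print(f"K+  is ~{k_ratio:,.0f} electron masses")
```

[This reproduces the post's figure of roughly 65 thousand for chlorine, with potassium somewhat heavier, as stated.]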
> The great strength biology has over present day electronics is in the
> ability of one neuron to make thousands of connections of various strengths
> with other neurons. However, I see absolutely nothing in the fundamental
> laws of physics that prevents nano machines from doing the same thing, or
> better and MUCH faster.
>
>   John K Clark
>
>
>> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 10:30 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/27/2018 3:59 PM, Lawrence Crowell wrote:
>
> On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>>
>>
>> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker <meek...@verizon.net> wrote:
>>
>>>
>>>
>>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
>>>> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>>>>
>>>>>
>>>>>
>>>>> On 27 March 2018 at 09:35, Brent Meeker <meek...@verizon.net> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>>>>>
>>>>>>
>>>>>> If you are not and never can be aware of it then in what sense is it
>>>>>> consciousness?
>>>>>>
>>>>>>
>>>>>> Depends on what you mean by "it".  I can be aware of my
>>>>>> consciousness, without being aware that it is different than it was 
>>>>>> before;
>>>>>> just as I can be aware of my consciousness without knowing whether it is
>>>>>> the same as yours, or the same as some robot.
>>>>>>
>>>>>
>>>>> If I am given a brain implant to try out for a few days and I notice
>>>>> no difference with the implant (everything feels exactly the same if I
>>>>> switch it in or out of circuit), everyone I know agrees there is no change
>>>>> in me, and every test I do with the implant switched in or out of circuit
>>>>> yields the same results, then I think there would be no good reason to
>>>>> hesitate in saying yes to the implant. If the change it brings about is
>>>>> neither objectively nor subjectively obvious, it isn't a change.
>>>>>
>>>>>
>>>>> --
>>>>> Stathis Papaioannou
>>>>>
>>>>
>>>> This argument ignores scaling. With any network you can replace or
>>>> change nodes and connections on a small scale and the system remains
>>>> largely unchanged. At a certain critical number of such changes the
>>>> properties of the entire network system can rapidly change.
>>>>
>>>
>>> Yes, it is possible that this is the case. What this would mean is that
>>> the observable behaviour of the system would stay unchanged as it is
>>> replaced from 0 to 100% and so would the consciousness for part of the way,
>>> but at a certain point, when a particular neurone is replaced,
>>> consciousness will suddenly flip on or off or change radically.
>>>
>>>
>>> I think you are overstating that and creating a strawman.  Consciousness
>>> under the influence of drugs for example can change radically, but not
>>> "suddenly flip" with one more molecule of alcohol.
>>>
>>
>> If part of your consciousness changes as your brain is gradually replaced
>> then you would notice but be unable to communicate it, which is what
>> is problematic. One way out of this would be if your consciousness stayed
>> the same up to a certain point then suddenly flipped. If you suddenly
>> became a zombie you would not notice and not report that anything had
>> changed, so no inconsistency. However, it’s a long stretch to say that
>> consciousness will flip on changing a single molecule in order to save the
>> idea that it is substrate specific.
>>
>>
>> But LC wasn't arguing it was substrate specific. He was arguing that it's
>> scale specific.
>>
>
>> Brent
>>
>
> That was one argument. Also I am not arguing about the dosage of a drug,
> but of some rewiring, removal or replacement of sub-networks.
>
> As for substrate ponder the following question. If you had a stroke and
> were given the option of either a silicon chip system to replace neural
> functioning or neurons derived from your own stem cells, which would you
> choose? The obvious choice would be neurons, for they would most adapt to
> fill in needed function and interact with the rest of the brain.
>
>
> I think it very likely that a silicon neuron could work.  But it would
> work like a peg leg wo

Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 9:59 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:
>
>>
>>
>> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>>
> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker <meek...@verizon.net> wrote:
>>
>>
>>>
>>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
>>>> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>>>>
>>>>>
>>>>>
>>>>> On 27 March 2018 at 09:35, Brent Meeker <meek...@verizon.net> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>>>>>
>>>>>>
>>>>>> If you are not and never can be aware of it then in what sense is it
>>>>>> consciousness?
>>>>>>
>>>>>>
>>>>>> Depends on what you mean by "it".  I can be aware of my
>>>>>> consciousness, without being aware that it is different than it was 
>>>>>> before;
>>>>>> just as I can be aware of my consciousness without knowing whether it is
>>>>>> the same as yours, or the same as some robot.
>>>>>>
>>>>>
>>>>> If I am given a brain implant to try out for a few days and I notice
>>>>> no difference with the implant (everything feels exactly the same if I
>>>>> switch it in or out of circuit), everyone I know agrees there is no change
>>>>> in me, and every test I do with the implant switched in or out of circuit
>>>>> yields the same results, then I think there would be no good reason to
>>>>> hesitate in saying yes to the implant. If the change it brings about is
>>>>> neither objectively nor subjectively obvious, it isn't a change.
>>>>>
>>>>>
>>>>> --
>>>>> Stathis Papaioannou
>>>>>
>>>>
>>>> This argument ignores scaling. With any network you can replace or
>>>> change nodes and connections on a small scale and the system remains
>>>> largely unchanged. At a certain critical number of such changes the
>>>> properties of the entire network system can rapidly change.
>>>>
>>>
>>> Yes, it is possible that this is the case. What this would mean is that
>>> the observable behaviour of the system would stay unchanged as it is
>>> replaced from 0 to 100% and so would the consciousness for part of the way,
>>> but at a certain point, when a particular neurone is replaced,
>>> consciousness will suddenly flip on or off or change radically.
>>>
>>>
>>> I think you are overstating that and creating a strawman.  Consciousness
>>> under the influence of drugs for example can change radically, but not
>>> "suddenly flip" with one more molecule of alcohol.
>>>
>>
>> If part of your consciousness changes as your brain is gradually replaced
>> then you would notice but be unable to communicate it, which is what
>> is problematic. One way out of this would be if your consciousness stayed
>> the same up to a certain point then suddenly flipped. If you suddenly
>> became a zombie you would not notice and not report that anything had
>> changed, so no inconsistency. However, it’s a long stretch to say that
>> consciousness will flip on changing a single molecule in order to save the
>> idea that it is substrate specific.
>>
>>
>> But LC wasn't arguing it was substrate specific. He was arguing that it's
>> scale specific.
>>
>
>> Brent
>>
>
> That was one argument. Also I am not arguing about the dosage of a drug,
> but of some rewiring, removal or replacement of sub-networks.
>
> As for substrate ponder the following question. If you had a stroke and
> were given the option of either a silicon chip system to replace neural
> functioning or neurons derived from your own stem cells, which would you
> choose? The obvious choice would be neurons, for they would most adapt to
> fill in needed function and interact with the rest of the brain.
>

A “silicon chip system” probably wouldn’t be able to fully replace the
function of neurones; to give just one reason, neurones change and make new
connections over time, while c

Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>
>
> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
> goldenfieldquaterni...@gmail.com> wrote:
>
>> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>>
>>>
>>>
>>> On 27 March 2018 at 09:35, Brent Meeker <meek...@verizon.net> wrote:
>>>
>>>>
>>>>
>>>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>> If you are not and never can be aware of it then in what sense is it
>>>> consciousness?
>>>>
>>>>
>>>> Depends on what you mean by "it".  I can be aware of my consciousness,
>>>> without being aware that it is different than it was before; just as I can
>>>> be aware of my consciousness without knowing whether it is the same as
>>>> yours, or the same as some robot.
>>>>
>>>
>>> If I am given a brain implant to try out for a few days and I notice no
>>> difference with the implant (everything feels exactly the same if I switch
>>> it in or out of circuit), everyone I know agrees there is no change in me,
>>> and every test I do with the implant switched in or out of circuit yields
>>> the same results, then I think there would be no good reason to hesitate in
>>> saying yes to the implant. If the change it brings about is neither
>>> objectively nor subjectively obvious, it isn't a change.
>>>
>>>
>>> --
>>> Stathis Papaioannou
>>>
>>
>> This argument ignores scaling. With any network you can replace or change
>> nodes and connections on a small scale and the system remains largely
>> unchanged. At a certain critical number of such changes the properties of
>> the entire network system can rapidly change.
>>
>
> Yes, it is possible that this is the case. What this would mean is that
> the observable behaviour of the system would stay unchanged as it is
> replaced from 0 to 100% and so would the consciousness for part of the way,
> but at a certain point, when a particular neurone is replaced,
> consciousness will suddenly flip on or off or change radically.
>
>
> I think you are overstating that and creating a strawman.  Consciousness
> under the influence of drugs for example can change radically, but not
> "suddenly flip" with one more molecule of alcohol.
>

If part of your consciousness changes as your brain is gradually replaced
then you would notice but be unable to communicate it, which is what
is problematic. One way out of this would be if your consciousness stayed
the same up to a certain point then suddenly flipped. If you suddenly
became a zombie you would not notice and not report that anything had
changed, so no inconsistency. However, it’s a long stretch to say that
consciousness will flip on changing a single molecule in order to save the
idea that it is substrate specific.

And since neurones are themselves complex systems, within that neurone
> there will be a particular protein, or a particular atom in the protein
> which when replaced will lead to a flipping of consciousness, while all the
> time behaviour remains unchanged. It’s possible that in the last few
> minutes a cosmic ray has added a neutron to a crucial atom somewhere in
> your brain and this has radically changed your consciousness, but you don’t
> know it and neither does anyone else.
>
> I read the other day about this whole idea of brain uploading. The
>> neurophysiologists are largely rejecting this idea.
>>
>
> Why?
>
>> --
> Stathis Papaioannou
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 6:13 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/27/2018 5:20 AM, Stathis Papaioannou wrote:
>
>
>
> On 27 March 2018 at 09:35, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my consciousness,
>> without being aware that it is different than it was before; just as I can
>> be aware of my consciousness without knowing whether it is the same as
>> yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice no
> difference with the implant (everything feels exactly the same if I switch
> it in or out of circuit),
>
>
> If it is a whole brain, then switching it will also switch memories and it
> will be impossible for you to say whether or not it "feels the same".
>

The implant replaces part of the brain (to begin with). If it’s the whole
brain you could speculate that the subject would become a zombie and, by
definition, not be aware of it. If it’s part of the brain the rest of the
brain will immediately notice if the change is large enough. If the visual
cortex is taken out by a stroke, the subject says he is blind and behaves
as if he is blind. He still has some visual reflexes, such as the pupillary
response to light, but he describes only what he can perceive, not visual
responses per se. So if it is possible to produce a cortical implant that has
the normal I/O behaviour but lacks visual perception or has radically
different visual perception, the subject should notice, like the stroke
patient.
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>
>>
>>
>> On 27 March 2018 at 09:35, Brent Meeker <meek...@verizon.net> wrote:
>>
>>>
>>>
>>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> If you are not and never can be aware of it then in what sense is it
>>> consciousness?
>>>
>>>
>>> Depends on what you mean by "it".  I can be aware of my consciousness,
>>> without being aware that it is different than it was before; just as I can
>>> be aware of my consciousness without knowing whether it is the same as
>>> yours, or the same as some robot.
>>>
>>
>> If I am given a brain implant to try out for a few days and I notice no
>> difference with the implant (everything feels exactly the same if I switch
>> it in or out of circuit), everyone I know agrees there is no change in me,
>> and every test I do with the implant switched in or out of circuit yields
>> the same results, then I think there would be no good reason to hesitate in
>> saying yes to the implant. If the change it brings about is neither
>> objectively nor subjectively obvious, it isn't a change.
>>
>>
>> --
>> Stathis Papaioannou
>>
>
> This argument ignores scaling. With any network you can replace or change
> nodes and connections on a small scale and the system remains largely
> unchanged. At a certain critical number of such changes the properties of
> the entire network system can rapidly change.
>

Yes, it is possible that this is the case. What this would mean is that
the observable behaviour of the system would stay unchanged as it is
replaced from 0 to 100% and so would the consciousness for part of the way,
but at a certain point, when a particular neurone is replaced,
consciousness will suddenly flip on or off or change radically. And since
neurones are themselves complex systems, within that neurone there will be
a particular protein, or a particular atom in the protein which when
replaced will lead to a flipping of consciousness, while all the time
behaviour remains unchanged. It’s possible that in the last few minutes a
cosmic ray has added a neutron to a crucial atom somewhere in your brain
and this has radically changed your consciousness, but you don’t know it
and neither does anyone else.

I read the other day about this whole idea of brain uploading. The
> neurophysiologists are largely rejecting this idea.
>

Why?

> --
Stathis Papaioannou



Re: Mind Uploading and NP-completeness

2018-03-27 Thread Stathis Papaioannou
On Tue, 27 Mar 2018 at 8:08 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Mar 26, 2018 at 4:19 PM, Brent Meeker <meeke...@verizon.net>
> wrote:
>
>> does equal intelligence imply equivalent consciousness.
>>
>
> I don't know and never will. All I know for sure is my consciousness
> changes when my brain changes and when my brain changes my consciousness
> changes; after that all I can do is hope that my extrapolation that your
> brain is involved with consciousness too is valid.
>
>
>> how do I know that other people have consciousness like mine
>>
>
> You don't KNOW that but you have to assume that if you wish to function
> in the world.
>
>
>> except in that case one relies in part on knowing that other people
>> are constructed similarly.
>
>
> Similar? That just begs the question, which differences are important and
> which ones are not? Is the color of your skin the most important thing that
> determines the quality of your consciousness, or is it your sex, or the
> fact that your brain is made mostly of carbon and not silicon? I have a
> strong hunch that the main determinant of consciousness is not what your
> brain is made of but how it handles information, but I will never be able
> to prove it.
>

I think it can be proved. If it is false then it leads to the situation
where, as Brent has said, your consciousness changes but you don’t notice
it. An imperceptible change in consciousness is, by definition, as good as
no change. If an MP3 file is imperceptibly different from the original
uncompressed audio file then, for the purpose of listening to music, it is
equivalent.

> --
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On 27 March 2018 at 09:35, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>
>
> If you are not and never can be aware of it then in what sense is it
> consciousness?
>
>
> Depends on what you mean by "it".  I can be aware of my consciousness,
> without being aware that it is different than it was before; just as I can
> be aware of my consciousness without knowing whether it is the same as
> yours, or the same as some robot.
>

If I am given a brain implant to try out for a few days and I notice no
difference with the implant (everything feels exactly the same if I switch
it in or out of circuit), everyone I know agrees there is no change in me,
and every test I do with the implant switched in or out of circuit yields
the same results, then I think there would be no good reason to hesitate in
saying yes to the implant. If the change it brings about is neither
objectively nor subjectively obvious, it isn't a change.


-- 
Stathis Papaioannou



Re: How to live forever

2018-03-26 Thread Stathis Papaioannou
On Tue, 27 Mar 2018 at 6:40 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/26/2018 8:24 AM, Stathis Papaioannou wrote:
>
> On 26 March 2018 at 15:20, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/25/2018 7:14 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On 26 March 2018 at 04:57, Brent Meeker <meeke...@verizon.net> wrote:
>>
>>>
>>>
>>> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>>>
>>>
>>> On 21 Mar 2018, at 22:56, Brent Meeker <meeke...@verizon.net> wrote:
>>>
>>>
>>>
>>> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker <meeke...@verizon.net>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>>>
>>>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>>>
>>>>>
>>>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>>>>
>>>>>> The interesting thing is that you can draw conclusions about 
>>>>>> consciousness
>>>>>> without being able to define it or detect it.
>>>>>>
>>>>>> I agree.
>>>>>>
>>>>>>
>>>>>> The claim is that IF an entity
>>>>>> is conscious THEN its consciousness will be preserved if brain function 
>>>>>> is
>>>>>> preserved despite changing the brain substrate.
>>>>>>
>>>>>> Ok, this is computationalism. I also bet on computationalism, but I
>>>>>> think we must proceed with caution and not forget that we are just
>>>>>> assuming this to be true. Your thought experiment is convincing but is
>>>>>> not a proof. You do expose something that I agree with: that
>>>>>> non-computationalism sounds silly.
>>>>>>
>>>>>> But does it sound so silly if we propose substituting a completely
>>>>>> different kind of computer, e.g. von Neumann architecture or one that 
>>>>>> just
>>>>>> records everything instead of an episodic associative memory, for the
>>>>>> brain.  The Church-Turing conjecture says it can compute the same
>>>>>> functions.  But does it instantiate the same consciousness.  My intuition
>>>>>> is that it would be "conscious" but in some different way; for example by
>>>>>> having the kind of memory you would have if you could review of a movie 
>>>>>> of
>>>>>> any interval in your past.
>>>>>>
>>>>>
>>>>> I think it would be conscious in the same way if you replaced neural
>>>>> tissue with a black box that interacted with the surrounding tissue in the
>>>>> same way. It doesn’t matter what is in the black box; it could even work 
>>>>> by
>>>>> magic.
>>>>>
>>>>>
>>>>> Then why draw the line at "surrounding tissue".  Why not the external
>>>>> enivironment?
>>>>>
>>>>
>>>> Keep expanding the part that is replaced and you replace the whole
>>>> brain and the whole organism.
>>>>
>>>> Are you saying you can't imagine being "conscious" but in a different
>>>>> way?
>>>>>
>>>>
>>>> I think it is possible but I don’t think it could happen if my neurones
>>>> were replaced by a functionally equivalent component. If it’s functionally
>>>> equivalent, my behaviour would be unchanged,
>>>>
>>>>
>>>> I agree with that.  But you've already supposed that functional
>>>> equivalence at the behavior level implies preservation of consciousness.
>>>> So what I'm considering is replacements in the brain far above the neuron
>>>> level, say at the level of whole functional groups of the brain, e.g. the
>>>> visual system, the auditory system, 

Re: How to live forever

2018-03-26 Thread Stathis Papaioannou
On 26 March 2018 at 15:20, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/25/2018 7:14 PM, Stathis Papaioannou wrote:
>
>
>
> On 26 March 2018 at 04:57, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>>
>>
>> On 21 Mar 2018, at 22:56, Brent Meeker <meeke...@verizon.net> wrote:
>>
>>
>>
>> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>>
>>
>> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker <meeke...@verizon.net>
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>>
>>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>>>
>>>>> The interesting thing is that you can draw conclusions about consciousness
>>>>> without being able to define it or detect it.
>>>>>
>>>>> I agree.
>>>>>
>>>>>
>>>>> The claim is that IF an entity
>>>>> is conscious THEN its consciousness will be preserved if brain function is
>>>>> preserved despite changing the brain substrate.
>>>>>
>>>>> Ok, this is computationalism. I also bet on computationalism, but I
>>>>> think we must proceed with caution and not forget that we are just
>>>>> assuming this to be true. Your thought experiment is convincing but is
>>>>> not a proof. You do expose something that I agree with: that
>>>>> non-computationalism sounds silly.
>>>>>
>>>>> But does it sound so silly if we propose substituting a completely
>>>>> different kind of computer, e.g. von Neumann architecture or one that just
>>>>> records everything instead of an episodic associative memory, for the
>>>>> brain.  The Church-Turing conjecture says it can compute the same
>>>>> functions.  But does it instantiate the same consciousness.  My intuition
>>>>> is that it would be "conscious" but in some different way; for example by
>>>>> having the kind of memory you would have if you could review a movie of
>>>>> any interval in your past.
>>>>>
>>>>
>>>> I think it would be conscious in the same way if you replaced neural
>>>> tissue with a black box that interacted with the surrounding tissue in the
>>>> same way. It doesn’t matter what is in the black box; it could even work by
>>>> magic.
>>>>
>>>>
>>>> Then why draw the line at "surrounding tissue".  Why not the external
>>>> enivironment?
>>>>
>>>
>>> Keep expanding the part that is replaced and you replace the whole brain
>>> and the whole organism.
>>>
>>> Are you saying you can't imagine being "conscious" but in a different
>>>> way?
>>>>
>>>
>>> I think it is possible but I don’t think it could happen if my neurones
>>> were replaced by a functionally equivalent component. If it’s functionally
>>> equivalent, my behaviour would be unchanged,
>>>
>>>
>>> I agree with that.  But you've already supposed that functional
>>> equivalence at the behavior level implies preservation of consciousness.
>>> So what I'm considering is replacements in the brain far above the neuron
>>> level, say at the level of whole functional groups of the brain, e.g. the
>>> visual system, the auditory system, the memory,...  Would functional
>>> equivalence at the body/brain interface then still imply consciousness
>>> equivalence?
>>>
>>
>> I think it would, because I don’t think there are isolated consciousness
>> modules in the brain. A large enough change in visual experience will be
>> noticed by the subject, who will report that things look different. This
>> could only happen if there is a change in the input to the language system
>> from the visual system; but we have assumed that the output from the visual
>> system is the same, and only the consciousness has changed, leading to a
>> c

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 26 March 2018 at 04:57, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>
>
> On 21 Mar 2018, at 22:56, Brent Meeker <meeke...@verizon.net> wrote:
>
>
>
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>
>
> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>
>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net>
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>>
>>>> The interesting thing is that you can draw conclusions about consciousness
>>>> without being able to define it or detect it.
>>>>
>>>> I agree.
>>>>
>>>>
>>>> The claim is that IF an entity
>>>> is conscious THEN its consciousness will be preserved if brain function is
>>>> preserved despite changing the brain substrate.
>>>>
>>>> Ok, this is computationalism. I also bet on computationalism, but I
>>>> think we must proceed with caution and not forget that we are just
>>>> assuming this to be true. Your thought experiment is convincing but is
>>>> not a proof. You do expose something that I agree with: that
>>>> non-computationalism sounds silly.
>>>>
>>>> But does it sound so silly if we propose substituting a completely
>>>> different kind of computer, e.g. von Neumann architecture or one that just
>>>> records everything instead of an episodic associative memory, for the
>>>> brain.  The Church-Turing conjecture says it can compute the same
>>>> functions.  But does it instantiate the same consciousness.  My intuition
>>>> is that it would be "conscious" but in some different way; for example by
>>>> having the kind of memory you would have if you could review a movie of
>>>> any interval in your past.
>>>>
>>>
>>> I think it would be conscious in the same way if you replaced neural
>>> tissue with a black box that interacted with the surrounding tissue in the
>>> same way. It doesn’t matter what is in the black box; it could even work by
>>> magic.
>>>
>>>
>>> Then why draw the line at "surrounding tissue".  Why not the external
>>> environment?
>>>
>>
>> Keep expanding the part that is replaced and you replace the whole brain
>> and the whole organism.
>>
>> Are you saying you can't imagine being "conscious" but in a different way?
>>>
>>
>> I think it is possible but I don’t think it could happen if my neurones
>> were replaced by functionally equivalent components. If it’s functionally
>> equivalent, my behaviour would be unchanged,
>>
>>
>> I agree with that.  But you've already supposed that functional
>> equivalence at the behavior level implies preservation of consciousness.
>> So what I'm considering is replacements in the brain far above the neuron
>> level, say at the level of whole functional groups of the brain, e.g. the
>> visual system, the auditory system, the memory,...  Would functional
>> equivalence at the body/brain interface then still imply consciousness
>> equivalence?
>>
>
> I think it would, because I don’t think there are isolated consciousness
> modules in the brain. A large enough change in visual experience will be
> noticed by the subject, who will report that things look different. This
> could only happen if there is a change in the input to the language system
> from the visual system; but we have assumed that the output from the visual
> system is the same, and only the consciousness has changed, leading to a
> contradiction.
>
>
> But what about internal systems which are independent of perception...the
> very reason Bruno wants to talk about dream states.  And I'm not
> necessarily asking that behavior be identical...just that the body/brain
> interface be the same.  The "brain" may be different in how it processes
> input from the eyeballs and hence report verbally different perceptions.
> In other words, I'm wondering how much does computationalism constrain
> consciousness.  My intuition is that there could be a lot of difference in
> consci

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 25 March 2018 at 20:18, Bruno Marchal <marc...@ulb.ac.be> wrote:

>
> On 21 Mar 2018, at 23:49, Stathis Papaioannou <stath...@gmail.com> wrote:
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> From: Stathis Papaioannou <stath...@gmail.com>
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>>> From: Stathis Papaioannou < <stath...@gmail.com>stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>>
>>>>
>>>> If the theory is that if the observable behaviour of the brain is
>>>> replicated, then consciousness will also be replicated, then the clear
>>>> corollary is that consciousness can be inferred from observable behaviour.
>>>> Which implies that I can be as certain of the consciousness of other people
>>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>>>> distinctions that computationalism rely on so much: only 1p is "certainly
>>>> certain". But if I can reliably infer consciousness in others, then other
>>>> things can be as certain as 1p experiences
>>>>
>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness.
>
>
> Consciousness is an attribute of the abstract immaterial person. The
> locally material brain is only responsible for the relative manifestation
> of consciousness. The computations do not create consciousness, but
> channel its possible differentiation. But that should not change your point.
>

But you start off with the assumption that replacing your brain with a
machine will preserve consciousness - "comp". From this assumption, the
rest follows, including the conclusion that there isn't actually a primary
physical brain.

> The argument I am making is that if any part of the brain is replaced with
> a functionally identical non-biological part, engineered to replicate its
> interactions with the surrounding tissue,  consciousness will also
> necessarily be replicated; for if not, an absurd situation would result,
> whereby consciousness can radically change but the subject not notice, or
> consciousness decouple completely from behaviour, or consciousness flip on
> or off with the chan

Re: How to live forever

2018-03-22 Thread Stathis Papaioannou
On Fri, 23 Mar 2018 at 11:32 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> From: Stathis Papaioannou <stath...@gmail.com>
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> From: Stathis Papaioannou < <stath...@gmail.com>stath...@gmail.com>
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <
>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>
>>> From: Stathis Papaioannou < <stath...@gmail.com>stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>>
>>>>
>>>> If the theory is that if the observable behaviour of the brain is
>>>> replicated, then consciousness will also be replicated, then the clear
>>>> corollary is that consciousness can be inferred from observable behaviour.
>>>> Which implies that I can be as certain of the consciousness of other people
>>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>>>> distinctions that computationalism relies on so much: only 1p is "certainly
>>>> certain". But if I can reliably infer consciousness in others, then other
>>>> things can be as certain as 1p experiences
>>>>
>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness. The argument I am making is that if any part of the brain is
> replaced with a functionally identical non-biological part, engineered to
> replicate its interactions with the surrounding tissue,  consciousness will
> also necessarily be replicated; for if not, an absurd situation would
> result, whereby consciousness can radically change but the subject not
> notice, or consciousness decouple completely from behaviour, or
> consciousness flip on or off with the change of one subatomic particle.
>
>
> There still seems to be some circularity there -- consciousness is part of
> the functionality of the brain, or parts thereof, so maintaining
> functionality requires maintenance of consciousness.
>

By functionality here I specifically mean the observable behaviour of the
brain. Consciousness is special in that it is not directly observable as,
for example, the potential difference across a cell membrane or the
contraction of muscle is.

One would really need some independent measure of

Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Thu, 22 Mar 2018 at 8:57 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>
>
> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>
>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net>
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>>
>>>> The interesting thing is that you can draw conclusions about consciousness
>>>> without being able to define it or detect it.
>>>>
>>>> I agree.
>>>>
>>>>
>>>> The claim is that IF an entity
>>>> is conscious THEN its consciousness will be preserved if brain function is
>>>> preserved despite changing the brain substrate.
>>>>
>>>> Ok, this is computationalism. I also bet on computationalism, but I
>>>> think we must proceed with caution and not forget that we are just
>>>> assuming this to be true. Your thought experiment is convincing but is
>>>> not a proof. You do expose something that I agree with: that
>>>> non-computationalism sounds silly.
>>>>
>>>> But does it sound so silly if we propose substituting a completely
>>>> different kind of computer, e.g. von Neumann architecture or one that just
>>>> records everything instead of an episodic associative memory, for the
>>>> brain.  The Church-Turing conjecture says it can compute the same
>>>> functions.  But does it instantiate the same consciousness.  My intuition
>>>> is that it would be "conscious" but in some different way; for example by
>>>> having the kind of memory you would have if you could review a movie of
>>>> any interval in your past.
>>>>
>>>
>>> I think it would be conscious in the same way if you replaced neural
>>> tissue with a black box that interacted with the surrounding tissue in the
>>> same way. It doesn’t matter what is in the black box; it could even work by
>>> magic.
>>>
>>>
>>> Then why draw the line at "surrounding tissue".  Why not the external
>>> environment?
>>>
>>
>> Keep expanding the part that is replaced and you replace the whole brain
>> and the whole organism.
>>
>> Are you saying you can't imagine being "conscious" but in a different way?
>>>
>>
>> I think it is possible but I don’t think it could happen if my neurones
>> were replaced by functionally equivalent components. If it’s functionally
>> equivalent, my behaviour would be unchanged,
>>
>>
>> I agree with that.  But you've already supposed that functional
>> equivalence at the behavior level implies preservation of consciousness.
>> So what I'm considering is replacements in the brain far above the neuron
>> level, say at the level of whole functional groups of the brain, e.g. the
>> visual system, the auditory system, the memory,...  Would functional
>> equivalence at the body/brain interface then still imply consciousness
>> equivalence?
>>
>
> I think it would, because I don’t think there are isolated consciousness
> modules in the brain. A large enough change in visual experience will be
> noticed by the subject, who will report that things look different. This
> could only happen if there is a change in the input to the language system
> from the visual system; but we have assumed that the output from the visual
> system is the same, and only the consciousness has changed, leading to a
> contradiction.
>
>
> But what about internal systems which are independent of perception...the
> very reason Bruno wants to talk about dream states.  And I'm not
> necessarily asking that behavior be identical...just that the body/brain
> interface be the same.  The "brain" may be different in how it processes
> input from the eyeballs and hence report verbally different perceptions.
> In other words, I'm wondering how much does computationalism constrain
> consciousness.  My intuition is that there could be a lot of difference in
> consciousness depending on how different perceptual inputs are process
> and/or merged and how internal simulations are handled.  To take a cru

Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> From: Stathis Papaioannou <stath...@gmail.com>
>
>
> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> From: Stathis Papaioannou < <stath...@gmail.com>stath...@gmail.com>
>>
>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>
>>>
>>> If the theory is that if the observable behaviour of the brain is
>>> replicated, then consciousness will also be replicated, then the clear
>>> corollary is that consciousness can be inferred from observable behaviour.
>>> Which implies that I can be as certain of the consciousness of other people
>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>>> distinctions that computationalism relies on so much: only 1p is "certainly
>>> certain". But if I can reliably infer consciousness in others, then other
>>> things can be as certain as 1p experiences
>>>
>>
>> You can’t reliably infer consciousness in others. What you can infer is
>> that whatever consciousness an entity has, it will be preserved if
>> functionally identical substitutions in its brain are made.
>>
>>
>> You have that backwards. You can infer consciousness in others, by
>> observing their behaviour. The alternative would be solipsism. Now, while
>> you can't prove or disprove solipsism in a mathematical sense, you can
>> reject solipsism as a useless theory, since it tells you nothing about
>> anything. Whereas science acts on the available evidence -- observations of
>> behaviour in this case.
>>
>> But we have no evidence that consciousness would be preserved under
>> functionally identical substitutions in the brain. Consciousness may be a
>> global affair, so functional equivalence may not be achievable, or even
>> definable, within the context of a conscious brain. Can you map the
>> functionality of even a single neuron? You are assuming that you can, but
>> if that function is global, then you probably can't. There is a fair amount
>> of glibness in your assumption that consciousness will be preserved under
>> such substitutions.
>>
>>
>>
>> You can’t know if a mouse is conscious, but you can know that if mouse
>> neurones are replaced with functionally identical electronic neurones its
>> behaviour will be the same and any consciousness it may have will also be
>> the same.
>>
>>
>> You cannot know this without actually doing the substitution and
>> observing the results.
>>
>
> So do you think that it is possible to replace the neurones with
> functionally identical neurones (same output for same input) and the
> mouse’s behaviour would *not* be the same?
>
>
> Individual neurons may not be the appropriate functional unit.
>
> It seems that you might be close to circularity -- neural functionality
> includes consciousness. So if I maintain neural functionality, I will
> maintain consciousness.
>

The only assumption is that the brain is somehow responsible for
consciousness. The argument I am making is that if any part of the brain is
replaced with a functionally identical non-biological part, engineered to
replicate its interactions with the surrounding tissue,  consciousness will
also necessarily be replicated; for if not, an absurd situation would
result, whereby consciousness can radically change but the subject not
notice, or consciousness decouple completely from behaviour, or
consciousness flip on or off with the change of one subatomic particle.
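[Editorial note: the replacement step in the argument above can be put as a toy program. This is purely illustrative; the "units" are hypothetical functions, not a model of real neurons. It shows only the logical point: swapping a component for a black box with the same input-output mapping cannot change the pipeline's observable behaviour at any stage of a gradual replacement.]

```python
# Toy sketch of the gradual-replacement argument (illustrative only):
# a "brain" is modelled as a pipeline of units; each unit is swapped in
# turn for a black box recording the same input/output mapping, and the
# overall behaviour is checked after every swap.

def unit_a(x):          # hypothetical "biological" units
    return 2 * x

def unit_b(x):
    return x + 3

def functionally_identical(unit, inputs):
    """Replacement black box: a lookup table of the unit's behaviour."""
    table = {x: unit(x) for x in inputs}
    return lambda x: table[x]

def behaviour(units, x):
    """Observable output of the whole pipeline for stimulus x."""
    for u in units:
        x = u(x)
    return x

stimuli = list(range(10))
original = [unit_a, unit_b]
baseline = [behaviour(original, x) for x in stimuli]

# Inputs each unit actually receives, propagated through the pipeline.
seen = [stimuli]
for u in original:
    seen.append([u(x) for x in seen[-1]])

brain = list(original)
for i, u in enumerate(original):
    brain[i] = functionally_identical(u, seen[i])
    # Behaviour is unchanged after each partial replacement...
    assert [behaviour(brain, x) for x in stimuli] == baseline
# ...and after full replacement, with no step at which it could differ.
assert [behaviour(brain, x) for x in stimuli] == baseline
```

Nothing here settles whether consciousness tracks the behaviour; it only makes explicit that there is no replacement step at which observable behaviour can change, which is the premise the reductio exploits.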

> --
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> From: Stathis Papaioannou <stath...@gmail.com>
>
> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>>
>> If the theory is that if the observable behaviour of the brain is
>> replicated, then consciousness will also be replicated, then the clear
>> corollary is that consciousness can be inferred from observable behaviour.
>> Which implies that I can be as certain of the consciousness of other people
>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>> distinctions that computationalism relies on so much: only 1p is "certainly
>> certain". But if I can reliably infer consciousness in others, then other
>> things can be as certain as 1p experiences
>>
>
> You can’t reliably infer consciousness in others. What you can infer is
> that whatever consciousness an entity has, it will be preserved if
> functionally identical substitutions in its brain are made.
>
>
> You have that backwards. You can infer consciousness in others, by
> observing their behaviour. The alternative would be solipsism. Now, while
> you can't prove or disprove solipsism in a mathematical sense, you can
> reject solipsism as a useless theory, since it tells you nothing about
> anything. Whereas science acts on the available evidence -- observations of
> behaviour in this case.
>
> But we have no evidence that consciousness would be preserved under
> functionally identical substitutions in the brain. Consciousness may be a
> global affair, so functional equivalence may not be achievable, or even
> definable, within the context of a conscious brain. Can you map the
> functionality of even a single neuron? You are assuming that you can, but
> if that function is global, then you probably can't. There is a fair amount
> of glibness in your assumption that consciousness will be preserved under
> such substitutions.
>
>
>
> You can’t know if a mouse is conscious, but you can know that if mouse
> neurones are replaced with functionally identical electronic neurones its
> behaviour will be the same and any consciousness it may have will also be
> the same.
>
>
> You cannot know this without actually doing the substitution and observing
> the results.
>

So do you think that it is possible to replace the neurones with
functionally identical neurones (same output for same input) and the
mouse’s behaviour would *not* be the same?
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>
>
> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>
>> The interesting thing is that you can draw conclusions about consciousness
>> without being able to define it or detect it.
>>
>> I agree.
>>
>>
>> The claim is that IF an entity
>> is conscious THEN its consciousness will be preserved if brain function is
>> preserved despite changing the brain substrate.
>>
>> Ok, this is computationalism. I also bet on computationalism, but I
>> think we must proceed with caution and not forget that we are just
>> assuming this to be true. Your thought experiment is convincing but is
>> not a proof. You do expose something that I agree with: that
>> non-computationalism sounds silly.
>>
>> But does it sound so silly if we propose substituting a completely
>> different kind of computer, e.g. von Neumann architecture or one that just
>> records everything instead of an episodic associative memory, for the
>> brain.  The Church-Turing conjecture says it can compute the same
>> functions.  But does it instantiate the same consciousness.  My intuition
>> is that it would be "conscious" but in some different way; for example by
>> having the kind of memory you would have if you could review a movie of
>> any interval in your past.
>>
>
> I think it would be conscious in the same way if you replaced neural
> tissue with a black box that interacted with the surrounding tissue in the
> same way. It doesn’t matter what is in the black box; it could even work by
> magic.
>
>
> Then why draw the line at "surrounding tissue".  Why not the external
> environment?
>

Keep expanding the part that is replaced and you replace the whole brain
and the whole organism.

Are you saying you can't imagine being "conscious" but in a different way?
>

I think it is possible but I don’t think it could happen if my neurones
were replaced by functionally equivalent components. If it’s functionally
equivalent, my behaviour would be unchanged, so I would have to communicate
that my consciousness had not changed. If, in fact, my consciousness had
changed, this means either I would not have noticed, in which case the idea
of consciousness loses meaning, or I would have noticed but been unable to
communicate it, from which point on my consciousness and my behaviour would
become decoupled, implying a type of substance dualism.
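[Editorial note: the "I would have to communicate that my consciousness had not changed" step can be sketched in code. A minimal illustration with hypothetical functions, not a claim about real visual systems: if the report depends only on a subsystem's output, then two subsystems with the same input-output mapping force the same report, however different their internals.]

```python
# If the report is a function only of the visual system's output, then
# identical input-output behaviour forces identical reports -- the
# subject has no channel through which to notice a change.
# (Hypothetical functions, illustrative only.)

def visual_biological(wavelength_nm):
    # one set of internals: explicit branching
    if wavelength_nm < 580:
        return "green"
    return "red"

# Different internals (a precomputed table), same input-output mapping.
_TABLE = {w: ("green" if w < 580 else "red") for w in range(380, 751)}

def visual_replacement(wavelength_nm):
    return _TABLE[wavelength_nm]

def report(percept):
    # the "language system": depends only on the percept it receives
    return f"I see something {percept}"

for w in (450, 550, 650):
    assert report(visual_biological(w)) == report(visual_replacement(w))
```

The decoupling alternative in the paragraph above would amount to the report depending on something other than the subsystem's output, which is what "a type of substance dualism" names.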

> --
Stathis Papaioannou



Re: How to live forever

2018-03-20 Thread Stathis Papaioannou
On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>
> The interesting thing is that you can draw conclusions about consciousness
> without being able to define it or detect it.
>
> I agree.
>
>
> The claim is that IF an entity
> is conscious THEN its consciousness will be preserved if brain function is
> preserved despite changing the brain substrate.
>
> Ok, this is computationalism. I also bet on computationalism, but I
> think we must proceed with caution and not forget that we are just
> assuming this to be true. Your thought experiment is convincing but is
> not a proof. You do expose something that I agree with: that
> non-computationalism sounds silly.
>
> But does it sound so silly if we propose substituting a completely
> different kind of computer, e.g. von Neumann architecture or one that just
> records everything instead of an episodic associative memory, for the
> brain.  The Church-Turing conjecture says it can compute the same
> functions.  But does it instantiate the same consciousness.  My intuition
> is that it would be "conscious" but in some different way; for example by
> having the kind of memory you would have if you could review a movie of
> any interval in your past.
>

I think it would be conscious in the same way if you replaced neural tissue
with a black box that interacted with the surrounding tissue in the same
way. It doesn’t matter what is in the black box; it could even work by
magic.

> --
Stathis Papaioannou



Re: How to live forever

2018-03-20 Thread Stathis Papaioannou
On Tue, 20 Mar 2018 at 7:57 pm, Telmo Menezes <te...@telmomenezes.com>
wrote:

> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett
> <bhkell...@optusnet.com.au> wrote:
> > From: Telmo Menezes <te...@telmomenezes.com>
> >
> >
> > On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett
> > <bhkell...@optusnet.com.au> wrote:
> >> From: Stathis Papaioannou <stath...@gmail.com>
> >>
> >>
> >> It is possible that consciousness is fully preserved until a threshold
> is
> >> reached then suddenly disappears. So if half the subject’s brain is
> >> replaced, he behaves normally and has normal consciousness, but if one
> >> more
> >> neurone is replaced he continues to behave normally but becomes a
> zombie.
> >> Moreover, since neurones are themselves complex systems it could be
> broken
> >> down further: half of that final neurone could be replaced with no
> change
> >> to
> >> consciousness, but when a particular membrane protein is replaced with a
> >> non-biological nanomachine the subject will suddenly become a zombie.
> And
> >> we
> >> need not stop here, because this protein molecule could also be replaced
> >> gradually, for example by non-biological radioisotopes. If half the
> atoms
> >> in
> >> this protein are replaced, there is no change in behaviour and no change
> >> in
> >> consciousness; but when one more atom is replaced a threshold is reached
> >> and
> >> the subject suddenly loses consciousness. So zombification could turn on
> >> the
> >> addition or subtraction of one neutron. Are you prepared to go this far
> to
> >> challenge the idea that if the observable behaviour of the brain is
> >> replicated, consciousness will also be replicated?
> >>
> >>
> >> If the theory is that if the observable behaviour of the brain is
> >> replicated, then consciousness will also be replicated, then the clear
> >> corollary is that consciousness can be inferred from observable
> behaviour.
> >
> > For this to be a theory in the scientific sense, one needs some way to
> > detect consciousness. In that case your corollary becomes a tautology:
> >
> > (a) If one can detect consciousness then one can detect consciousness.
> >
> > The other option is to assume that observable behaviors in the brain
> > imply consciousness -- because "common sense", because experts say so,
> > whatever. In this case it becomes circular reasoning:
> >
> > (b) Assuming that observable behaviors in the brain imply
> > consciousness, consciousness can be inferred from brain behaviors.
> >
> >
> > I was responding to the claim by Stathis that consciousness will follow
> > replication of observable behaviour. It seemed to me that this was
> proposed
> > as a theory: "If the observable behaviour of is replicated then
> > consciousness will also be replicated."
>
> Lawrence is proposing that something specific about the brain might be
> necessary for consciousness to arise. He proposed a scenario where
> parts of the brain are replaced with a computer, and behavior is
> maintained while consciousness is lost (p-zombie). Stathis is
> proposing a thought experiment that attempts reductio ad absurdum on
> this scenario. Although this is all interesting speculation, there is
> no scientific theory, because there is no way to perform an
> experiment, because there is no scientific instrument that detects
> consciousness. In the end I still don't know, as scientific fact, if
> others are conscious.


The interesting thing is that you can draw conclusions about consciousness
without being able to define it or detect it. The claim is that IF an
entity is conscious THEN its consciousness will be preserved if brain
function is preserved despite changing the brain substrate.

You were the first to call it a theory, and this is why I reacted.
>
> > I was merely pointing out
> > consequences of this theory, so your claims of tautology and/or
> circularity
> > rather miss the point: the consequences of any theory are either
> tautologies
> > or circularities in that sense, because they are implications of the
> theory.
>
> Tautologies are fine indeed. I did not call (a) a tautology as an
> insult, merely to point out that the hard part is still missing, and
> that assuming that it is solved does not lead to anywhere interesting.
>
> Circularities are, of course, not fine. You cannot assume that you can
> infer consciousness from behavior, and then use this assumption to
> conclude that consciousness can be inferred from behavior.

Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> From: Stathis Papaioannou <stath...@gmail.com>
>
>
> It is possible that consciousness is fully preserved until a threshold is
> reached then suddenly disappears. So if half the subject’s brain is
> replaced, he behaves normally and has normal consciousness, but if one more
> neurone is replaced he continues to behave normally but becomes a zombie.
> Moreover, since neurones are themselves complex systems it could be broken
> down further: half of that final neurone could be replaced with no change
> to consciousness, but when a particular membrane protein is replaced with a
> non-biological nanomachine the subject will suddenly become a zombie. And
> we need not stop here, because this protein molecule could also be replaced
> gradually, for example by non-biological radioisotopes. If half the atoms
> in this protein are replaced, there is no change in behaviour and no change
> in consciousness; but when one more atom is replaced a threshold is reached
> and the subject suddenly loses consciousness. So zombification could turn
> on the addition or subtraction of one neutron. Are you prepared to go this
> far to challenge the idea that if the observable behaviour of the brain is
> replicated, consciousness will also be replicated?
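
To make the threshold scenario concrete, here is a toy sketch (purely illustrative: the threshold value is a hypothetical stand-in, not anything measurable):

```python
# Toy model of gradual substrate replacement (illustrative only).
TOTAL_UNITS = 1_000_000        # say, atoms in the final protein
THRESHOLD = 731_442            # hypothetical zombification point (assumed)

def behaviour(replaced: int) -> str:
    # Functional equivalence: behaviour never depends on the substrate.
    return "normal"

def conscious(replaced: int) -> bool:
    # The scenario under attack: consciousness vanishes at a threshold.
    return replaced < THRESHOLD

for n in (0, THRESHOLD - 1, THRESHOLD, TOTAL_UNITS):
    print(n, behaviour(n), conscious(n))
# Behaviour is identical in every row, yet consciousness flips between
# THRESHOLD-1 and THRESHOLD: zombification turns on one replaced unit.
```

The absurdity is visible in the model: nothing observable distinguishes the rows, yet the posited consciousness switches off on a single unit of replacement.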
>
>
> If the theory is that if the observable behaviour of the brain is
> replicated, then consciousness will also be replicated, then the clear
> corollary is that consciousness can be inferred from observable behaviour.
> Which implies that I can be as certain of the consciousness of other people
> as I am of my own. This seems to do some violence to the 1p/1pp/3p
> distinctions that computationalism relies on so much: only 1p is "certainly
> certain". But if I can reliably infer consciousness in others, then other
> things can be as certain as 1p experiences
>

You can’t reliably infer consciousness in others. What you can infer is
that whatever consciousness an entity has, it will be preserved if
functionally identical substitutions in its brain are made. You can’t know
if a mouse is conscious, but you can know that if mouse neurones are
replaced with functionally identical electronic neurones its behaviour will
be the same and any consciousness it may have will also be the same.

> --
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Tue, 20 Mar 2018 at 3:49 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Monday, March 19, 2018 at 6:47:01 AM UTC-5, stathisp wrote:
>
>>
>> On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>>>
>>>>
>>>>
>>>> On 19 March 2018 at 12:14, Lawrence Crowell <goldenfield...@gmail.com>
>>>> wrote:
>>>>
>>>>> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>>>>>
>>>>>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
>>>>>> goldenfield...@gmail.com> wrote:
>>>>>>
>>>>>> *> The MH spacetimes have Cauchy horizons that because they pile up
>>>>>>> geodesics can be a sort of singularity.*
>>>>>>
>>>>>>
>>>>>> That’s not the only thing they have, MH spacetimes also have closed
>>>>>> timelike curves and logical paradoxes produced by them, one of them being
>>>>>> the one found by Turing. They also have naked singularities that nobody 
>>>>>> has
>>>>>> ever seen the slightest hint of. And if you need to go to as exotic a 
>>>>>> place
>>>>>> as the speculative interior of a Black Hole to find a reason why Cryonics
>>>>>> might not work I am greatly encouraged.
>>>>>>
>>>>>
>>>>> Not all MH spaces have closed timelike curves.
>>>>>
>>>>>
>>>>>>
>>>>>> *> The subject of NP-completeness came up because of my conjecture
>>>>>>> about there being a sort of code associated with a conscious entity 
>>>>>>> that is
>>>>>>> not computable or if computable is intractable in NP. *
>>>>>>
>>>>>>
>>>>>> NP-completeness is sorta weird and consciousness is sorta weird, but
>>>>>> other than that is there any reason to think the two things are related?
>>>>>>
>>>>>
>>>>> This seems to be something you are not registering. Classic
>>>>> NP-complete problems involve cataloging subgraphs and determining the 
>>>>> rules
>>>>> for all subgraphs in a graph. There are other similar combinatoric 
>>>>> problems
>>>>> that are NP complete. A map from a brain to a computer is going to require
>>>>> knowing how to handle these problems. Quantum computers do not help much.
>>>>>
>>>>>
>>>>>>
>>>>>> *> It could have some bearing on the ability to emulate consciousness
>>>>>>> in a computer.*
>>>>>>
>>>>>>
>>>>>> How do you figure that? Both my brain and my computer are made of
>>>>>> matter that obeys the laws of physics, and matter that obeys the laws of
>>>>>> physics has never been observed to compute NP-complete problems in
>>>>>> polynomial time, much less less find the answer to a non-computable
>>>>>> question, like “what is the 7918th Busy Beaver number?”.
>>>>>>
>>>>>
>>>>> And for this reason it could be impossible to map brain states into a
>>>>> computer and capture a person completely. Of course brains and computers
>>>>> are made of matter. So is a pile of shit also made of matter. Based on 
>>>>> what
>>>>> we know about bacteria and their network communicating by electrical
>>>>> potentials the pile of shit may have more in the way of consciousness than
>>>>> a computer.
>>>>>
>>>>> As for the rest I think a lot of this sort of idea is chasing after
>>>>> some crazy dream. There is in some ways a problem with doing that. As
>>>>> things stand now I would not do the upload.  Below is a picture of some
>>>>> aspect of this.
>>>>>
>>>>>
>>>>> <https://lh3.googleusercontent.com/-B42zD6RjTlo/Wq8Or4mWXiI/DSs/rSPOyS5rTfwkhdWkws8ll7Huj6DVNHMqgCLcBGAs/s1600/Why%2Bis%2Bthe%2Bdog%2Bhappier.png>
>>>>>
>>>> Could you say if you think the observable behaviour of the brain (and
>>>> hence of the person whose muscles are controlled by the brain) could

Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>
>>
>>
>> On 19 March 2018 at 12:14, Lawrence Crowell <goldenfield...@gmail.com>
>> wrote:
>>
>>> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>>>
>>>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
>>>> goldenfield...@gmail.com> wrote:
>>>>
>>>> *> The MH spacetimes have Cauchy horizons that because they pile up
>>>>> geodesics can be a sort of singularity.*
>>>>
>>>>
>>>> That’s not the only thing they have, MH spacetimes also have closed
>>>> timelike curves and logical paradoxes produced by them, one of them being
>>>> the one found by Turing. They also have naked singularities that nobody has
>>>> ever seen the slightest hint of. And if you need to go to as exotic a place
>>>> as the speculative interior of a Black Hole to find a reason why Cryonics
>>>> might not work I am greatly encouraged.
>>>>
>>>
>>> Not all MH spaces have closed timelike curves.
>>>
>>>
>>>>
>>>> *> The subject of NP-completeness came up because of my conjecture
>>>>> about there being a sort of code associated with a conscious entity that 
>>>>> is
>>>>> not computable or if computable is intractable in NP. *
>>>>
>>>>
>>>> NP-completeness is sorta weird and consciousness is sorta weird, but
>>>> other than that is there any reason to think the two things are related?
>>>>
>>>
>>> This seems to be something you are not registering. Classic NP-complete
>>> problems involve cataloging subgraphs and determining the rules for all
>>> subgraphs in a graph. There are other similar combinatoric problems that
>>> are NP complete. A map from a brain to a computer is going to require
>>> knowing how to handle these problems. Quantum computers do not help much.
>>>
>>>
>>>>
>>>> *> It could have some bearing on the ability to emulate consciousness
>>>>> in a computer.*
>>>>
>>>>
>>>> How do you figure that? Both my brain and my computer are made of
>>>> matter that obeys the laws of physics, and matter that obeys the laws of
>>>> physics has never been observed to compute NP-complete problems in
> >>>> polynomial time, much less find the answer to a non-computable
>>>> question, like “what is the 7918th Busy Beaver number?”.
>>>>
>>>
>>> And for this reason it could be impossible to map brain states into a
>>> computer and capture a person completely. Of course brains and computers
>>> are made of matter. So is a pile of shit also made of matter. Based on what
>>> we know about bacteria and their network communicating by electrical
>>> potentials the pile of shit may have more in the way of consciousness than
>>> a computer.
>>>
>>> As for the rest I think a lot of this sort of idea is chasing after some
>>> crazy dream. There is in some ways a problem with doing that. As things
>>> stand now I would not do the upload.  Below is a picture of some aspect of
>>> this.
>>>
>>>
>>> <https://lh3.googleusercontent.com/-B42zD6RjTlo/Wq8Or4mWXiI/DSs/rSPOyS5rTfwkhdWkws8ll7Huj6DVNHMqgCLcBGAs/s1600/Why%2Bis%2Bthe%2Bdog%2Bhappier.png>
>>>
>> Could you say if you think the observable behaviour of the brain (and
>> hence of the person whose muscles are controlled by the brain) could be
>> replaced by a computer, and, if the answer is yes, if you still think it is
>> possible that the consciousness might not be preserved? And if the answer
>> is also yes to the second question, what you think it would be like if your
>> consciousness was changed by replacing part of your brain, but your brain
>> still forced your body to behave in the same way?
>>
>>
>> --
>> Stathis Papaioannou
>>
>
> I really do not know. I will say it is possible in principle to replace
> the executive parts of the brain with a computer, but the result
> could be a sort of zombie. There are too many unknowns and unknowns with no
> Bayesian priors, or unknown unknowns. We are in a domain of possibles,
> plausibles and maybe a Jupiter computer-brain. There is so little to go
> with this, and to be honest a lot more possible ob

Re: How to live forever

2018-03-18 Thread Stathis Papaioannou
On 19 March 2018 at 12:14, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>
>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>> *> The MH spacetimes have Cauchy horizons that because they pile up
>>> geodesics can be a sort of singularity.*
>>
>>
>> That’s not the only thing they have, MH spacetimes also have closed
>> timelike curves and logical paradoxes produced by them, one of them being
>> the one found by Turing. They also have naked singularities that nobody has
>> ever seen the slightest hint of. And if you need to go to as exotic a place
>> as the speculative interior of a Black Hole to find a reason why Cryonics
>> might not work I am greatly encouraged.
>>
>
> Not all MH spaces have closed timelike curves.
>
>
>>
>> *> The subject of NP-completeness came up because of my conjecture about
>>> there being a sort of code associated with a conscious entity that is not
>>> computable or if computable is intractable in NP. *
>>
>>
>> NP-completeness is sorta weird and consciousness is sorta weird, but
>> other than that is there any reason to think the two things are related?
>>
>
> This seems to be something you are not registering. Classic NP-complete
> problems involve cataloging subgraphs and determining the rules for all
> subgraphs in a graph. There are other similar combinatoric problems that
> are NP complete. A map from a brain to a computer is going to require
> knowing how to handle these problems. Quantum computers do not help much.
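
For a concrete sense of the combinatorial blow-up behind such subgraph problems, here is a minimal brute-force k-clique search, a classic NP-complete problem (a standard textbook illustration, not a model of the brain-mapping task itself):

```python
from itertools import combinations

def has_clique(adj, k):
    """Brute-force k-clique search over an adjacency-set graph.
    It inspects all C(n, k) vertex subsets -- the exponential search
    space characteristic of NP-complete subgraph problems."""
    for subset in combinations(list(adj), k):
        # A clique requires every pair in the subset to be adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# A 5-cycle with one chord (0-2): it contains a triangle but no 4-clique.
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
print(has_clique(adj, 3), has_clique(adj, 4))  # True False
```

No known algorithm avoids this exponential search in the worst case, which is the intractability being pointed to.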
>
>
>>
>> *> It could have some bearing on the ability to emulate consciousness in
>>> a computer.*
>>
>>
>> How do you figure that? Both my brain and my computer are made of matter
>> that obeys the laws of physics, and matter that obeys the laws of physics
>> has never been observed to compute NP-complete problems in polynomial time,
>> much less less find the answer to a non-computable question, like “what is
>> the 7918th Busy Beaver number?”.
>>
>
> And for this reason it could be impossible to map brain states into a
> computer and capture a person completely. Of course brains and computers
> are made of matter. So is a pile of shit also made of matter. Based on what
> we know about bacteria and their network communicating by electrical
> potentials the pile of shit may have more in the way of consciousness than
> a computer.
>
> As for the rest I think a lot of this sort of idea is chasing after some
> crazy dream. There is in some ways a problem with doing that. As things
> stand now I would not do the upload.  Below is a picture of some aspect of
> this.
>
>
> <https://lh3.googleusercontent.com/-B42zD6RjTlo/Wq8Or4mWXiI/DSs/rSPOyS5rTfwkhdWkws8ll7Huj6DVNHMqgCLcBGAs/s1600/Why%2Bis%2Bthe%2Bdog%2Bhappier.png>
>
Could you say if you think the observable behaviour of the brain (and hence
of the person whose muscles are controlled by the brain) could be replaced
by a computer, and, if the answer is yes, if you still think it is possible
that the consciousness might not be preserved? And if the answer is also
yes to the second question, what you think it would be like if your
consciousness was changed by replacing part of your brain, but your brain
still forced your body to behave in the same way?


-- 
Stathis Papaioannou



Re: How to live forever

2018-03-17 Thread Stathis Papaioannou
On Sat, 17 Mar 2018 at 12:36 pm, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

>
>
> On Friday, March 16, 2018 at 6:14:22 PM UTC-6, stathisp wrote:
>
>>
>>
>> On 16 March 2018 at 22:57, Lawrence Crowell <goldenfield...@gmail.com>
>> wrote:
>>
>>> On Thursday, March 15, 2018 at 8:34:34 PM UTC-6, stathisp wrote:
>>>>
>>>>
>>>> On Thu, 15 Mar 2018 at 10:36 pm, Lawrence Crowell <
>>>> goldenfield...@gmail.com> wrote:
>>>>
>>>>> The Aaronson discussion about soap bubbles and optimization is in line
>>>>> with something I have maintained. Eternal black holes with the inner
>>>>> horizon r_- continuous with I^+ means in principle a Turing machine
>>>>> approaching r_- could receive an infinite stream of bits or qubits so it
>>>>> could make a catalog of all Turing machines that halt and do not halt.
>>>>> Quantum mechanics enters into the physics, such as Hawking radiation, that
> >>>>> separates r_- from I^+. However, this may adjust the Chaitin halting
>>>>> probability. With NP-complete problems this would translate into the
>>>>> existence of systems that approximate such solutions.
>>>>>
>>>>> I suspect the individual consciousness of a person or even animals is
>>>>> wrapped up in some sort of code, that while it might be derived in some
>>>>> approximate way it is tough to find from outside. The thesis that all of
>>>>> consciousness is a manifestation of calculation presumes the brain is
>>>>> primarily involved with computation. The problem is that the brain 
>>>>> computes
>>>>> little in the way of mathematical solutions, but rather is involved with
>>>>> maintenance of homeostasis of an organism. Further, consciousness is less
>>>>> about solving problems than it is about maintaining a self-referenced
>>>>> narrative that is a positive feedback and forms a meaning cycle.
>>>>>
>>>>
>>>> The sequence of reasoning is not that the brain does computation, and
>>>> that therefore consciousness is computation. It is that the brain
>>>> apparently gives rise to consciousness, and if brain components can be
>>>> replaced by a computer, then consciousness should be preserved, otherwise
>>>> the implausible situation would occur where consciousness gradually fades
>>>> or suddenly disappears during the replacement process despite no change in
>>>> behaviour. Against this is the possibility that some component of the brain
>>>> utilises non-computable physics, so the replacement would fail; but there
>>>> is no evidence for this, and it seems to me the main reason such theories
>>>> are entertained at all is a disdain for the idea that human beings are just
>>>> ordinary matter.
>>>>
>>>
>>> The point is not that neurological processes can't be modeled using
>>> biophysical algorithms. Below is a neural circuit diagram that illustrates
>>> a feedback structure. These neurons could be replaced by flip-flop systems
>>> and other electronics. In that way this system could be modeled. My main
>>> point is there is a distinction between the territory and the map. Feynman
>>> also made the quip that simulation is like masturbation; it is fine until
>>> you start thinking it is the real thing.
>>>
>>> LC
>>>
>>>
>>> <https://lh3.googleusercontent.com/-UI-xEX4ZlC4/WquuXqqaX7I/DQ4/oYYYNdMTQvIDc4isEF3myVIliqK2Mm5lACLcBGAs/s1600/thalamocortical%2Bcircuit.gif>
>>>
>>>
>> You're suggesting that consciousness could be separated from the
>> associated behaviour. That would be very strange. It would mean that you
>> could replace part of a person's brain with an electronic system and the
>> person would behave exactly the same, but their consciousness would be
>> different. If their consciousness is different, they should be able to
>> notice this and communicate it, at least if the difference is large enough.
>> But if the neural replacement is functionally equivalent, they will not be
>> able to communicate it, because their brain will continue sending signals
>> to the muscles responsible for communication as if nothing had changed. So
>> either the subject would be unable to notice a large change in
>> consciousness, or they would notice it but, against their wishes, speech
>> would continue coming out of their mouth indicating that everything was just the same.

Re: How to live forever

2018-03-16 Thread Stathis Papaioannou
On 16 March 2018 at 22:57, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Thursday, March 15, 2018 at 8:34:34 PM UTC-6, stathisp wrote:
>>
>>
>> On Thu, 15 Mar 2018 at 10:36 pm, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>> The Aaronson discussion about soap bubbles and optimization is in line
>>> with something I have maintained. Eternal black holes with the inner
>>> horizon r_- continuous with I^+ means in principle a Turing machine
>>> approaching r_- could receive an infinite stream of bits or qubits so it
>>> could make a catalog of all Turing machines that halt and do not halt.
>>> Quantum mechanics enters into the physics, such as Hawking radiation, that
>>> separates r_- from I^+. However, this may adjust the Chaitin halting
>>> probability. With NP-complete problems this would translate into the
>>> existence of systems that approximate such solutions.
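
The asymmetry in that catalogue can be sketched with ordinary computation: halting is recursively enumerable, so a dovetailing procedure eventually certifies every machine that halts, while no finite step budget ever certifies a non-halter — that is the gap the infinite bit-stream near r_- is invoked to close. A minimal sketch with toy stand-ins for machines (not real Turing machines):

```python
def dovetail(machines, rounds):
    """Run every machine with an increasing step budget (dovetailing).
    Every halter is eventually detected; non-halters are never certified."""
    halted = {}
    for budget in range(1, rounds + 1):
        for name, run in machines.items():
            if name not in halted:
                result = run(budget)       # steps taken, or None
                if result is not None:
                    halted[name] = result
    return halted

def halts_after(k):
    # Toy stand-in for a machine that halts after exactly k steps.
    return lambda budget: k if budget >= k else None

machines = {
    "M1": halts_after(3),
    "M2": lambda budget: None,   # a non-halter: never certified
    "M3": halts_after(7),
}
print(dovetail(machines, 10))  # {'M1': 3, 'M3': 7}
```

M2 is simply absent from the catalogue after any finite number of rounds; only a completed infinity of steps would settle it.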
>>>
>>> I suspect the individual consciousness of a person or even animals is
>>> wrapped up in some sort of code, that while it might be derived in some
>>> approximate way it is tough to find from outside. The thesis that all of
>>> consciousness is a manifestation of calculation presumes the brain is
>>> primarily involved with computation. The problem is that the brain computes
>>> little in the way of mathematical solutions, but rather is involved with
>>> maintenance of homeostasis of an organism. Further, consciousness is less
>>> about solving problems than it is about maintaining a self-referenced
>>> narrative that is a positive feedback and forms a meaning cycle.
>>>
>>
>> The sequence of reasoning is not that the brain does computation, and
>> that therefore consciousness is computation. It is that the brain
>> apparently gives rise to consciousness, and if brain components can be
>> replaced by a computer, then consciousness should be preserved, otherwise
>> the implausible situation would occur where consciousness gradually fades
>> or suddenly disappears during the replacement process despite no change in
>> behaviour. Against this is the possibility that some component of the brain
>> utilises non-computable physics, so the replacement would fail; but there
>> is no evidence for this, and it seems to me the main reason such theories
>> are entertained at all is a disdain for the idea that human beings are just
>> ordinary matter.
>>
>
> The point is not that neurological processes can't be modeled using
> biophysical algorithms. Below is a neural circuit diagram that illustrates
> a feedback structure. These neurons could be replaced by flip-flop systems
> and other electronics. In that way this system could be modeled. My main
> point is there is a distinction between the territory and the map. Feynman
> also made the quip that simulation is like masturbation; it is fine until
> you start thinking it is the real thing.
>
> LC
>
>
> <https://lh3.googleusercontent.com/-UI-xEX4ZlC4/WquuXqqaX7I/DQ4/oYYYNdMTQvIDc4isEF3myVIliqK2Mm5lACLcBGAs/s1600/thalamocortical%2Bcircuit.gif>
>
>
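On the flip-flop comparison: the analogy holds in one narrow sense, in that both a cross-coupled NOR pair (an SR latch) and a mutually inhibitory neuron pair store their state in the feedback loop itself. A minimal sketch (illustrative only; nothing here models real neurons):

```python
def nor(a: int, b: int) -> int:
    return int(not (a or b))

def sr_latch(q: int, s: int, r: int) -> int:
    """Settle a cross-coupled NOR pair (SR latch) and return Q.
    The bit is stored by the feedback loop between the two gates,
    loosely analogous to a reverberating inhibitory neural circuit."""
    q_bar = nor(q, s)
    for _ in range(4):          # a few passes are enough to settle
        q = nor(q_bar, r)
        q_bar = nor(q, s)
    return q

q = sr_latch(0, s=1, r=0)   # set
print(q)                    # 1
q = sr_latch(q, s=0, r=0)   # hold: the loop retains the bit
print(q)                    # 1
q = sr_latch(q, s=0, r=1)   # reset
print(q)                    # 0
```

Whether such a functional substitution preserves anything beyond behaviour is, of course, exactly the point in dispute.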
You're suggesting that consciousness could be separated from the associated
behaviour. That would be very strange. It would mean that you could replace
part of a person's brain with an electronic system and the person would
behave exactly the same, but their consciousness would be different. If
their consciousness is different, they should be able to notice this and
communicate it, at least if the difference is large enough. But if the
neural replacement is functionally equivalent, they will not be able to
communicate it, because their brain will continue sending signals to the
muscles responsible for communication as if nothing had changed. So either
the subject would be unable to notice a large change in consciousness, or
they would notice it but, against their wishes, speech would continue
coming out of their mouth indicating that everything was just the same.


-- 
Stathis Papaioannou



Re: How to live forever

2018-03-15 Thread Stathis Papaioannou
 it would quickly get weeded out of the gene pool." *
>>
>> By the way I highly recommend Aaronson's book "Quantum Computing since
>> Democritus".
>>
>>> *> I can see some plausible prospect of removing a brain or CNS from a
>>> body and putting that in another body.*
>>
>> Or connecting your brain to a virtual body, in fact that could have
>> already happened to you for all you know. And if you’re not already a brain
>> in a vat you’re certainly a brain in a box made of bone.
>>
>>> *> Even there I suspect the experience might be terribly disorienting,
>>> as bodies have a sort of "body brain," which involve a dog's brain worth of
>>> neurons, and one would not just have a new body so much as you would
>>> neurologically negotiate with the new body for a while.*
>>
>> So did Stephen Hawking die today or did he die in 1973 when he started to
>> lose control of his body? I am not a world class athlete so if I woke up
>> and found that had changed and now my body had the strength of a sumo
>> wrestler, the endurance of a marathon runner, and the muscular coordination
>> of a gold medal gymnast I wouldn’t be very upset.
>>
>>> *> I am not sure many of these things will happen. *
>>
>> Do you need to be certain of the outcome before you take any action?
>> Suppose you were on a sinking ship in a hurricane and the radio is out so
>> no SOS has been sent and you’re very far from the nearest land. There is
>> a lifeboat but it's small and the waves are mountainous and the ocean is
>> huge. So, would you get into the lifeboat? As for me I agree with Dylan
>> Thomas and would rather not go gentle into that good night and would prefer
>> to rage against the dying of the light.
>>
>>  John K Clark
>>
>>
>>
>>
>>
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
-- 
Stathis Papaioannou



Re: Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all)

2018-02-11 Thread Stathis Papaioannou
On Mon, 12 Feb 2018 at 9:30 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> Scott Aaronson has an interesting blog entry on quantum interpretations:
>
> https://www.scottaaronson.com/blog/?p=3628
>
> He seems somewhat conflicted over which interpretation to believe.
> "Anyway, as I said, MWI is the best interpretation if we leave ourselves
> out of the picture. what would it be like to be maintained in a
> coherent superposition of thinking two different thoughts A and B, and
> then to get measured in the |A>+|B>, |A>-|B> basis? Would it even be
> like anything? Or is there something about our consciousness that
> depends on decoherence, irreversibility, full participation in the arrow
> of time, not living in an enclosed little unitary box ... something
> that we'd necessarily destroy if we tried to set up a large-scale
> interference experiment on our own brains, or any other conscious
> entities?"
>
> I think the idea that consciousness depends on full participation in the
> arrow of time -- namely, the irreversible formation of memories -- is
> something that needs to be taken seriously.


All the interpretations are consistent with the same physical predictions.
If some exclude consciousness, then that makes consciousness somehow
separate from the workings of the physical world; supernatural, in other
words.
-- 
Stathis Papaioannou



Re: Inside Black Holes

2018-01-13 Thread Stathis Papaioannou
On Sun, 14 Jan 2018 at 9:44 am, <agrayson2...@gmail.com> wrote:

>
>
> On Saturday, January 13, 2018 at 2:59:00 PM UTC-7, Brent wrote:
>>
>> Classically, the radiation isn't "trapped"; it goes to the singularity
>> (what the QM does? dunno).  The inflowing radiation is just that starlight
>> that falls on the event horizon...which is not particularly bright.
>>
>> Brent
>>
>
> I'm referring to the INTERIOR of the BH. If the radiation is trapped
> inside, the environment is likely hot and bright. What happens to infalling
> matter? Converted to radiation? AG
>

The event horizon is usually considered the border between the black hole
and the rest of the universe.

> --
Stathis Papaioannou



Re: Is AI really a threat to mankind?

2017-12-07 Thread Stathis Papaioannou
On Fri, 8 Dec 2017 at 10:19 am, <agrayson2...@gmail.com> wrote:

>
>
> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
>>
>> When I took a series of classes in Artificial Intelliegence at UCLA in
>> the '70s the professor introducing the material of the first class
>> explained that, "Intelligence is whatever a computer can't do... yet."
>>
>> Brent
>>
>
> The fear of AI is that computers could eventually exhibit a characteristic
> reminiscent of "will" and exhibit it maliciously against humans. I suppose
> for you that's not a problem since, IIRC, you deny the existence of will. AG
>

What’s the difference between acting maliciously against humans with and
without will?

> --
Stathis Papaioannou



Re: Is AI really a threat to mankind?

2017-12-07 Thread Stathis Papaioannou
On Thu, 7 Dec 2017 at 8:32 pm, Alberto G. Corona <agocor...@gmail.com>
wrote:

> Both: it is very, very hard to simulate and impossible to achieve.
> The first computer scientists thought that making mathematical computations
> was a sign of intelligence, but failed miserably with the next goal, and so
> on: program something that humans do, and if your program does it, then it
> becomes non-intelligent.
>

So by definition no matter how close to intelligent behaviour a machine
comes, it won’t be intelligent?

> --
Stathis Papaioannou



Re: Is AI really a threat to mankind?

2017-11-28 Thread Stathis Papaioannou
On Wed, 29 Nov 2017 at 11:52 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 11/28/2017 6:33 AM, Jason Resch wrote:
> > If you look at everything that motivates all human endeavors, it is
> > ultimately, all about realizing and maximizing good experiences while
> > avoiding and minimizing bad experiences.
>
> Mostly, but not entirely.  People (especially parents) sacrifice for
> others.


Parents sacrifice for their children because it gives them pleasure to see
them doing well and distresses them to see them suffering.

> --
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-27 Thread Stathis Papaioannou
On Tue, 28 Nov 2017 at 7:41 am, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 3:28:20 PM UTC, stathisp wrote:
>
>>
>> On Mon, 27 Nov 2017 at 6:23 pm, <agrays...@gmail.com> wrote:
>>
>>>
>>>
>>> On Monday, November 27, 2017 at 7:12:09 AM UTC, stathisp wrote:
>>>
>>>>
>>>>
>>>> On 27 November 2017 at 17:54, <agrays...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Monday, November 27, 2017 at 6:45:43 AM UTC, stathisp wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 27 November 2017 at 17:36, <agrays...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Monday, November 27, 2017 at 6:30:34 AM UTC, agrays...@gmail.com
>>>>>>> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Monday, November 27, 2017 at 6:21:30 AM UTC, stathisp wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 27 November 2017 at 16:54, <agrays...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Monday, November 27, 2017 at 5:48:58 AM UTC,
>>>>>>>>>> agrays...@gmail.com wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>>>>>>>>>>> introducing Many Worlds creates hugely more complications than 
>>>>>>>>>>>>>>> it purports
>>>>>>>>>>>>>>> to do away with; multiple, indeed infinite observers with the 
>>>>>>>>>>>>>>> same memories
>>>>>>>>>>>>>>> and life histories for example. Give me a break. AG
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What about a single, infinite world in which everything is
>>>>>>>>>>>>>> duplicated to an arbitrary level of detail, including the Earth 
>>>>>>>>>>>>>> and its
>>>>>>>>>>>>>> inhabitants, an infinite number of times? Is the bizarreness of 
>>>>>>>>>>>>>> this idea
>>>>>>>>>>>>>> an argument for a finite world, ending perhaps at the limit of 
>>>>>>>>>>>>>> what we can
>>>>>>>>>>>>>> see?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --stathis Papaioannou
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> FWIW, in my view we live in a huge, but finite, expanding
>>>>>>>>>>>>> hypersphere, meaning in any direction, if you go far enough, you 
>>>>>>>>>>>>> return to your
>>>>>>>>>>>>> starting position. Many cosmologists say it

Re: Consistency of Postulates of QM

2017-11-27 Thread Stathis Papaioannou
On Mon, 27 Nov 2017 at 6:23 pm, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 7:12:09 AM UTC, stathisp wrote:
>
>>
>>
>> On 27 November 2017 at 17:54, <agrays...@gmail.com> wrote:
>>
>>>
>>>
>>> On Monday, November 27, 2017 at 6:45:43 AM UTC, stathisp wrote:
>>>
>>>>
>>>>
>>>> On 27 November 2017 at 17:36, <agrays...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Monday, November 27, 2017 at 6:30:34 AM UTC, agrays...@gmail.com
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Monday, November 27, 2017 at 6:21:30 AM UTC, stathisp wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 27 November 2017 at 16:54, <agrays...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Monday, November 27, 2017 at 5:48:58 AM UTC, agrays...@gmail.com
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>>>>>>>>> introducing Many Worlds creates hugely more complications than it 
>>>>>>>>>>>>> purports
>>>>>>>>>>>>> to do away with; multiple, indeed infinite observers with the 
>>>>>>>>>>>>> same memories
>>>>>>>>>>>>> and life histories for example. Give me a break. AG
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> What about a single, infinite world in which everything is
>>>>>>>>>>>> duplicated to an arbitrary level of detail, including the Earth 
>>>>>>>>>>>> and its
>>>>>>>>>>>> inhabitants, an infinite number of times? Is the bizarreness of 
>>>>>>>>>>>> this idea
>>>>>>>>>>>> an argument for a finite world, ending perhaps at the limit of 
>>>>>>>>>>>> what we can
>>>>>>>>>>>> see?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --stathis Papaioannou
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> FWIW, in my view we live in a huge, but finite, expanding
>>>>>>>>>>> hypersphere, meaning in any direction, if you go far enough, you return 
>>>>>>>>>>> to your
>>>>>>>>>>> starting position. Many cosmologists say it's flat and thus 
>>>>>>>>>>> infinite; not
>>>>>>>>>>> asymptotically flat and therefore spatially finite. Measurements 
>>>>>>>>>>> cannot
>>>>>>>>>>> distinguish the two possibilities. I don't buy the former since 
>>>>>>>>>>> they also
>>>>>>>>>>> concede it is finite in age. A Multiverse might exist, and that 
>>>>>>>>>>> would
>>>>>>>>>>> likely be infinite in space and time, with erupting BB universes, 
>>>>>>>>>>> some like

Re: Consistency of Postulates of QM

2017-11-27 Thread Stathis Papaioannou
On Mon, 27 Nov 2017 at 6:29 pm, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 7:23:48 AM UTC, agrays...@gmail.com wrote:
>>
>>
>>
>> On Monday, November 27, 2017 at 7:12:09 AM UTC, stathisp wrote:
>>>
>>>
>>>
>>> On 27 November 2017 at 17:54, <agrays...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Monday, November 27, 2017 at 6:45:43 AM UTC, stathisp wrote:
>>>>
>>>>>
>>>>>
>>>>> On 27 November 2017 at 17:36, <agrays...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Monday, November 27, 2017 at 6:30:34 AM UTC, agrays...@gmail.com
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Monday, November 27, 2017 at 6:21:30 AM UTC, stathisp wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 27 November 2017 at 16:54, <agrays...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Monday, November 27, 2017 at 5:48:58 AM UTC,
>>>>>>>>> agrays...@gmail.com wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>>>>>>>>>> introducing Many Worlds creates hugely more complications than 
>>>>>>>>>>>>>> it purports
>>>>>>>>>>>>>> to do away with; multiple, indeed infinite observers with the 
>>>>>>>>>>>>>> same memories
>>>>>>>>>>>>>> and life histories for example. Give me a break. AG
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> What about a single, infinite world in which everything is
>>>>>>>>>>>>> duplicated to an arbitrary level of detail, including the Earth 
>>>>>>>>>>>>> and its
>>>>>>>>>>>>> inhabitants, an infinite number of times? Is the bizarreness of 
>>>>>>>>>>>>> this idea
>>>>>>>>>>>>> an argument for a finite world, ending perhaps at the limit of 
>>>>>>>>>>>>> what we can
>>>>>>>>>>>>> see?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --stathis Papaioannou
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> FWIW, in my view we live in a huge, but finite, expanding
>>>>>>>>>>>> hypersphere, meaning in any direction, if you go far enough, you 
>>>>>>>>>>>> return to your
>>>>>>>>>>>> starting position. Many cosmologists say it's flat and thus 
>>>>>>>>>>>> infinite; not
>>>>>>>>>>>> asymptotically flat and therefore spatially finite. Measurements 
>>>>>>>>>>>> cannot
>>>>>>>>>>>> distinguish the two possibilities. I don't buy the former since 
>>>>>>>>>>>> they

Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 17:54, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 6:45:43 AM UTC, stathisp wrote:
>
>>
>>
>> On 27 November 2017 at 17:36, <agrays...@gmail.com> wrote:
>>
>>>
>>>
>>> On Monday, November 27, 2017 at 6:30:34 AM UTC, agrays...@gmail.com
>>> wrote:
>>>>
>>>>
>>>>
>>>> On Monday, November 27, 2017 at 6:21:30 AM UTC, stathisp wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 27 November 2017 at 16:54, <agrays...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Monday, November 27, 2017 at 5:48:58 AM UTC, agrays...@gmail.com
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>>>>>>> introducing Many Worlds creates hugely more complications than it 
>>>>>>>>>>> purports
>>>>>>>>>>> to do away with; multiple, indeed infinite observers with the same 
>>>>>>>>>>> memories
>>>>>>>>>>> and life histories for example. Give me a break. AG
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> What about a single, infinite world in which everything is
>>>>>>>>>> duplicated to an arbitrary level of detail, including the Earth and 
>>>>>>>>>> its
>>>>>>>>>> inhabitants, an infinite number of times? Is the bizarreness of this 
>>>>>>>>>> idea
>>>>>>>>>> an argument for a finite world, ending perhaps at the limit of what 
>>>>>>>>>> we can
>>>>>>>>>> see?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --stathis Papaioannou
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> FWIW, in my view we live in a huge, but finite, expanding
>>>>>>>>> hypersphere, meaning in any direction, if you go far enough, you return 
>>>>>>>>> to your
>>>>>>>>> starting position. Many cosmologists say it's flat and thus infinite; 
>>>>>>>>> not
>>>>>>>>> asymptotically flat and therefore spatially finite. Measurements 
>>>>>>>>> cannot
>>>>>>>>> distinguish the two possibilities. I don't buy the former since they 
>>>>>>>>> also
>>>>>>>>> concede it is finite in age. A Multiverse might exist, and that would
>>>>>>>>> likely be infinite in space and time, with erupting BB universes, 
>>>>>>>>> some like
>>>>>>>>> ours, most definitely not. Like I said, FWIW. AG
>>>>>>>>>
>>>>>>>>
>>>>>>>> OK, but is the *strangeness* of a multiverse with multiple copies
>>>>>>>> of everything *in itself* an argument against it?
>>>>>>>>
>>>>>>>> --
>>>>>>>> Stathis Papaioannou
>>>>>>>>
>>>>>>>
>>>>>>> FWIW, I don't buy the claim that an infinite multiverse implies
>>>>>>> infinite copies of everything. Has anyone proved that? AG
>>>>>>>
>>>>>>
>>>>>> If there are uncountable possibilities for different universes, why

Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 17:36, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 6:30:34 AM UTC, agrays...@gmail.com wrote:
>>
>>
>>
>> On Monday, November 27, 2017 at 6:21:30 AM UTC, stathisp wrote:
>>>
>>>
>>>
>>> On 27 November 2017 at 16:54, <agrays...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Monday, November 27, 2017 at 5:48:58 AM UTC, agrays...@gmail.com
>>>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>>>>> introducing Many Worlds creates hugely more complications than it 
>>>>>>>>> purports
>>>>>>>>> to do away with; multiple, indeed infinite observers with the same 
>>>>>>>>> memories
>>>>>>>>> and life histories for example. Give me a break. AG
>>>>>>>>>
>>>>>>>>
>>>>>>>> What about a single, infinite world in which everything is
>>>>>>>> duplicated to an arbitrary level of detail, including the Earth and its
>>>>>>>> inhabitants, an infinite number of times? Is the bizarreness of this 
>>>>>>>> idea
>>>>>>>> an argument for a finite world, ending perhaps at the limit of what we 
>>>>>>>> can
>>>>>>>> see?
>>>>>>>>
>>>>>>>>
>>>>>>>> --stathis Papaioannou
>>>>>>>>
>>>>>>>
>>>>>>> FWIW, in my view we live in a huge, but finite, expanding hypersphere,
>>>>>>> meaning in any direction, if you go far enough, you return to your starting
>>>>>>> position. Many cosmologists say it's flat and thus infinite; not
>>>>>>> asymptotically flat and therefore spatially finite. Measurements cannot
>>>>>>> distinguish the two possibilities. I don't buy the former since they 
>>>>>>> also
>>>>>>> concede it is finite in age. A Multiverse might exist, and that would
>>>>>>> likely be infinite in space and time, with erupting BB universes, some 
>>>>>>> like
>>>>>>> ours, most definitely not. Like I said, FWIW. AG
>>>>>>>
>>>>>>
>>>>>> OK, but is the *strangeness* of a multiverse with multiple copies of
>>>>>> everything *in itself* an argument against it?
>>>>>>
>>>>>> --
>>>>>> Stathis Papaioannou
>>>>>>
>>>>>
>>>>> FWIW, I don't buy the claim that an infinite multiverse implies
>>>>> infinite copies of everything. Has anyone proved that? AG
>>>>>
>>>>
>>>> If there are uncountable possibilities for different universes, why
>>>> should there be any repetitions? I don't think infinite repetitions has
>>>> been proven, and I don't believe it. AG
>>>>
>>>>
>>
>>> If a finite subset of the universe has only a finite number of
>>> configurations and the Cosmological Principle is correct, then every finite
>>> subset should repeat. It might not; for example, from a radius of 10^100 m
>>> out it might just be vacuum forever, or Donald Trump dolls.
>>> --
>>> Stathis Papaioannou
>>>
>>
>> Our universe might be finite, but the parameter variations of possible
>> universes might be uncountable. If so, there's no reason to think the
>> parameters characterizing our universe will come again in a random process.
>> AG
>>
>
> Think of it this way; if our universe is represented by some number on the
> real line, and you throw darts randomly at something isomorphic to the real
> line, what's the chance of the dart landing on the number representing our
> universe? ANSWER: ZERO. AG
>

But the structures we may be interested in are finite. I feel that I am the
same person from moment to moment despite multiple changes in my body that
are grossly observable, so changes in the millionth decimal place of some
parameter won't bother me. The dart has to land on a blob, not on a real
number.
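To make the measure-zero point concrete, here is a small sketch (illustrative only; the uniform draw on [0, 1) and the blob radius are arbitrary stand-ins for "universe parameters"):

```python
import random

random.seed(0)

target = 0.5       # "our universe" as a single exact real number
radius = 0.01      # a "blob" of grossly similar parameter values
trials = 100_000

exact_hits = 0
blob_hits = 0
for _ in range(trials):
    dart = random.random()            # uniform draw on [0, 1)
    if dart == target:                # a measure-zero event
        exact_hits += 1
    if abs(dart - target) < radius:   # an interval of measure 2 * radius
        blob_hits += 1

# Hitting the exact point essentially never happens; hitting the blob
# happens at a rate close to the blob's measure (about 2% here).
print(exact_hits, blob_hits / trials)
```

The dart never lands on the pre-named real number, but it lands in any interval of positive width at a rate proportional to the interval's size, which is the sense in which only "blobs" matter.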


-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 16:54, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 5:48:58 AM UTC, agrays...@gmail.com wrote:
>>
>>
>>
>> On Monday, November 27, 2017 at 5:44:25 AM UTC, stathisp wrote:
>>>
>>>
>>>
>>> On 27 November 2017 at 16:25, <agrays...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>>>>
>>>>> You keep ignoring the obvious 800 pound gorilla in the room;
>>>>>> introducing Many Worlds creates hugely more complications than it 
>>>>>> purports
>>>>>> to do away with; multiple, indeed infinite observers with the same 
>>>>>> memories
>>>>>> and life histories for example. Give me a break. AG
>>>>>>
>>>>>
>>>>> What about a single, infinite world in which everything is duplicated
>>>>> to an arbitrary level of detail, including the Earth and its inhabitants,
>>>>> an infinite number of times? Is the bizarreness of this idea an argument
>>>>> for a finite world, ending perhaps at the limit of what we can see?
>>>>>
>>>>>
>>>>> --stathis Papaioannou
>>>>>
>>>>
>>>> FWIW, in my view we live in a huge, but finite, expanding hypersphere,
>>>> meaning in any direction, if you go far enough, you return to your starting
>>>> position. Many cosmologists say it's flat and thus infinite; not
>>>> asymptotically flat and therefore spatially finite. Measurements cannot
>>>> distinguish the two possibilities. I don't buy the former since they also
>>>> concede it is finite in age. A Multiverse might exist, and that would
>>>> likely be infinite in space and time, with erupting BB universes, some like
>>>> ours, most definitely not. Like I said, FWIW. AG
>>>>
>>>
>>> OK, but is the *strangeness* of a multiverse with multiple copies of
>>> everything *in itself* an argument against it?
>>>
>>> --
>>> Stathis Papaioannou
>>>
>>
>> FWIW, I don't buy the claim that an infinite multiverse implies infinite
>> copies of everything. Has anyone proved that? AG
>>
>
> If there are uncountable possibilities for different universes, why should
> there be any repetitions? I don't think infinite repetitions has been
> proven, and I don't believe it. AG
>
If a finite subset of the universe has only a finite number of
configurations and the Cosmological Principle is correct, then every finite
subset should repeat. It might not; for example, from a radius of 10^100 m
out it might just be vacuum forever, or Donald Trump dolls.
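The pigeonhole step behind this argument can be sketched numerically (illustrative only; N = 1000 configurations and the region counts are arbitrary choices):

```python
import random

random.seed(1)

N = 1000  # finitely many distinguishable configurations per region

# Pigeonhole: any N + 1 regions drawn from N configurations must repeat.
regions = [random.randrange(N) for _ in range(N + 1)]
assert len(set(regions)) < len(regions)  # at least one repeat, guaranteed

# With far more regions than configurations, repeats pile up; in the
# infinite-regions limit, some configuration recurs infinitely often.
many = [random.randrange(N) for _ in range(100 * N)]
counts = {}
for c in many:
    counts[c] = counts.get(c, 0) + 1
print(max(counts.values()))  # some configuration appears many times
```

The assertion is the pigeonhole principle itself; the second part shows that once regions vastly outnumber configurations, repetition is not just possible but typical.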


-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 17:04, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 27/11/2017 4:39 pm, Stathis Papaioannou wrote:
>
> On 27 November 2017 at 16:19, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 27/11/2017 4:06 pm, Stathis Papaioannou wrote:
>>
>> On 26 November 2017 at 13:33, <agrayson2...@gmail.com> wrote:
>>
>> You keep ignoring the obvious 800 pound gorilla in the room; introducing
>>> Many Worlds creates hugely more complications than it purports to do away
>>> with; multiple, indeed infinite observers with the same memories and life
>>> histories for example. Give me a break. AG
>>>
>>
>> What about a single, infinite world in which everything is duplicated to
>> an arbitrary level of detail, including the Earth and its inhabitants, an
>> infinite number of times? Is the bizarreness of this idea an argument for a
>> finite world, ending perhaps at the limit of what we can see?
>>
>>
>> That conclusion for the Level I multiverse depends on a particular
>> assumption about the initial probability distribution. Can you justify that
>> assumption?
>>
>
> The assumption is the Cosmological Principle, that the part of the
> universe that we can see is typical of the rest of the universe. Maybe it's
> false; but my question is, is the strangeness of a Level I multiverse an
> *argument* for its falseness?
>
>
> Just because you can't prove that a hypothesis is false does not imply
> that it is true. Can you prove that the Cosmological Principle is
> infinitely extendible? I suggest that it is most probably false, since
> there is no reason for the initial conditions to be sufficiently uniform
> for it to be extrapolated indefinitely.
>

Maybe, but I'm still wondering whether the *strangeness* of finite
structures such as humans being duplicated is an argument against it, since
it does seem to be most people's first objection to MWI.


-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 16:25, <agrayson2...@gmail.com> wrote:

>
>
> On Monday, November 27, 2017 at 5:07:03 AM UTC, stathisp wrote:
>>
>>
>>
>> On 26 November 2017 at 13:33, <agrays...@gmail.com> wrote:
>>
>> You keep ignoring the obvious 800 pound gorilla in the room; introducing
>>> Many Worlds creates hugely more complications than it purports to do away
>>> with; multiple, indeed infinite observers with the same memories and life
>>> histories for example. Give me a break. AG
>>>
>>
>> What about a single, infinite world in which everything is duplicated to
>> an arbitrary level of detail, including the Earth and its inhabitants, an
>> infinite number of times? Is the bizarreness of this idea an argument for a
>> finite world, ending perhaps at the limit of what we can see?
>>
>>
>> --stathis Papaioannou
>>
>
> FWIW, in my view we live in a huge, but finite, expanding hypersphere,
> meaning in any direction, if you go far enough, you return to your starting
> position. Many cosmologists say it's flat and thus infinite; not
> asymptotically flat and therefore spatially finite. Measurements cannot
> distinguish the two possibilities. I don't buy the former since they also
> concede it is finite in age. A Multiverse might exist, and that would
> likely be infinite in space and time, with erupting BB universes, some like
> ours, most definitely not. Like I said, FWIW. AG
>

OK, but is the *strangeness* of a multiverse with multiple copies of
everything *in itself* an argument against it?

-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 27 November 2017 at 16:19, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 27/11/2017 4:06 pm, Stathis Papaioannou wrote:
>
> On 26 November 2017 at 13:33, < <agrayson2...@gmail.com>
> agrayson2...@gmail.com> wrote:
>
> You keep ignoring the obvious 800 pound gorilla in the room; introducing
>> Many Worlds creates hugely more complications than it purports to do away
>> with; multiple, indeed infinite observers with the same memories and life
>> histories for example. Give me a break. AG
>>
>
> What about a single, infinite world in which everything is duplicated to
> an arbitrary level of detail, including the Earth and its inhabitants, an
> infinite number of times? Is the bizarreness of this idea an argument for a
> finite world, ending perhaps at the limit of what we can see?
>
>
> That conclusion for the Level I multiverse depends on a particular
> assumption about the initial probability distribution. Can you justify that
> assumption?
>

The assumption is the Cosmological Principle, that the part of the universe
that we can see is typical of the rest of the universe. Maybe it's false;
but my question is, is the strangeness of a Level I multiverse an
*argument* for its falseness?


-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 26 November 2017 at 13:33, <agrayson2...@gmail.com> wrote:

You keep ignoring the obvious 800 pound gorilla in the room; introducing
> Many Worlds creates hugely more complications than it purports to do away
> with; multiple, indeed infinite observers with the same memories and life
> histories for example. Give me a break. AG
>

What about a single, infinite world in which everything is duplicated to an
arbitrary level of detail, including the Earth and its inhabitants, an
infinite number of times? Is the bizarreness of this idea an argument for a
finite world, ending perhaps at the limit of what we can see?


-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-26 Thread Stathis Papaioannou
On 24 November 2017 at 10:53, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

Hi Lawrence, and welcome to the 'everything' list. I have come here to
> avoid the endless politics on the 'avoid' list.
>

What is the "avoid" list?

-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-21 Thread Stathis Papaioannou
On Tue, 21 Nov 2017 at 12:27 pm, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 21/11/2017 11:37 am, Stathis Papaioannou wrote:
>
> On 21 November 2017 at 08:53, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 20/11/2017 11:42 pm, Stathis Papaioannou wrote:
>>
>> On Sun, 19 Nov 2017 at 8:35 am, Bruce Kellett <
>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>
>>> On 19/11/2017 12:15 am, Stathis Papaioannou wrote:
>>>
>>> On Sat, 18 Nov 2017 at 9:11 am, Bruce Kellett <
>>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>>
>>>>
>>>> And exactly what is it that you claim has not been proved in MW theory?
>>>> Bell's theorem applies there too: it has never been proved that it does
>>>> not. Bell was no fool: he did not like MWI, but if that provided an escape
>>>> from his theorem, he would have addressed the issue. The fact that he did
>>>> not suggests strongly that you do not have a case.
>>>>
>>>
>>> Bell’s theorem applies in the sense that the experimental results would
>>> be the same in MWI, but the FTL weirdness is eliminated. This is because in
>>> MWI the experimenter can’t prepare a random state,
>>>
>>>
>>> What do you mean by this? Are you claiming that there are no free
>>> variables in MWI? Some form of superdeterminism?
>>>
>>
>> Yes.
>>
>>
>> As far as I know, the only serious advocate of superdeterminism as an
>> account of QM is Gerard 't Hooft. Tim Maudlin analysed 't Hooft's arguments
>> in a long exchange with him on Facebook:
>>
>> https://www.facebook.com/tim.maudlin/posts/10155670157528398
>>
>> Maudlin's argument was basically that the type of conspiracies that
>> would be required in the general case would be such that, if they were
>> generalized, they would render science and experimental confirmation of
>> theories meaningless.
>>
>> I think Maudlin is quite right here. Apart from the implication that
>> superdeterminism says that all our scientific theories are necessarily
>> incomplete, superdeterminism is not really an explanation of anything,
>> since anything you observe can be explained away in this way.
>>
>
> Maudlin also says this about EPR, Bell and MWI:
>
> --quote--
> Finally, there is one big idea. Bell showed that measurements made far
> apart cannot regularly display correlations that violate his inequality if
> the world is local. But this requires that the measurements have results in
> order that there be the requisite correlations. What if no “measurement”
> ever has a unique result at all; what if all the “possible outcomes” occur?
> What would it even mean to say that in such a situation there is some
> correlation among the “outcomes of these measurements”? This is, of course,
> the idea of the Many Worlds interpretation. It does not refute Bell’s
> analysis, but rather moots it: in this picture, phenomena in the physical
> world do not, after all, display correlations between distant experiments
> that violate Bell’s inequality, somehow it just seems that they do. Indeed,
> the world does not actually conform to the predictions of quantum theory at
> all (in particular, the prediction that these sorts of experiments have
> single unique outcomes, which correspond to eigenvalues), it just seems
> that way. So Bell’s result cannot get a grip on this theory.
> --endquote--
>
> https://arxiv.org/ftp/arxiv/papers/1408/1408.1826.pdf
>
>
> It is a pity that you did not complete the quotation Immediately
> following the passage you quote above, Maulin says:
>
> "That does not prove that Many Worlds is local: it just shows that Bell's
> result does not prove that it isn't local. In order to even address the
> question of the locality of Many Worlds a tremendous amount of interpretive
> work has to be done. This is not the place to attempt such a task."
>

Well yes, it could be that there are other reasons why MWI is not local,
but Maudlin agrees that EPR is not one of them.

> The misrepresentation of Maudlin's position appears to be quite common in
> the Many Worlds community. I don't think Maudlin is completely correct in
> his idea that Bell's result cannot get a grip on the theory -- it can if one
> understands many worlds in terms of superpositions of possible outcomes.
> But that is by the way. What I have presented is a concrete counterexample
> to the contention that Many Worlds is local. Maudlin does not consider this
> counterexample, so that does rather render his comments on 

Re: Consistency of Postulates of QM

2017-11-20 Thread Stathis Papaioannou
On 21 November 2017 at 08:53, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 20/11/2017 11:42 pm, Stathis Papaioannou wrote:
>
> On Sun, 19 Nov 2017 at 8:35 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 19/11/2017 12:15 am, Stathis Papaioannou wrote:
>>
>> On Sat, 18 Nov 2017 at 9:11 am, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>>>
>>> And exactly what is it that you claim has not been proved in MW theory?
>>> Bell's theorem applies there too: it has never been proved that it does
>>> not. Bell was no fool: he did not like MWI, but if that provided an escape
>>> from his theorem, he would have addressed the issue. The fact that he did
>>> not suggests strongly that you do not have a case.
>>>
>>
>> Bell’s theorem applies in the sense that the experimental results would be
>> the same in MWI, but the FTL weirdness is eliminated. This is because in
>> MWI the experimenter can’t prepare a random state,
>>
>>
>> What do you mean by this? Are you claiming that there are no free
>> variables in MWI? Some form of superdeterminism?
>>
>
> Yes.
>
>
> As far as I know, the only serious advocate of superdeterminism as an
> account of QM is Gerard 't Hooft. Tim Maudlin analysed 't Hooft's arguments
> in a long exchange with him on Facebook:
>
> https://www.facebook.com/tim.maudlin/posts/10155670157528398
>
> Maudlin's argument was basically that the type of conspiracies that would
> be required in the general case would be such that, if they were
> generalized, they would render science and experimental confirmation of
> theories meaningless.
>
> I think Maudlin is quite right here. Apart from the implication that
> superdeterminism says that all our scientific theories are necessarily
> incomplete, superdeterminism is not really an explanation of anything,
> since anything you observe can be explained away in this way.
>

Maudlin also says this about EPR, Bell and MWI:

--quote--
Finally, there is one big idea. Bell showed that measurements made far
apart cannot regularly display correlations that violate his inequality if
the world is local. But this requires that the measurements have results in
order that there be the requisite correlations. What if no “measurement”
ever has a unique result at all; what if all the “possible outcomes” occur?
What would it even mean to say that in such a situation there is some
correlation among the “outcomes of these measurements”? This is, of course,
the idea of the Many Worlds interpretation. It does not refute Bell’s
analysis, but rather moots it: in this picture, phenomena in the physical
world do not, after all, display correlations between distant experiments
that violate Bell’s inequality, somehow it just seems that they do. Indeed,
the world does not actually conform to the predictions of quantum theory at
all (in particular, the prediction that these sorts of experiments have
single unique outcomes, which correspond to eigenvalues), it just seems
that way. So Bell’s result cannot get a grip on this theory.
--endquote--

https://arxiv.org/ftp/arxiv/papers/1408/1408.1826.pdf

> But for Bell-type experiments in MWI, or elsewhere, one does not have to
>> prepare a random state -- one just prepares a singlet state consisting of
>> two entangled particles. Nothing random about it.
>>
>
> Then one makes a measurement, the outcome of which is uncertain until it
> is done, but - surprisingly - the distal particle seems to “know” about it
> instantaneously. In the MWI there is no uncertainty about the measurement
> in the multiverse as a whole, although there is uncertainty from the point
> of view of individual observers, because they do not know in which branch
> they will end up.
>
> Bell actually thought that Bohm's deterministic, though non-local, theory
>> was a better bet. But you have not addressed my counterexample to your
>> contention that MWI eliminates non-locality. The time-like measurement of
>> the two entangled particles clearly requires non-locality in order to
>> conserve angular momentum.
>>
>
> There is no question of the distal entangled particle instantaneously
> reacting to a measurement of the proximal particle to conserve angular
> momentum, because the outcome of the measurement was already fixed.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.

Re: Consistency of Postulates of QM

2017-11-20 Thread Stathis Papaioannou
On Sun, 19 Nov 2017 at 8:35 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 19/11/2017 12:15 am, Stathis Papaioannou wrote:
>
> On Sat, 18 Nov 2017 at 9:11 am, Bruce Kellett <
> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>
>>
>> And exactly what is it that you claim has not been proved in MW theory?
>> Bell's theorem applies there too: it has never been proved that it does
>> not. Bell was no fool: he did not like MWI, but if that provided an escape
>> from his theorem, he would have addressed the issue. The fact that he did
>> not suggests strongly that you do not have a case.
>>
>
> Bell’s theorem applies in the sense that the experimental results would be
> the same in MWI, but the FTL weirdness is eliminated. This is because in
> MWI the experimenter can’t prepare a random state,
>
>
> What do you mean by this? Are you claiming that there are no free
> variables in MWI? Some form of superdeterminism?
>

Yes.

But for Bell-type experiments in MWI, or elsewhere, one does not have to
> prepare a random state -- one just prepares a singlet state consisting of
> two entangled particles. Nothing random about it.
>

Then one makes a measurement, the outcome of which is uncertain until it is
done, but - surprisingly - the distal particle seems to “know” about it
instantaneously. In the MWI there is no uncertainty about the measurement
in the multiverse as a whole, although there is uncertainty from the point
of view of individual observers, because they do not know in which branch
they will end up.

Bell actually thought that Bohm's deterministic, though non-local, theory
> was a better bet. But you have not addressed my counterexample to your
> contention that MWI eliminates non-locality. The time-like measurement of
> the two entangled particles clearly requires non-locality in order to
> conserve angular momentum.
>

There is no question of the distal entangled particle instantaneously
reacting to a measurement of the proximal particle to conserve angular
momentum, because the outcome of the measurement was already fixed.
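As an aside, the correlations under dispute here can be computed directly. The following is a minimal sketch, assuming only standard textbook quantities (the angles and the CHSH combination below are the usual illustrative choices, not taken from this thread): the quantum singlet correlation E(a, b) = -cos(a - b), evaluated at the standard CHSH settings, reaches 2*sqrt(2), above the bound of 2 that any local hidden-variable account must obey.

```python
import math

def E(a, b):
    # Quantum prediction for the average product of the +/-1 outcomes
    # when the two halves of a singlet are measured along angles a and b.
    return -math.cos(a - b)

# Standard CHSH measurement settings (chosen for illustration).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any local hidden-variable theory requires |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # about 2.828, i.e. 2*sqrt(2) > 2
```

This excess over 2 is exactly what Bell-type experiments test, and what the various interpretations then have to account for.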

>

-- 
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-18 Thread Stathis Papaioannou
 the experimenter can’t prepare a random state, since there is no true
randomness, and therefore there is no question of the entangled particle
magically knowing what has happened at the other end. Bell thought,
apparently, that MWI weirdness was more weird than FTL weirdness and
rejected MWI even though it solved this problem.

> --
Stathis Papaioannou



Re: Consistency of Postulates of QM

2017-11-14 Thread Stathis Papaioannou
On Mon, 13 Nov 2017 at 8:54 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

>
> I don't think you have fully understood the scenario I have outlined.
> There is no collapse, many worlds is assumed throughout. Alice splits
> according to her measurement result. Both copies of Alice go to meet
> Bob, carrying the other particle of the original pair. Since they both
> have now met Bob, the split that Alice occasioned has now spread to
> entangle Bob as well as the rest of her environment. So there are now
> two worlds, each of which has a copy of Bob, and an Alice, who has a
> particular result. Locality says that Bob's particle is unchanged from
> production, so when he measures its spin, he splits into two copies,
> according to spin up or spin down. Since Alice is standing beside him,
> she also becomes entangled with his result. But Alice already has a
> definite result in each branch, so we now have four branches: with
> results 'up-up', 'up-down', 'down-up', and 'down-down'. However, only
> the 'up-down' and 'down-up' branches conserve angular momentum. How do
> you rule out the other branches?


When you put something in the cupboard and come back later to get it, why,
under MWI, is it still there?

> --
Stathis Papaioannou



Re: An AI program that teaches itself

2017-10-28 Thread Stathis Papaioannou
On Sat, 28 Oct 2017 at 3:30 am, John Clark <johnkcl...@gmail.com> wrote:

> On Thu, Oct 26, 2017 at 4:33 PM, Brent Meeker <meeke...@verizon.net>
> wrote:
>
> ​> ​
>> There are a lot of other painkillers
>>
>
> But
> ​ ​
> marijuana
> ​ ​
> is the only painkiller I know of that has a 0% chance of death by
> overdose, and yet it is illegal to use in most states, even for
> ​cancer​
>  patients in agony.  But Aspirin is legal
> ​and​
>  that can
> ​kill​
>  you,
> ​and ​
> Oxycodone
> ​ is legal too and
> if
> ​the pain is too strong for ​
> Aspirin the law encourages you to switch to
> ​that
>
> ​or some ​
> other
> ​ ​
> opioid which is projected to kill 500,000 Americans in the next decade
> ​. And if you defy the law and choose marijuana instead the law will do
> its best to see to it that your last days are not only spent in agony they
> will also be spent in jail.
>
> This is the Trump administration's idea of getting government off out
> backs.
>

It’s important not to demonise opioids. In acute, severe pain they are
often the only thing that works, and denying them to a suffering patient is
inhumane. In chronic pain, their use is more controversial. Perhaps not
widely known is that in a way they are very safe drugs in that they do not
cause end organ damage, unlike, say, alcohol or tobacco. Elephant-killing
doses of fentanyl are used in cardiac surgery, and as long as respiration
is supported, the patient wakes up fine. The problem is that some people
(not all) enjoy the euphoric effect so much that they misuse them, leading
to tolerance, dose escalation and risk of overdose.

> --
Stathis Papaioannou



Re: When you split the brain, do you split the person?

2017-10-03 Thread Stathis Papaioannou
On Tue, 3 Oct 2017 at 8:11 am, Telmo Menezes <te...@telmomenezes.com> wrote:

> I think this is quite interesting, although the article is a bit
> superficial.
>
> https://aeon.co/ideas/when-you-split-the-brain-do-you-split-the-person
>
> If the conclusions are valid, I would say they put emergentism in
> trouble...


While the research the article refers to is interesting, I don’t see why it
should have any bearing on the question of consciousness. All the same
questions (about how sense data could be integrated etc.) could be raised
if the split brain subjects were assumed to be philosophical zombies.

> --
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-10-01 Thread Stathis Papaioannou
On Sat, 30 Sep 2017 at 3:48 pm, John Clark <johnkcl...@gmail.com> wrote:

I have a thought experiment of my own and this is the protocol:
>
> 1) I have *TWO* coins, a regular coin and a two headed coin.
> 2) I flip both coins.
> 3) Predict if *the one and only coin* will land heads or tails.
>
> You can't predict it because of coin indeterminacy. Is it too early to
> start writing my Nobel Prize acceptance speech?
>

That question can’t be answered because if there are two coins there can’t
also be one and only one coin. Similarly, if John Clark is duplicated to
two cities then it doesn’t make sense to ask which one and only one city
will end up with a John Clark in it. But this is NOT the same as asking
which one and only one city will John Clark see, from his own point of
view. You have been through this before countless times: if some version of
the multiverse is true, you know that you only end up in one city from
your own point of view, despite how many copies of you are out there.
-- 
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-30 Thread Stathis Papaioannou
On Sat, 30 Sep 2017 at 11:07 am, John Clark <johnkcl...@gmail.com> wrote:

> On Fri, Sep 29, 2017 at 8:21 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> There could be an infinite number of copies but each one of them will
>> have THE first person perspective.
>>
>
> ​
> True. And for that very reason asking "What one and only one city will *I*
> see tomorrow from *THE*
> *​* ​
> first person perspective
> ​ ​
> after *I* have been duplicated a infinite number of times ?" would be an
> astronomically silly thing to say because nobody knows who
> ​ ​
> Mr. *THE*  is, not even Mr. *I*.
> ​
>

You know with certainty that there will be multiple versions claiming to
have a first person experience, but there will be one and only one
“I”. You know that in the same way as you know it now: there are lots of
people all around you claiming to have first person experience, all in
different places, and maybe other John Clarks, but there is only one you.

> --
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-29 Thread Stathis Papaioannou
On Fri, 29 Sep 2017 at 7:39 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Thu, Sep 28, 2017 at 11:48 PM, Terren Suydam <terren.suy...@gmail.com>
> wrote:
>
> ​> ​
>> This thought experiment must be analyzed from the first person perspective
>>
>
> ​There is no *THE* ​
> first person perspective
> ​ if ​
> first person perspective
> ​ duplicating machines exist! It's the same blunder over and over and
> over again.
>

There could be an infinite number of copies but each one of them will have
THE first person perspective.

​> ​
>> (and by that I'm referring to the grammatical person
>> <https://en.wikipedia.org/wiki/Grammatical_person>).
>>
>
> ​
> I
> ​would bet money that ​
>  the third grade English teacher
> ​that ​
> wrote that article did not have first person perspective
> ​ ​
> duplicating machines
> ​ ​
> in mind.
>
>
>> ​> ​
>> There is only one stream of consciousness, ever,
>>
>
> ​Then why can't anybody *ever* tell me if that ​
>  one stream of consciousness
> ​ is in Moscow or Washington?​
>
>
>> ​> ​
>> despite the possibility of its bifurcation (no different from many-worlds)
>>
>
> ​In ​
> many-worlds
> ​ the meaning of personal pronouns are always clear, in Bruno's thought
> experiment ​they never are.
>
>
>> ​> ​
>> The only reality a person experiences is the one inside their head.
>> Thanks to this, we never have to get into pronouns
>
>
> Then why is ​
> Terren Suydam
> ​ unable to state ​
> Terren Suydam
> ​'s ideas without the constant use of personal pronouns and the misuse of
> articles like "the" and "a"?
>
>
> ​> ​
>> You seem to have a hang-up that prevents you from adopting that
>> perspective
>>
>
> ​My ​
> hang-up
> ​ is I don't know what ​
> perspective
> ​ you're talking about and neither do you.​
>
>
>> ​> ​
>> you compulsively return to questions about the objective reality,
>>
>
> ​Objective reality is important but subjective reality is even more
> important. There is only one objective reality but there are billions of
> subjective realities, so a question about subjective reality needs to
> specify which one it's referring to, and the way English grammar uses
> personal pronouns just can't do that if people duplicating machines are in
> the mix.
>
>
>> ​> ​
>> talking in terms of multiple consciousnesses,
>>
>
> ​How can I not talk about ​
> multiple consciousnesses
> ​ if you're talking about people duplicating machines?  ​
>
>
>> ​> ​
>> and getting confused about the referents of grammatical conventions.
>>
>
> ​I plead guilty to that charge, I am VERY confused ​
>
> ​about what you're talking about because you're using ​
> grammatical conventions
> ​ just as people have been using for centuries, but for centuries there
> has been no people duplicating machines. A century ago "What one and only
> one city will I see tomorrow?"  was a real question with a real answer
> because the meaning of the personal pronoun "I" was clear,
>  but a century from now "Tomorrow
> I
> ​will see
> ​ one and only one city after I have become two, what is the name of that
> one city I will see?" would just be ridiculous. ​
>
> Is it really your position that the English language will need
> no modification on how it uses personal pronouns even
> after people duplicating machines become common?
>
>
>> ​> ​
>> And you blame that gibberish on the thought experiment itself,
>>
>
> ​If it's not gibberish then what in the world is the above "question"
> asking? Who is the referent to the personal pronoun "I" in the phrase ​
>
> ​"​
> I
> ​will see ​
> tomorrow
> ​"​
> ​ if "I" am to be duplicated today?
>
>
>> ​> ​
>> you've lost the plot.
>>
>
> ​Gibberish has no plot.​
>
>
>
>> ​> ​
>> If you want to continue this, great, but I'm not going to go around in
>> circles
>>
>
> ​You could still participate,  you could just do what Bruno does and
> chant the mantra "you confuse the 3p and the 1p",  that won't take up much
> of your time.​
>
>
> ​John K Clark​
>
>
>
>
>
>
>
>
>
-- 
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-28 Thread Stathis Papaioannou
On Wed, 27 Sep 2017 at 8:02 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Tue, Sep 26, 2017 at 8:46 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>>> ​I do expect to survive the
>>> copying process
>>> ​, even better I expect I'll have a backup, although why my expectations
>>> should be of interest to anyone but me I don't know. ​
>>>
>>
>> Then the question “what future experiences will I have” is not
>> nonsensical.
>>
>
> ​It's not ​
> nonsensical
> ​ in our everyday world ​to ask "What one and only one city will I see
> tomorrow?" because it's clear what "I" will mean tomorrow, but people
> duplicating machines don't yet exist in our everyday world because of
> technological, not philosophical, limitations. In our everyday world the I
> of tomorrow has a unique unambiguous meaning, the only being tomorrow that
> will remember being John Clark today.
>
>
>> ​> ​
>> If it were then I could not have the expectation of surviving,
>>
>
> ​The nonsense question is NOT "Will I survive tomorrow after I have been
> duplicated?", that is a real question with a real answer; and it is yes
> because something  (actually 2 things) tomorrow will remember being John
> Clark today. The nonsense question is "What one and only one city will I
> see tomorrow after I have been duplicated?" ​
>
>

The question is “what city will I see tomorrow”. You know what a city is,
you know what seeing is, you agree that I will survive so “I” has meaning
for you. It seems that you agree the question is meaningful, you just don’t
agree that I will see one city - which means that I will see two cities.
But “I” is singular, and a single person cannot see both cities, as a
matter of empirical fact.

​> ​
>> I could not conceive of having future experiences if “I” loses meaning
>> when I contemplate the post-duplication future.
>>
>
> ​Sure you can, you can conceive of being in Santa Clauses's workshop if
> you want; imagination is not limited by reality.
>

Imagination is limited by logic - I can’t imagine a square circle because
it is meaningless. But I can imagine seeing one or other city with 1/2
probability; it is meaningful, it is what I anticipate will happen, and it
is consistent with the reports of copies who have been through duplication
multiple times.

> --
Stathis Papaioannou



Re: The Finney phone

2017-09-27 Thread Stathis Papaioannou
On Thu, 28 Sep 2017 at 12:52 am, Russell Standish <li...@hpcoders.com.au>
wrote:

> People here might be interested to know that an early contributer on
> this list, Hal Finney has just had a mobile phone named after him:
>
>
> https://www.engadget.com/2017/09/26/blockchain-smartphone-sirin-finney-solarin/


Also, people might be interested to know that Ether, the cryptocurrency of
the Ethereum platform, a distributed computer platform that runs autonomous
contracts, is divided into subunits of ”wei” and “finney”. Wei Dai was the
founder of this list.
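For reference, the denominations mentioned follow the standard Ethereum unit table (1 ether = 10^18 wei, 1 finney = 10^15 wei); a quick sketch of the conversions, with the constants taken from that table rather than from this thread:

```python
# Standard Ethereum denominations (assumed from the canonical unit table,
# not stated in the thread itself).
WEI_PER_ETHER = 10**18   # "wei" honours Wei Dai
WEI_PER_FINNEY = 10**15  # "finney" honours Hal Finney; 1 finney = 0.001 ether

def ether_to_wei(eth: int) -> int:
    # Convert a whole-ether amount to wei, the smallest unit.
    return eth * WEI_PER_ETHER

print(ether_to_wei(1) // WEI_PER_FINNEY)  # 1000 finney per ether
```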
-- 
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-26 Thread Stathis Papaioannou
On Wed, 27 Sep 2017 at 1:48 am, John Clark <johnkcl...@gmail.com> wrote:

>
> On Tue, Sep 26, 2017 at 7:33 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> It seems that you would want your assets distributed to the copies,
>> ideally both of them, if not both then one, randomly chosen (“it doesn’t
>> matter which one”).
>
>
> ​Yes. I want somebody tomorrow who remembers being me today because I
> prefer existence to nonexistence,  ​others may have a different preference
> and that's OK because there is no disputing matters of taste.
>
> ​> ​
>> That’s what someone would do if they expected to survive the copying
>> process.
>
>
> ​I do expect to survive the
> copying process
> ​, even better I expect I'll have a backup, although why my expectations
> should be of interest to anyone but me I don't know. ​
>

Then the question “what future experiences will I have” is not nonsensical.
If it were then I could not have the expectation of surviving, since to
survive I must have future experiences, and I could not conceive of having
future experiences if “I” loses meaning when I contemplate the
post-duplication future.

> --
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-26 Thread Stathis Papaioannou
On Tue, 26 Sep 2017 at 4:44 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Tue, Sep 26, 2017 at 7:48 AM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>
> ​> ​
>> Asking about your expectations is an attempt to show what your implicit
>> beliefs about your future are.
>>
>
> OK, If you say "What one and only one city do you expect to
> ​see​
> ​
> after you walk into the
> ​ ​
> that "you" duplicating machine?" I would remain silent because it is not
> my habit to respond to any old string of words, not even if the string of
> words are placed in a
> ​ ​
> grammatically correct order, not even if there is a question mark at the
> end of
> ​that​
> ​
> string. I can't give a answer if I don't know the question and I don't.
>
> ​> ​
>> if you are duplicated in Washington and Moscow would you like your assets
>> to be distributed 50/50 to the copies
>>
>
> ​That would depend on my personal
> idiosyncrasies
> ​ and also​
>  on how rich I was, if I was only living at the subsistence level I'd want
> all my assets to go to only one of the copies, it doesn't matter which one,
> because that way at least one of them would survive; if I were a
> billionaire I might prefer a different arrangement, and you might like
> something else entirely. Who cares? There is no disputing matters of taste.
> I thought we were interested in grand philosophical ideas and the nature
> of reality not the trivial likes and dislikes of individuals.
>

It seems that you would want your assets distributed to the copies, ideally
both of them, if not both then one, randomly chosen (“it doesn’t matter
which one”). That’s what someone would do if they expected to survive the
copying process.

> --
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-26 Thread Stathis Papaioannou
On Mon, 25 Sep 2017 at 7:51 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Sep 25, 2017 at 9:47 AM, Terren Suydam <terren.suy...@gmail.com>
> wrote:
>
> ​> ​
>> Then we agree that expectations are important, since the wrong ones can
>> kill us.
>>
>
> ​
> Forget important, expectations are not even meaningful in thought
> experiments involving people duplicating machines if
> ​ ​
> it is not clearly stated what is being expected. And if there is no way to
> tell if the prediction made
> ​ ​
> before the duplication turned out to be correct or not even AFTER the
> duplication is completed because of the frequent use of personal pronouns
> in a world that contains personal pronoun duplicating machines
> ​ ​
> then the entire exercise is useless.
>

Asking about your expectations is an attempt to show what your implicit
beliefs about your future are. You explicitly state that you have no
beliefs about your future in duplication experiments, because, you claim,
the question is meaningless. But what implications does this have for
your decisions when faced with one of these experiments? For example, if
you are duplicated in Washington and Moscow would you like your assets to
be distributed 50/50 to the copies, or would you prefer that you be
declared legally dead and your assets distributed to
your heirs as set out in your will?
-- 
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-05 Thread Stathis Papaioannou
On Wed, 6 Sep 2017 at 1:52 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Sep 4, 2017 at 11:35 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> It seems that you have no problem with 1:1 duplication - you agree that
>> you survive, just as if you had travelled by plane.
>>
>
> ​Certainly. And if you put me on a plane and I asked "Where is this plane
> going, what one and only one city will I be in tomorrow?"​ then that would
> be a legitimate  question because duplication was not involved so the
> meaning of the personal pronoun "I" is clear.
>
>
> ​> ​
>> So what is your position on 1:2 duplication: is it a death sentence?
>>
>
> ​Certainly not. But if I said "after I am duplicated ​and become 2 what
> one and only one city will I be in?" that would not be a question, that
> would just be noises made with my mouth with no more meaning than a burp,
> another noise made with the mouth.
>

How can you anticipate future experiences if it is impossible to even think
about it?

> --
Stathis Papaioannou



Re: A profound lack of profundity (and soon "the starting point")

2017-09-04 Thread Stathis Papaioannou
On Tue, 5 Sep 2017 at 12:26 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Sep 4, 2017 at 2:36 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>
>> ​> ​
>> They both say that the reconstitution has been enough good for them, and
>> both agree that among the W and the M experiences, they live only one of
>> them, and that they could not have figure out which one before the
>> duplication,
>>
>
> ​
> Of course they couldn't have figured out which one before the duplication,
> they couldn't figure out ANYTHING before the duplication because they
> didn't exist before the duplication!
>

It seems that you have no problem with 1:1 duplication - you agree that you
survive, just as if you had travelled by plane. So what is your position on
1:2 duplication: is it a death sentence?
-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-23 Thread Stathis Papaioannou
On Thu, 24 Aug 2017 at 8:07 am, John Clark <johnkcl...@gmail.com> wrote:

> On Wed, Aug 23, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
>
> ​> ​
>> If I say, "when you see the light turn red, stop walking", the "you"
>> refers to anyone who hears the sentence.
>>
>
> "You" could be replaced by "Stathis or John or Bruno or..." and continue
> on until every human on the planet
> was listed, but that might be a tad cumbersome so personal pronouns were
> invented. It was a good invention, they save time and up to now have caused
> few problems, but then up to now nobody has invented a
> people duplicating machine. ​
>

So no grammatical problem when the referent of the pronoun changes - that's
what pronouns are for.

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-23 Thread Stathis Papaioannou
On Thu, 24 Aug 2017 at 1:03 am, John Clark <johnkcl...@gmail.com> wrote:

>
> On Tue, Aug 22, 2017 at 11:11 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>>  Even without duplication, there is no rigid 1:1 connection between
>> pronouns and proper nouns.
>
>
> Of course there is, or rather there should be. Sometimes in
> very bad writing there is a long convoluted sentence that refers to many
> people and concludes with something like "then he shot him" when it's not
> clear who shot who because it's not clear which one was "he" and which one
> was "him".
> Personal pronouns are just a shorthand name for a particular person that is
> used to save time when the meaning of that nickname is obvious. But when
> the engineering obstacles are overcome and person duplicating machines
> become practical the meaning of personal pronouns will *NEVER* be
> obvious, and that's when the English language will need a major overhaul.
>

If I say, "when you see the light turn red, stop walking", the "you" refers
to anyone who hears the sentence.

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-22 Thread Stathis Papaioannou
On Wed, 23 Aug 2017 at 9:13 am, John Clark <johnkcl...@gmail.com> wrote:

>
> ​Today the day after the experiment was completed ask  Mr. W  2 questions:
>
> 1) Are you ​Bruno Marchal?
> 2) Do you see W?
>
> And then ask Mr. M 2 questions:
>
> 1) Are you ​Bruno Marchal?
> 2) Do you see M?
>
> If the answer in no for any of those 4 questions then my prediction made
> yesterday that Bruno Marchal will see 2 cities turned out to be wrong. But
> I don't think they will say no. Do you?
>

"I" is a singular pronoun. If there is a duplication it is correct for
Bruno Marchal to say "I will see one city", or "we will see two cities", or
"the copies of Bruno Marchal will see two cities", but not "I will see two
cities". Even without duplication, there is no rigid 1:1 connection between
pronouns and proper nouns.

>

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-21 Thread Stathis Papaioannou
On Tue, 22 Aug 2017 at 10:00 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Aug 21, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
>> ​>​
>> Which one and only one outcome will I see when I toss the coin?
>>
>
> I can't tell you today, but tomorrow after the flip I'll be able to say
> what the correct answer would have been. So it was a
> real question with a real answer, I just didn't know what it was yesterday.
>
>

But if the world split when you toss a coin, you would just give up on
English?

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-21 Thread Stathis Papaioannou
On Tue, 22 Aug 2017 at 8:39 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Aug 21, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
> ​> ​
>>>> While the outcome is certain for you, it is not certain for me.
>>>>
>>>
>>> ​That's because the meaning of the personal pronoun "me" will always be
>>> uncertain in ​a world that contains "me" duplicating machines.
>>>
>>>
>>
>> And this uncertainty constitutes the irreducible indeterminacy.
>>
>
> ​It signifies nothing more profound than poor writing​ and silly pronoun
> usage.
>
> ​> ​
>> After the duplication there will be two copies, one with $2 and the other
>> with nothing. Neither of them has the same amount of money as before.
>>
>
> ​OK. So what?​
>
>
> ​>>​
>>> ​As so often happens in this ​thread nobody can say if the above is
>>> true or not because nobody knows who Mr. I is.
>>>
>>>
>>
>> ​> ​
>> Mr I is the person who remembers going into the duplicator,
>>
>
> ​Then Mr.I will see *2* cities. QED​
>
>
> ​>>​
>>  and there are two of them. Mr I has gone through this many times and
>> knows that half the time he ends up in Washington with $2 and half the time
>> in Moscow with no money, hence next time he enters the duplicator he
>> believes he has a 1/2 chance of doubling his money or losing it. You tell
>> him he is not even wrong,
>
>
> ​I would never tell him that after he went through the duplicator, but I
> would tell him that before.​
>
>
>> ​>> ​
>>> ​A game involves skill and gambling involves probability and this
>>> pointless procedure involves neither. ​
>>>
>>>
>>
>> ​> ​
>> Games can be games of chance rather than skill.
>>
>
> ​But chance is not involved, everything is 100% predicable and it's not
> even difficult.
>

It's 100% predictable for an external observer, but not for the person
going through duplication, who feels that he has survived as one or other
of the duplicated. You, John Clark, would say the same thing if you went
through duplication: "I thought before the duplication that I would turn
into a soup of gibberish and nonsense, but surprisingly, here I am in
Washington, and there is another copy of me in Moscow".

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-21 Thread Stathis Papaioannou
On Tue, 22 Aug 2017 at 8:12 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Aug 21, 2017 at , Stathis Papaioannou <stath...@gmail.com> wrote:
>
>
>>>> Why does this make the question not a question?
>>>
>>> ​>>​
>>> ​Because the string of words with a question mark at the end was "What
>>> is the name of the one and only one city I will see after I become two?". ​
>>>
>>
>> ​> ​
>> And the answer is, "Either Washington or Moscow, but not both".
>>
>
> ​I didn't know ​ "Either Washington or Moscow, but not both" was the name
> of a city, seems like quite a mouthful. Where is this one and only one
> oddly named city? Is "Either Washington or Moscow, but not both" in the
> USA or in Russia?
>

Which one and only one outcome will I see when I toss the coin?
Either heads or tails but not both.

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-21 Thread Stathis Papaioannou
On 21 August 2017 at 11:16, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Aug 20, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
> ​> ​
>> There are two people after the event,
>
>
> ​Yes.
> ​
>
>
>> ​> ​
>> and each has his own answer about which one and only one city he sees,
>
>
> ​Yes.​
>
>
>> ​>​
>>  Why does this make the question not a question?
>
>
> ​Because the string of words with a question mark at the end was "What is
> the name of the one and only one city I will see after I become two?". ​
>

And the answer is, "Either Washington or Moscow, but not both". Many people
have tried it and they all agree: "I ended up in Washington about half the
time and Moscow about half the time, but never both". Naturally, they will
expect next time they go through that they will end up in one or other city
with 1/2 probability. You will tell them that they are talking gibberish,
but they will ignore you.

>


-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-20 Thread Stathis Papaioannou
On 21 August 2017 at 11:08, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Aug 20, 2017 , Stathis Papaioannou <stath...@gmail.com> wrote:
>
>
>> ​> ​
>> While the outcome is certain for you, it is not certain for me.
>>
>
> ​That's because the meaning of the personal pronoun "me" will always be
> uncertain in ​a world that contains "me" duplicating machines.
>
>

And this uncertainty constitutes the irreducible indeterminacy.

​> ​
>> One copy of me will win $1 and the other copy will lose $1.
>>
>
> Then Stathis Papaioannou will have the same amount of money after the
> duplication as before.
>

After the duplication there will be two copies, one with $2 and the other
with nothing. Neither of them has the same amount of money as before.

​> ​
>> Anticipating the future prior to duplication, I have a 1/2 chance of
>> doubling my money or losing it. If I don't bet, I will certainly have $1
>> before and after duplication.
>>
>
> ​As so often happens in this ​thread nobody can say if the above is true
> or not because nobody knows who Mr. I is.
>
>

Mr I is the person who remembers going into the duplicator, and there are
two of them. Mr I has gone through this many times and knows that half the
time he ends up in Washington with $2 and half the time in Moscow with no
money, hence next time he enters the duplicator he believes he has a 1/2
chance of doubling his money or losing it. You tell him he is not even
wrong, he is considering a meaningless question, but he just laughs at you.

A "fair game" when gambling is one where neither side has an advantage.
>>
>
> ​A game involves skill and gambling involves probability and this
> pointless procedure involves neither. ​
>
>

Games can be games of chance rather than skill. A fair game is one in which
the expected gain for the player in the long run is zero.

https://en.m.wikipedia.org/wiki/Gambling_mathematics

>


-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-20 Thread Stathis Papaioannou
it's your thought experiment ​not mine.
>
>
>> ​> ​
>> but it is obvious that when we assume mechanism, the question makes
>> perfect sense.
>>
>
> ​Then now that it's all over and you know all there is to know you should
> have a perfect one word answer to the question. ​So let's hear it!
>
>
>> You know with certainty (given what we have accepted) that after pushing
>> on the button, you (whoever you can possibly feel to be) will see only one
>> city, and this without knowing which one in Helsinki.
>>
>
> ​I'm not talking about Helsinki, that was BEFORE the experiment! ​I want
> to know what new thing you've learned from the experiment now that it's
> completed. I want to know what your new improved answer is.
>
> John K Clark
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-20 Thread Stathis Papaioannou
On Mon, 21 Aug 2017 at 2:27 am, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Aug 20, 2017 at 2:18 AM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> Let me explain the bet more clearly. I will be duplicated tomorrow in
>> Moscow and Washington. I have $1 in my pocket, and this will be duplicated
>> with me. (Assume this is legal provided that the money is duplicated along
>> with the owner). The bet I make is that after the duplication, I will pay
>> you $1 and you (who will not be duplicated) will pay me $2 if I am in
>> Washington, and nothing if I am in Moscow.
>>
>> ​ ​
>> The "I" here, to spell it out in case you are confused, is anyone who
>> remembers being me and making the bet.
>>
>
> Let's follow the money, assuming "me" also means "anyone who remembers
> being me and making the bet": then Mr. Me must pay John Clark $2 and John
> Clark must pay Mr. Me $2. We can now calculate the net result of all this
> with 100% certainty, John Clark ended up with exactly the same amount of
> money in his pocket at the end of the day that he had at the beginning,
> but Mr. Me ended up with twice as much money in his pockets due to the
> fact that the machine can not only duplicate people, it can duplicate
> dollar bills too. If John Clark were smart he'd refuse to be a
> collaborator in this counterfeiting operation unless he got a bigger
> slice of the action.
>

What to do with a person's assets in the event of duplication is a separate
topic. For now, assume that central banks allow duplication of money only
if its owner is also duplicated.

While the outcome is certain for you, it is not certain for me. One copy of
me will win $1 and the other copy will lose $1. Anticipating the future
prior to duplication, I have a 1/2 chance of doubling my money or losing
it. If I don't bet, I will certainly have $1 before and after duplication.
You say "Mr Me ended up with twice as much money" after duplication, but that is
not how the copies see it.


> ​> ​
>> This is a fair game,
>>
>
> ​It's not a game because skill is not involved and it's
> not a bet because probability
>
> ​is not involved. All that happens is that dollar bills are passed around
> in a circle and at the end of it absolutely nothing has changed for John
> Clark but Mr. Me has twice as much money. All this can be predicted with
> 100% certainty and we can also predict that It's all a bit silly. I'll be
> damned if I can find any philosophical significance in it.
>

A "fair game" when gambling is one where neither side has an advantage.
Casinos do not run fair games, or they would not be viable businesses. If I
were a gambler, you could propose paying me $1.90 rather than $2 if I end
up in Washington. You would then make $0.10 with certainty, while I would
get a better payout ratio than most gambling establishments give.

The philosophical significance of all this is that one can calculate the
probabilities and make rational decisions, which is not possible with the
"the question is gibberish" method.

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-20 Thread Stathis Papaioannou
On Sat, 19 Aug 2017 at 6:20 am, John Clark <johnkcl...@gmail.com> wrote:

> On Fri, Aug 18, 2017 at 1:48 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​>> ​
>>> Today before the duplication there is only one "I" but tomorrow after
>>> the duplication there will be two, therefore it is not a question, it
>>> is just a sequence of words that follow the rules of grammar correctly
>>> but mean nothing. And "How many cities will I be in tomorrow?" is not a
>>> question either.
>>>
>>>
>>
>> ​> ​
>> The language can cope with one "I" becoming two. The question is
>> understandable and has a definite answer.
>>
>
> ​You've said that many many times but strangely you've never given a definite
> answer as to what the name of that one and only one city actually turned
> out to be. Why is that?
>
>
> ​
>>> ​>> ​
>>> Explain what "bet" the personal pronoun "you" can make today that will
>>> economically benefit the personal pronoun "you" tomorrow. And just as
>>> important, explain what sort of rational agent
>>> would cover that "bet". Who is on the other side of the "bet"?
>>>
>>
>> ​> ​
>> I who am about to be duplicated bet $1 that I will end up in Washington.
>> You, who will not be duplicated, are the counterparty to this bet.
>>
>
> ​John Clark would never take that bet unless the "I" were replaced by
> "Bruno Marchal" or if "I" were specified as being any intelligent being
> that remembers making the bet.  ​
>
>
>
>> ​> ​
>> I am duplicated along with the $1 in my pocket.
>> ​ ​
>>
>
> ​If money can be duplicated too then the men and their pockets seem like
> unnecessary fifth wheels in all this.  ​
>
>
>
>> ​> ​
>> The Washington copy and the Moscow copy of me each give you $1.
>>
>
> ​Why would the Washington copy give me anything? The Washington man, to
> absolutely nobody's surprise, did see Washington so I give him $1.
>
>
>> ​> ​
>> You give the Washington copy $2
>>
>
> ​You said the bet was for ​$1.
>
>
>> ​> ​
>> and the Moscow copy nothing.
>>
>
> ​The Moscow copy should give me $1 because the Moscow man, to absolutely
> ​nobody's surprise, did NOT see Washington.
>
> ​So from the very start I ​know I will give $1 to W and M will give $1 to
> me? Why would I or any serious person accept this "bet"?
>
> ​> ​
>> The interesting thing about this bet is that you are not really
>> gambling, because you know exactly what you will get, and since it is a
>> fair bet you will have no net gain or loss.
>>
>
> ​Then it's not a bet, ​I don't know what it is but it's pretty silly.
>
>
>
>> ​> ​
>> I, on the other hand, expect to either get double or nothing,
>>
>
> What Mr. I expects is of no interest, what will happen is, and that is
> Bruno Marchal will get double or nothing.
>
> So there is no way Bruno Marchal can gain or lose by making the "bet" and
> no way for John Clark to gain or lose by accepting that "bet".
>
>
>> ​> ​
>> and I don't know which.
>>
>
> You don't know now and you will never know nor will anybody else because
> nobody knows which ONE is I after ONE I uses a I duplicating machine and
> becomes TWO.​
>
>

Let me explain the bet more clearly. I will be duplicated tomorrow in
Moscow and Washington. I have $1 in my pocket, and this will be duplicated
with me. (Assume this is legal provided that the money is duplicated along
with the owner). The bet I make is that after the duplication, I will pay
you $1 and you (who will not be duplicated) will pay me $2 if I am in
Washington, and nothing if I am in Moscow.

The "I" here, to spell it out in case you are confused, is anyone who
remembers being me and making the bet. So today "I" am just one person in
Helsinki, but tomorrow there will be two people, one in Moscow and one in
Washington, each of whom states "I made this bet yesterday".

This is a fair game, in the sense that it is not biased towards either
side. For you there is no net gain or loss, for me I either double my money
or get nothing. If the bet is changed so that I get a $3 payout if I am the
Washington copy then it is biased in my favour, if it is changed so that I
get $1.50 if I am the Washington copy then it is biased in your favour, and
a rational agent would use this to decide whether to bet or not.
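The fair/biased payout arithmetic described in this exchange can be sketched in a few lines (an illustrative check only, not part of the original email; the function name, the $1 stake and the 1/2 subjective probability are assumptions for the example):

```python
def expected_gain(payout, stake=1.0, p_washington=0.5):
    """Expected net gain for the duplicated bettor, who pays `stake`
    unconditionally and receives `payout` only in the branch where he
    finds himself in Washington (illustrative sketch)."""
    return p_washington * (payout - stake) + (1 - p_washington) * (-stake)

print(expected_gain(2.0))   # $2 payout: fair bet, expected gain 0.0
print(expected_gain(3.0))   # $3 payout: biased towards the bettor, +0.5
print(expected_gain(1.5))   # $1.50 payout: biased towards the counterparty, -0.25
```

With a $2 payout the expected gain is zero, while $3 and $1.50 tilt the bet one way or the other, matching the figures given in the email.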

We can understand the bet, agree on when there will be a payout and who
will get it, calculate whether one or other side is advantaged and decide
whether to make the bet. This is not consistent with the bet being based on
"nonsense".

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-18 Thread Stathis Papaioannou
On Sat, 19 Aug 2017 at 1:30 am, John Clark <johnkcl...@gmail.com> wrote:

> On Thu, Aug 17, 2017 at 12:00 AM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> There is a problem with asking "which one place will John Clark be in
>> tomorrow", because there will be two of them, in two different places.
>>
>
> ​True. Today before the duplication there is only one "John Clark" but
> tomorrow after the duplication there will be two, therefore it is not a
> question, it is just a sequence of words that follow the rules of grammar
> correctly but mean nothing. However "How many cities will John Clark be in
> tomorrow?" is a real question that has a real answer, and its two.
>
> ​> ​
>> There is no such problem with asking "which one place will I be in
>> tomorrow".
>
>
> False. Today before the duplication there is only one "I" but tomorrow
> after the duplication there will be two, therefore it is not a question,
> it is just a sequence of words that follow the rules of grammar correctly
> but mean nothing. And "How many cities will I be in tomorrow?" is not a
> question either.
>
>

The language can cope with one "I" becoming two. The question is
understandable and has a definite answer.

​> ​
>> The original John Clark would be wrong if he bet
>>
> ​ [...]​
>>
>
> ​THERE IS NO BET AND NEVER WAS! A bet needs a way to be resolved​
>
> ​and this has none. Tomorrow nobody will be any wiser than they are today.
> You can't build a grand philosophical construction on a foundation of sand
> and this isn't even sand, at least the word "sand" means something.  ​
>
>
>> ​> ​
>> The Washington John Clark will be right if the original bet "I will be in
>> Washington"
>>
>
> ​Only if "I" means a person who remembers making the "bet", and if it
> doesn't mean that then what the hell does that personal pronoun mean?  ​
>
> ​If that is what it means then "I"  will be a winner AND "I" will also be
> a loser. ​And so it's not a bet.
>

The two copies will each remember making a bet, and one will win while the
other will lose. This seems quite simple and obvious; what do you think is
wrong with it?

​> ​
>> We can understand the question, specify what would constitute a right or
>> wrong answer, make economic decisions based on the answer
>>
>
> Explain what "bet" the personal pronoun "you" can make today that will
> economically benefit the personal pronoun "you" tomorrow. And just as
> important, explain what sort of rational agent
> would cover that "bet". Who is on the other side of the "bet"?
>

I who am about to be duplicated bet $1 that I will end up in Washington.
You, who will not be duplicated, are the counterparty to this bet. I am
duplicated along with the $1 in my pocket. The Washington copy and the
Moscow copy of me each give you $1. You give the Washington copy $2 and the
Moscow copy nothing. The interesting thing about this bet is that you are
not really gambling, because you know exactly what you will get, and since
it is a fair bet you will have no net gain or loss. I, on the other hand,
expect to either get double or nothing, and I don't know which. If the bet
is not fair then rationally either you or I will have a reason to
participate or not participate. If the payout is $3 I should make the bet
and you should not, if it is $1.50 you should make the bet and I should not.

​> ​
>> demonstrate that even rats have an instinctive understanding of the
>> question;
>>
>
> ​I went into that in some detail just a few days ago and you did not
> respond, I'm not going to do it again.​
>
>

If you are referring to the fact that some of the rat copies will not get
any reward, that is the same in the equivalent experiment without copying.
If the rat has a 1/2 chance of a reward, after 10 trials there is a 1/2^10
probability that the rat will still not get a reward, so that rat won't
learn the behaviour, but in a multi-rat experiment most of the rats will.
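The 1/2^10 figure above works out as follows (a quick illustrative check, not part of the original exchange; the function name is invented for the example):

```python
def p_never_rewarded(n, p=0.5):
    # Probability that a rat receives no reward in any of n independent
    # trials, each with reward probability p.
    return (1 - p) ** n

print(p_never_rewarded(10))  # 0.0009765625, i.e. 1/1024
```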

> --
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-16 Thread Stathis Papaioannou
On Thu, 17 Aug 2017 at 10:39 am, John Clark <johnkcl...@gmail.com> wrote:

> On Wed, Aug 16, 2017 at 7:04 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​> ​
>> The difference between the past and the future in a deterministic
>> multiverse is that for an observer inside it the past is known but the
>> future is uncertain. Although it uncertain, it can be guessed at or
>> calculated using probability theory. If it can be guessed at or calculated,
>> questions about it are not "meaningless",
>>
>
> ​Predictions are meaningless if nobody knows what is being predicted, and
> that's exactly what happens when personal pronouns and not proper nouns are
> used in a world with personal pronoun duplicating machines. Even the very
> concept of probability itself becomes meaningless if it's impossible for
> anyone to EVER  know if the event the probability refers to happened or
> not, and that is also the case with Bruno's thought experiment.
>

There is a problem with asking "which one place will John Clark be in
tomorrow", because there will be two of them, in two different places. The
original John Clark would be wrong if he bet "John Clark will be only in
Washington", and wrong if he bet "John Clark will be only in Moscow".
(However, even this does not make the question meaningless, since we can
still understand it and why it is problematic).

There is no such problem with asking "which one place will I be in
tomorrow". The Washington John Clark will be right if the original bet "I
will be in Washington" and the Moscow John Clark will be wrong.

> We can understand the question, specify what would constitute a right or
wrong answer, make economic decisions based on the answer, demonstrate that
even rats have an instinctive understanding of the question; what more do
you require to make the question meaningful?
-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-16 Thread Stathis Papaioannou
On Wed, 16 Aug 2017 at 3:26 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Aug 14, 2017 at 9:31 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​
>>> ​>>​
>>> a rat can remember the past and a rat can use induction to make a
>>> prediction, and most important of all a rat knows if its prediction turned
>>> out to be correct or not and that enables the rat to improve its induction
>>> process for Mr. Rat's next prediction. But if Mr. I, who is about to enter
>>> a "I" duplicating chamber, asks the question "Will I see Moscow tomorrow?"
>>> the only answer Mr. I will ever get is "yes and no", and that is not an
>>> answer so that was not a question.
>>>
>>
>> ​> ​
>> If even the rat can understand it at a primitive level (as demonstrated
>> by its behaviour) then I think this goes against your claim that the
>> question is meaningless.
>>
>
> ​It's very meaningful when looking from the present back into the past,
> but NOT when looking from the present toward the future. ​
>
> People around here seem to think you can treat the future the same way you
> treat the past but you can't, if you could then you couldn't tell the
> difference between the past and the future, but you can.​
>
>

The difference between the past and the future in a deterministic
multiverse is that for an observer inside it the past is known but the
future is uncertain. Although it is uncertain, it can be guessed at or
calculated using probability theory. If it can be guessed at or calculated,
questions about it are not "meaningless", in the sense most people would
use the word.

​> ​
>> And I think that if you went through the duplication a few times your
>> copies would start to behave as if questions about their future were
>> meaningful.
>>
>
> ​If you send the rats through the duplicator 10 times you'll end up with
> 2^10 or 1024 rats. ​All 1024 rats will have seen different things from each
> other and thus have different memories, and thus formed different inductive
> rules, and thus will behave differently in the future.
>



> And all 1024 rats no matter how different their individual situation may be
> now will remember a single unbroken chain of events going all the way back
> to that single original rat. But the question wasn't about any member of
> that rat pack, the question was asked 10 days and 10 duplications ago about
> the single original rat:
>
> *What ONE thing will the ONE rat see in the future after the ONE rat
> becomes 1024 rats?*
>
> That can't be answered and it's not because the answer is unknown
> ​,​
> ​its​
>  because the answer does not exist and never will. All answers need a
> corresponding question and despite its conspicuous question mark the above
> is not a question, it's not even a stupid question, so there can't be an
> answer to it.
>
> John K Clark
>
>
>
>
>
-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-14 Thread Stathis Papaioannou
On Tue, 15 Aug 2017 at 9:09 am, John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Aug 14, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
> ​> ​
>> By their behaviour, rats show an operational understanding of
>> probability.
>
>
> ​That's because a rat can remember the past and a rat can use induction to
> make a prediction, and most important of all a rat knows if its prediction
> turned out to be correct or not and that enables the rat to improve its
> induction process for Mr. Rat's next prediction. But if Mr. I, who is about
> to enter a "I" duplicating chamber, asks the question "Will I see Moscow
> tomorrow?" the only answer Mr. I will ever get is "yes and no", and that is
> not an answer so that was not a question.
>

If even the rat can understand it at a primitive level (as demonstrated by
its behaviour) then I think this goes against your claim that the question
is meaningless. And I think that if you went through the duplication a few
times your copies would start to behave as if questions about their future
were meaningful.

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-14 Thread Stathis Papaioannou
> On 8/14/2017 10:40 AM, Bruno Marchal wrote:
>
>
> If after a rat has been duplicated the 2 rats then have different
> experiences, such as one getting a electric shock and one not getting one,
> then they will no longer be identical and will behave ​differently in the
> future. I see no indeterminacy or mystery
> ​or deep philosophy ​
> in any of this.
>
>
> The rat can't see the indeterminacy, because we can't explain to the rat
> the protocol.
>

By their behaviour, rats show an operational understanding of probability.
The rat can cut through spurious philosophical argument, such as the claim
that making predictions in duplication experiments is gibberish.
-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-14 Thread Stathis Papaioannou
On Tue, 15 Aug 2017 at 2:52 am, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 8/14/2017 6:12 AM, Stathis Papaioannou wrote:
>
>
> On Mon, 14 Aug 2017 at 10:31 pm, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 14/08/2017 4:20 pm, Stathis Papaioannou wrote:
>>
>> On Mon, 14 Aug 2017 at 3:08 pm, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>>> On 14/08/2017 2:32 pm, Stathis Papaioannou wrote:
>>>
>>> On 14 August 2017 at 14:15, Bruce Kellett <bhkell...@optusnet.com.au>
>>> wrote:
>>>
>>> The point, as I see it, is that if, after duplication, the copies can
>>>> communicate, and they agree that they both have psychological continuity
>>>> with the original person, and that, consequently, the original person saw
>>>> both cities/results.
>>>>
>>>
>>> If they could communicate after duplication, they would agree that they
>>> both have psychological continuity with the original person, but why would
>>> they agree that the original person saw both cities or results? This seems
>>> to be the same point of dispute. I would feel exactly the same whether my
>>> copy was in the next street, the next galaxy or the next universe, and I
>>> would have exactly the same expectations about the future if I were to
>>> undergo duplication again.
>>>
>>>
>>>
>>> That is only the case if you choose to ignore a large part of the
>>> evidence available to you. In exclusively concentrating on a closed 1p view
>>> after duplication, you are inconsistent with the fact that you used 3p
>>> information about the protocol before duplication. Why refuse to use this
>>> 3p (or 2p if you talk to you doppelganger in person) information after
>>> duplication?
>>>
>>
>> By what process would the 3p information make me feel differently about
>> myself?
>>
>>
>> I don't know. How does any knowledge from outside make you feel
>> differently about yourself?
>>
>
> New information may shock me, but it can't literally make me feel like a
> different person - replace my past with a different past, make me think
> that I'm in a different place doing different things with a different
> identity.
>
>
> Even if the information was that you have a duplicate from which you have
> only momentarily been separated and you will soon be merged again?
>

Yes, if I knew that I would anticipate the merging but feel the same until
it actually happened.

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-14 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 10:31 pm, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 14/08/2017 4:20 pm, Stathis Papaioannou wrote:
>
> On Mon, 14 Aug 2017 at 3:08 pm, Bruce Kellett <
> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>
>> On 14/08/2017 2:32 pm, Stathis Papaioannou wrote:
>>
>> On 14 August 2017 at 14:15, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>> The point, as I see it, is that if, after duplication, the copies can
>>> communicate, and they agree that they both have psychological continuity
>>> with the original person, and that, consequently, the original person saw
>>> both cities/results.
>>>
>>
>> If they could communicate after duplication, they would agree that they
>> both have psychological continuity with the original person, but why would
>> they agree that the original person saw both cities or results? This seems
>> to be the same point of dispute. I would feel exactly the same whether my
>> copy was in the next street, the next galaxy or the next universe, and I
>> would have exactly the same expectations about the future if I were to
>> undergo duplication again.
>>
>>
>>
>> That is only the case if you choose to ignore a large part of the
>> evidence available to you. In exclusively concentrating on a closed 1p view
>> after duplication, you are inconsistent with the fact that you used 3p
>> information about the protocol before duplication. Why refuse to use this
>> 3p (or 2p if you talk to you doppelganger in person) information after
>> duplication?
>>
>
> By what process would the 3p information make me feel differently about
> myself?
>
>
> I don't know. How does any knowledge from outside make you feel
> differently about yourself?
>

New information may shock me, but it can't literally make me feel like a
different person - replace my past with a different past, make me think
that I'm in a different place doing different things with a different
identity.

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-14 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 3:08 pm, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 14/08/2017 2:32 pm, Stathis Papaioannou wrote:
>
> On 14 August 2017 at 14:15, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
> The point, as I see it, is that if, after duplication, the copies can
>> communicate, and they agree that they both have psychological continuity
>> with the original person, and that, consequently, the original person saw
>> both cities/results.
>>
>
> If they could communicate after duplication, they would agree that they
> both have psychological continuity with the original person, but why would
> they agree that the original person saw both cities or results? This seems
> to be the same point of dispute. I would feel exactly the same whether my
> copy was in the next street, the next galaxy or the next universe, and I
> would have exactly the same expectations about the future if I were to
> undergo duplication again.
>
>
>
> That is only the case if you choose to ignore a large part of the evidence
> available to you. In exclusively concentrating on a closed 1p view after
> duplication, you are inconsistent with the fact that you used 3p
> information about the protocol before duplication. Why refuse to use this
> 3p (or 2p if you talk to you doppelganger in person) information after
> duplication?
>

By what process would the 3p information make me feel differently about
myself?

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On 14 August 2017 at 14:15, Bruce Kellett <bhkell...@optusnet.com.au> wrote:

The point, as I see it, is that if, after duplication, the copies can
> communicate, and they agree that they both have psychological continuity
> with the original person, and that, consequently, the original person saw
> both cities/results.
>

If they could communicate after duplication, they would agree that they
both have psychological continuity with the original person, but why would
they agree that the original person saw both cities or results? This seems
to be the same point of dispute. I would feel exactly the same whether my
copy was in the next street, the next galaxy or the next universe, and I
would have exactly the same expectations about the future if I were to
undergo duplication again.



-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 11:30 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 14/08/2017 11:19 am, Stathis Papaioannou wrote:
>
> On Mon, 14 Aug 2017 at 10:30 am, Bruce Kellett <
> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>
>> On 14/08/2017 2:51 am, Stathis Papaioannou wrote:
>>
>> On Sun, 13 Aug 2017 at 9:38 pm, Bruce Kellett <
>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>
>>>
>>> I think the problem I see is in the insistence that one restrict the
>>> subjects of the duplication to first person knowledge. Their knowledge of
>>> the protocol cannot be purely 1p -- there has to be a 3p component in that
>>> they are told the set up, and they have sufficient background 3p knowledge
>>> to trust the operator, etc. Then, after duplication, they also have access
>>> to 3p knowledge about both duplicates -- they can arrange to communicate,
>>> for example. So they can easily become aware of the fact that the person
>>> that remembers being Helsinki man sees both Moscow and Washington. My point
>>> here is that if you restrict them to 1p knowledge after the duplication,
>>> you must, in order to be consistent, restrict them to just 1p knowledge
>>> before the experiment; in which case they are necessarily unaware of the
>>> details of the protocol and will have a different perception of what has
>>> happened.
>>>
>>> In the case of restriction to 1p knowledge the situation becomes much
>>> more analogous to what happens in QM where experiments might have multiple
>>> outcomes. In that case there is no possibility of communication between the
>>> different branches of the wave function, so there is genuine uncertainty
>>> about outcomes, and probabilities are estimated from limiting relative
>>> frequencies in the usual way. If one derives and/or applies the Born Rule
>>> in QM, then one can assign low probabilities to untypical sequences of
>>> results and the like. If you mix 1p and 3p knowledge in the duplication
>>> scenario, you lose this parallel with QM because the analogous 3p knowledge
>>> is not available in QM.
>>>
>>
>> If someone believes the MWI is true, then he is aware of the protocol and
>> trusts the operator. In duplication experiments there is no logical reason
>> why the copies could not be kept ignorant of each other
>>
>>
>> And there is no logical reason that prevents them from arranging
>> beforehand to communicate after the experiment -- in Helsinki, I could
>> decide to post my subsequent location to Facebook, and communicate with
>> other similar posts.
>>
>
> But if they were prevented from communicating, would it make any
> fundamental difference to the experiment?
>
> and there is no logical reason why copies in the MWI can't see what each
>> other is doing.
>>
>> Such inter-branch communication in MWI is physically impossible. This is
>> the main reason why person duplication experiments can never emulate QM,
>> MWI or not.
>>
>
> It is physically impossible, but what fundamental difference would it make
> if you could communicate with a copy in a parallel world who diverged from
> you a while ago? Would you suddenly feel that you weren't you, or that you
> were in two places at once?
>
>
> The ability to communicate, or the physical impossibility of such
> communication, is the fundamental difference between the duplication
> scenario and quantum MWI. It changes the probabilities: just think of
> duplication of the apparatus in a spin measurement experiment without
> simultaneous duplication of the experimenter -- then it is clear that I get
> both spin up and spin down, in my laboratory, in front of my eyes. This is
> not possible in MWI since the branches are, by definition, non-interacting.
>

The equivalent examples would be if the experimenter along with the lab and
the apparatus were duplicated, with one experimenter seeing spin up and the
other spin down. What difference would it then make if the experimenters,
now two of them, walked down the road to see each other, or if they were
prevented from doing so?

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 10:30 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 14/08/2017 2:51 am, Stathis Papaioannou wrote:
>
> On Sun, 13 Aug 2017 at 9:38 pm, Bruce Kellett <
> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>
>>
>> I think the problem I see is in the insistence that one restrict the
>> subjects of the duplication to first person knowledge. Their knowledge of
>> the protocol cannot be purely 1p -- there has to be a 3p component in that
>> they are told the set up, and they have sufficient background 3p knowledge
>> to trust the operator, etc. Then, after duplication, they also have access
>> to 3p knowledge about both duplicates -- they can arrange to communicate,
>> for example. So they can easily become aware of the fact that the person
>> that remembers being Helsinki man sees both Moscow and Washington. My point
>> here is that if you restrict them to 1p knowledge after the duplication,
>> you must, in order to be consistent, restrict them to just 1p knowledge
>> before the experiment; in which case they are necessarily unaware of the
>> details of the protocol and will have a different perception of what has
>> happened.
>>
>> In the case of restriction to 1p knowledge the situation becomes much
>> more analogous to what happens in QM where experiments might have multiple
>> outcomes. In that case there is no possibility of communication between the
>> different branches of the wave function, so there is genuine uncertainty
>> about outcomes, and probabilities are estimated from limiting relative
>> frequencies in the usual way. If one derives and/or applies the Born Rule
>> in QM, then one can assign low probabilities to untypical sequences of
>> results and the like. If you mix 1p and 3p knowledge in the duplication
>> scenario, you lose this parallel with QM because the analogous 3p knowledge
>> is not available in QM.
>>
>
> If someone believes the MWI is true, then he is aware of the protocol and
> trusts the operator. In duplication experiments there is no logical reason
> why the copies could not be kept ignorant of each other
>
>
> And there is no logical reason that prevents them from arranging
> beforehand to communicate after the experiment -- in Helsinki, I could
> decide to post my subsequent location to Facebook, and communicate with
> other similar posts.
>

But if they were prevented from communicating, would it make any fundamental
difference to the experiment?

and there is no logical reason why copies in the MWI can't see what each
> other is doing.
>
> Such inter-branch communication in MWI is physically impossible. This is
> the main reason why person duplication experiments can never emulate QM,
> MWI or not.
>

It is physically impossible, but what fundamental difference would it make
if you could communicate with a copy in a parallel world who diverged from
you a while ago? Would you suddenly feel that you weren't you, or that you
were in two places at once?

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 7:59 am, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Aug 13, 2017  PM, Stathis Papaioannou <stath...@gmail.com> wrote:
>
> ​> ​
>> After duplication, the copies will not claim to be the same person any
>> more,
>>
>
> ​True but both will claim they are the "I
> ' who yesterday asked the question "What city will I see?"​
>
> ​. Do you think maybe just maybe that could cause a wee bit of confusion
> and when this sort of thing becomes commonplace the rules on
> English shouldn't continue on in the same old way they always have?  ​
>

That both refer to themselves as "I" and claim to have entered the duplicator
in Helsinki yesterday and asked "What city will I see?" is not
grammatically confusing at all. The problem is with proper nouns, not
pronouns. Legally it could be problematic, due to disputes over property,
for example, but the English language will cope.

​> ​
>> But they will each correctly refer to themselves as "I",
>>
>
> ​Yes​
>
> ​the TWO will ​
> refer to themselves as "I"
> ​, and the ONE before the duplication that both of the TWO remember being
> will also referred to himself as "I". And that is why you can bet your
> bottom dollar the English language will radically change the way it uses
> personal pronouns the day the engineering difficulties are overcome and "I"
> duplicating machines become practical.   ​
>

Multiple people today refer to themselves as "I" and there is no problem.

​> ​
>> You agree that after the duplication one will say he was right and the
>> other will say he was wrong, which is an answer
>>
>
> ​The only one who can answer ​that question is Mr. I,  and Mr. I says the
> answer is yes and no. Shady politicians may say yes and no is an answer but
> I don't.
>
>

The W copy will say "yesterday I predicted I would be in W today, and I was
right". The M copy will say "yesterday I predicted I would be in W today,
and I was wrong". I think you are the only person who would claim to have a
problem understanding this.

​>> ​
>>> ​Yes, and
>>> BOTH are "I:
>>>
>>> ​. And all this is 100% predictable. ​
>>>
>>
>> ​> ​>
>> But not to the copies
>>
>
> ​Of course the copies couldn't have predicted what city they will see
> before the duplication, ​they didn't exist then!!
>
>

My tomorrow self doesn't exist yet; does this mean there is no point in
planning anything for tomorrow?

​> ​
>> because it will seem to them that they either got lucky or got unlucky
>> with the answer.
>>
>
> ​If the Moscow man is surprised to see Moscow, or after seeing Moscow he
> is surprised to be informed that he is the Moscow man ​then the Moscow man
> isn't very bright.
>
>

He remembers being uncertain yesterday whether he would see Moscow or
Washington, and was hoping it would be Washington because that is what he
bet on and now obviously has lost the bet. Your claim that he is not really
the same person as the original and that the bet was gibberish does not make
him think differently or behave differently the next time he is facing
duplication.

​> ​
>> Everyone watching knows exactly what will happen, the subject prior to
>> duplication knows intellectually exactly what will happen
>>
>
> ​Yes.​
>
>
>
>> ​> ​
>> but the subject nevertheless has a sense of uncertainty because he feels
>> he will
>> ​
>> end up in one or other city, but not both.
>>
>
> What a subject "feels​" depends entirely on the emotional makeup of the
> specific subject, no doubt some will feel they will end up in
> Santa Claus's workshop
> ​, but science is about what will happen not what some
> hillbilly
> ​ thinks will happen.
>  ​And by the way, nothing will happen to *THE* subject, something will
> happen to TWO subjects.
>

An intelligent subject who trusts the experimental setup knows that he will
end up in one or other city but not both, and not in Santa Claus's
workshop. Even a less intelligent subject, such as a rat, will figure this out
and set his expectations for the future based on his memory of going
through the duplicator in the past. Even the copies of stubborn old John
Clark will come to this conclusion.

>

-- 
Stathis Papaioannou



Re: Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 5:38 am, Brent Meeker <meeke...@verizon.net> wrote:

> On 8/12/2017 3:58 PM, Bruce Kellett wrote:
>
>
> You try to help John C., but you contradict his "theory" (which is indeed
> based on the 1p/3p confusion).
>
>
> I suggest that the whole of step 3 is based on a 1p/3p confusion. If the
> duplicated subject does not have 3p knowledge of the protocol, he will
> never be aware of being duplicated. In fact, he can never get first person
> knowledge of that duplication, even if he is, in fact, duplicated.
>
>
> Let's examine that a bit.  Suppose I've created an AI.  Could this AI
> experience "being in Moscow *and* being in Washington".  I think so.  I
> simply provide duplicate sets of sensors, visual, audio, temperature, etc
> in both M and W.  Now suppose the AI consists of two computers
> synchronously executing the same AI routine using the same sensory inputs,
> and this AI is connected to sensors in Helsinki.  Now I switch the sensors
> to those in M *and* W.  The AI experience M *and* W.  But suppose that
> instead I switch one of the computers to the M sensors and the other to the
> W sensors.  You ask the AI, when still connected to Helsinki, to bet on
> whether it will experience M *xor* W.  Is there a right answer?
>
>
> Brent
>
>
-- 
Stathis Papaioannou

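Brent's two-computer scenario above can be sketched as a toy simulation. This is purely illustrative: the `AI` class, its `memory` list, and the sensor labels are hypothetical stand-ins, not anyone's actual proposal — the point is only that identical routines on identical inputs stay one stream, and diverging inputs split it.

```python
class AI:
    """Toy stand-in for Brent's AI: a routine whose state is just the
    record of sensory inputs it has received so far."""
    def __init__(self, memory):
        self.memory = list(memory)

    def step(self, sensor_input):
        # Each sensory input becomes part of the experiential record.
        self.memory.append(sensor_input)
        return f"experiencing {sensor_input}"

# Phase 1: both computers connected to the Helsinki sensors --
# identical states, so arguably one experiential stream on two machines.
a = AI(["Helsinki"])
b = AI(["Helsinki"])
assert a.memory == b.memory

# Phase 2 (Brent's question): computer a is switched to the Moscow
# sensors and b to the Washington sensors. The single stream splits
# into two streams that share a past.
print(a.step("Moscow"))      # -> experiencing Moscow
print(b.step("Washington"))  # -> experiencing Washington
assert a.memory != b.memory
assert a.memory[0] == b.memory[0] == "Helsinki"  # shared history
```

On this sketch, asking the Helsinki-stage AI to bet on "M *xor* W" is asking a single state to predict which of two divergent successors it will be — which is exactly where the disagreement in the thread lies.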


Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Sun, 13 Aug 2017 at 9:38 pm, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 13/08/2017 6:00 pm, Stathis Papaioannou wrote:
>
> On 13 August 2017 at 16:48, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 13/08/2017 10:01 am, Stathis Papaioannou wrote:
>>
>> On Sun, 13 Aug 2017 at 9:19 am, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>>> On 13/08/2017 9:05 am, Stathis Papaioannou wrote:
>>>
>>> On 13 August 2017 at 08:48, Bruce Kellett < <bhkell...@optusnet.com.au>
>>> bhkell...@optusnet.com.au> wrote:
>>>
>>>> On 13/08/2017 12:04 am, Stathis Papaioannou wrote:
>>>>
>>>> On Sat, 12 Aug 2017 at 4:52 pm, Bruce Kellett <
>>>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>>>
>>>>> On 12/08/2017 1:42 pm, Stathis Papaioannou wrote:
>>>>>
>>>>>
>>>>> First person experience is individual and private. The third person
>>>>> point of view is the view of an external observer. Suppose person A is
>>>>> observed laughing by person B. The behaviour - the laughing - can be
>>>>> observed by anyone; this is the third person point of view. Person A might
>>>>> be experiencing happiness or amusement; this is the first person point of
>>>>> view and only person A himself has it. Finally, person B has visual and
>>>>> auditory experiences and knowledge of the outside world (there are 
>>>>> laughing
>>>>> entities in it), and this is again from the first person point of view. I
>>>>> would say that knowledge is a type of experience, and therefore always
>>>>> first person and private; information is that which is third person
>>>>> communicable. But perhaps this last point is a matter of semantics.
>>>>>
>>>>>
>>>>> If your knowledge is gained from someone else, it is necessarily
>>>>> communicable information, and thus third person. First person is your
>>>>> personal experience, which is not communicable. However, knowledge gained
>>>>> by experience is communicable, and thus third person. Otherwise, all that
>>>>> you say above is mere logic chopping.
>>>>>
>>>>
>>>> Most first person experiences are based on third person information,
>>>> namely sensory data.
>>>>
>>>>
>>>> How is sensory data 'third person information'? That would make
>>>> everything 3p, and you have eliminated the first person POV. If I
>>>> experience the pleasure of sitting in the sun on a fine spring morning,
>>>> that is surely a first person experience, and entirely sensory in origin.
>>>>
>>>> Even a priori knowledge, such as mathematical knowledge, starts with
>>>> learning about the subject from outside sources.
>>>>
>>>> Returning to the point, why were you claiming that the subject on a
>>>> duplication experiment cannot have first person knowledge of duplication?
>>>> That would mean no-one could ever have first person knowledge of anything.
>>>>
>>>>
>>>> If you go into the duplicating machine without being told what it is,
>>>> then you are duplicated and come out in Moscow, you will know that you have
>>>> been transported from Helsinki, but how can you know anything about any
>>>> duplicates? As far as you know -- not knowing the protocol -- you could
>>>> simply have been rendered unconscious and flown to Moscow. How does 1p
>>>> experience tell the difference?
>>>>
>>>> This is why I think some 3p is being mixed in with 1p experiences in
>>>> this duplication protocol. The subject only knows the protocol by being
>>>> told about it. How does he know he is not being lied to?
>>>>
>>>
>>> This is the case with any experience whatsoever: you come to a
>>> conclusion about what has happened based on your observations and
>>> deductions, but you could be mistaken.
>>>
>>>
>>> That would appear to put a large hole in Bruno's distinction between
>>> quanta and qualia. The sensation of the sun on my face is veridical -- I
>>> might be mistaken about it being the sun, but the sensation is
>>> incontrovertible. But things that I am told about are in a different
>>> category -- I have no immediate incontrovertible experience associated

Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On Mon, 14 Aug 2017 at 1:56 am, John Clark <johnkcl...@gmail.com> wrote:

> On Sat, Aug 12, 2017 at 11:29 PM, Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
> ​
>>> ​>> ​
>>> Pronouns work fine today because nobody has yet made an "I" duplicating
>>> machine, but when they do the English language is going to need a massive
>>> overhaul.
>>>
>>
>> ​> ​
>> There are already billions of "I's" in the world without such a
>> duplicating machine, and no-one is confused by the large number of them.
>>
>
> That's because none of the billions of "I's" in the world right now claim
> to be the same I.
>

After duplication, the copies will not claim to be the same person any
more, because there are obviously two of them. But they will each correctly
refer to themselves as "I", and talk about their shared past before the
duplication.

> The fact that pronouns seldom cause confusion today (except on this list)
> isn't due to some physical law or deep philosophical principle, it's simply
> due to a temporary lack of technological prowess that limits the options on
> what we can do. It's the same reason we don't today have 600 mph trains
> crossing continents. There is no physical, mathematical or philosophical
> reason this insufficient engineering capability will continue forever, or
> even until the end of this century. The times are changing.
>
> ​> ​
>> If the subject predicts, prior to duplication, "I will see W" then
>>
> ​ [...]
>>
>
> Then will that "prediction" turn out to be correct?
>
> After the duplication one I will say yes and the other I will say no. So
> the answer to the question isn't just unknown prior to duplication, it
> *doesn't exist*, and the answer doesn't exist after the duplication
> either. The answer will NEVER exist. So it's not a question.
>

You agree that after the duplication one will say he was right and the
other will say he was wrong, which is an answer, and an easily verifiable
one, so why do you say the answer does not exist?

​> ​
>> one copy will be correct and the other copy will be wrong.
>>
>
> Yes, and BOTH are "I". And all this is 100% predictable.
>

But not to the copies, because it will seem to them that they either got
lucky or got unlucky with the answer. Everyone watching knows exactly what
will happen, the subject prior to duplication knows intellectually exactly
what will happen, but the subject nevertheless has a sense of uncertainty
because he feels he will end up in one or other city, but not both. It is
this subjective sense of uncertainty despite knowing exactly what will
happen objectively that is the first person indeterminacy. Perhaps you can
see this, but your mind rebels at the thought of it, driving you to call it
"gibberish" where others might use a different word such as "paradoxical".

​> ​
>> If the subject predicts, prior to duplication,  "SP1 will see W" then
>> both copies will be correct.
>>
>
> True because tautologies are always true; SP1 means the Stathis
> Papaioannou that will see W. And all this is easily predictable, so who's
> going to be stupid enough to take their bet? And where is this
> indeterminacy I keep hearing about.
>
>  John K Clark
>
>
>
>
>
-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-13 Thread Stathis Papaioannou
On 13 August 2017 at 16:48, Bruce Kellett <bhkell...@optusnet.com.au> wrote:

> On 13/08/2017 10:01 am, Stathis Papaioannou wrote:
>
> On Sun, 13 Aug 2017 at 9:19 am, Bruce Kellett <
> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>
>> On 13/08/2017 9:05 am, Stathis Papaioannou wrote:
>>
>> On 13 August 2017 at 08:48, Bruce Kellett < <bhkell...@optusnet.com.au>
>> bhkell...@optusnet.com.au> wrote:
>>
>>> On 13/08/2017 12:04 am, Stathis Papaioannou wrote:
>>>
>>> On Sat, 12 Aug 2017 at 4:52 pm, Bruce Kellett <bhkell...@optusnet.com.au>
>>> wrote:
>>>
>>>> On 12/08/2017 1:42 pm, Stathis Papaioannou wrote:
>>>>
>>>>
>>>> First person experience is individual and private. The third person
>>>> point of view is the view of an external observer. Suppose person A is
>>>> observed laughing by person B. The behaviour - the laughing - can be
>>>> observed by anyone; this is the third person point of view. Person A might
>>>> be experiencing happiness or amusement; this is the first person point of
>>>> view and only person A himself has it. Finally, person B has visual and
>>>> auditory experiences and knowledge of the outside world (there are laughing
>>>> entities in it), and this is again from the first person point of view. I
>>>> would say that knowledge is a type of experience, and therefore always
>>>> first person and private; information is that which is third person
>>>> communicable. But perhaps this last point is a matter of semantics.
>>>>
>>>>
>>>> If your knowledge is gained from someone else, it is necessarily
>>>> communicable information, and thus third person. First person is your
>>>> personal experience, which is not communicable. However, knowledge gained
>>>> by experience is communicable, and thus third person. Otherwise, all that
>>>> you say above is mere logic chopping.
>>>>
>>>
>>> Most first person experiences are based on third person information,
>>> namely sensory data.
>>>
>>>
>>> How is sensory data 'third person information'? That would make
>>> everything 3p, and you have eliminated the first person POV. If I
>>> experience the pleasure of sitting in the sun on a fine spring morning,
>>> that is surely a first person experience, and entirely sensory in origin.
>>>
>>> Even a priori knowledge, such as mathematical knowledge, starts with
>>> learning about the subject from outside sources.
>>>
>>> Returning to the point, why were you claiming that the subject on a
>>> duplication experiment cannot have first person knowledge of duplication?
>>> That would mean no-one could ever have first person knowledge of anything.
>>>
>>>
>>> If you go into the duplicating machine without being told what it is,
>>> then you are duplicated and come out in Moscow, you will know that you have
>>> been transported from Helsinki, but how can you know anything about any
>>> duplicates? As far as you know -- not knowing the protocol -- you could
>>> simply have been rendered unconscious and flown to Moscow. How does 1p
>>> experience tell the difference?
>>>
>>> This is why I think some 3p is being mixed in with 1p experiences in
>>> this duplication protocol. The subject only knows the protocol by being
>>> told about it. How does he know he is not being lied to?
>>>
>>
>> This is the case with any experience whatsoever: you come to a conclusion
>> about what has happened based on your observations and deductions, but you
>> could be mistaken.
>>
>>
>> That would appear to put a large hole in Bruno's distinction between
>> quanta and qualia. The sensation of the sun on my face is veridical -- I
>> might be mistaken about it being the sun, but the sensation is
>> incontrovertible. But things that I am told about are in a different
>> category -- I have no immediate incontrovertible experience associated with
>> them. I am aware of words being spoken, but I am not immediately aware of
>> their veracity.
>>
>
> You feel the Sun on your face, see the Sun in the sky and make deductions
> about a hot, bright object in space. It is an analogous process when you
> hear human speech and come to conclusions about the world.
>
>
> And I compare notes with other people so that I can be assured that I am
> not totally deceived. Thus such knowledge

Re: A profound lack of profundity

2017-08-12 Thread Stathis Papaioannou
On 13 August 2017 at 11:16, John Clark <johnkcl...@gmail.com> wrote:

> On Sat, Aug 12, 2017  Stathis Papaioannou <stath...@gmail.com> wrote:
>
> ​> ​
>> You call yourself "I" and I call myself "I", simultaneously, and we don't
>> fight over who deserves the title, because that is how pronouns work.
>>
>
> Pronouns work fine today because nobody has yet made an "I" duplicating
> machine, but when they do the English language is going to need a massive
> overhaul.
>

There are already billions of "I's" in the world without such a duplicating
machine, and no-one is confused by the large number of them.


> ​> ​
>> If the bet is that "SP1 will see W" then this will happen as a matter of
>> definition, and it isn't an interesting bet.
>>
>
> ​I know that's the problem. But there are worse things than being
> uninteresting, like being incoherent.   ​
>
>
> ​> ​
>> If the bet is that "I will see W" it may look like a similar bet but it
>> is in fact different
>>
>
> ​I agree it is different. One may be boring and predictable and pointless
> but it ​is a bet, the other is not a bet, it's just a sequence of words.
>
>
>
>> ​> ​
>> So one copy wins the bet
>>
>
> ​So which one wins the bet? The one that sees W. But which one sees W?
> The one that wins the bet. And round and round we go.​
>
>
> ​> ​
>> and everyone agrees on this.
>>
>
> ​And everyone correctly predicted this. And I correctly predicted it would
> be silly.
>

If the subject predicts, prior to duplication, "I will see W" then one copy
will be correct and the other copy will be wrong. If the subject predicts,
prior to duplication,  "SP1 will see W" then both copies will be correct.
They are different bets, but both are coherent, and it is clear who has won
and who has lost.


-- 
Stathis Papaioannou

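The two bets Stathis distinguishes above can be modeled in a few lines. Again this is only an illustrative sketch: `duplicate`, the `city` field, and the subject record are hypothetical devices for making the asymmetry between the two predictions explicit, not a claim about how duplication would actually work.

```python
def duplicate(subject):
    """Toy duplication: the subject is copied; one copy wakes in
    Washington (W), the other in Moscow (M)."""
    return (dict(subject, city="W"), dict(subject, city="M"))

subject = {"name": "SP", "prediction": "I will see W"}
sp_w, sp_m = duplicate(subject)

# Bet 1: "I will see W". Each copy scores the bet against its own city.
# Exactly one copy turns out right and one wrong -- an easily verifiable
# outcome, yet no third-person fact available before duplication picks
# out which experience "I" will have.
outcomes = sorted(copy["city"] == "W" for copy in (sp_w, sp_m))
assert outcomes == [False, True]

# Bet 2: "SP1 will see W", where SP1 is *defined* as the copy that sees
# W. This is true by construction: both copies can verify it, and
# nothing is at risk -- a tautology rather than a prediction.
sp1 = sp_w
assert sp1["city"] == "W"
```

The sketch makes the disagreement in the thread concrete: Bet 2 is deterministic bookkeeping, while Bet 1 is where the claimed first-person indeterminacy lives.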


Re: A profound lack of profundity

2017-08-12 Thread Stathis Papaioannou
On Sun, 13 Aug 2017 at 9:19 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> On 13/08/2017 9:05 am, Stathis Papaioannou wrote:
>
> On 13 August 2017 at 08:48, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 13/08/2017 12:04 am, Stathis Papaioannou wrote:
>>
>> On Sat, 12 Aug 2017 at 4:52 pm, Bruce Kellett <
>> <bhkell...@optusnet.com.au>bhkell...@optusnet.com.au> wrote:
>>
>>> On 12/08/2017 1:42 pm, Stathis Papaioannou wrote:
>>>
>>>
>>> First person experience is individual and private. The third person
>>> point of view is the view of an external observer. Suppose person A is
>>> observed laughing by person B. The behaviour - the laughing - can be
>>> observed by anyone; this is the third person point of view. Person A might
>>> be experiencing happiness or amusement; this is the first person point of
>>> view and only person A himself has it. Finally, person B has visual and
>>> auditory experiences and knowledge of the outside world (there are laughing
>>> entities in it), and this is again from the first person point of view. I
>>> would say that knowledge is a type of experience, and therefore always
>>> first person and private; information is that which is third person
>>> communicable. But perhaps this last point is a matter of semantics.
>>>
>>>
>>> If your knowledge is gained from someone else, it is necessarily
>>> communicable information, and thus third person. First person is your
>>> personal experience, which is not communicable. However, knowledge gained
>>> by experience is communicable, and thus third person. Otherwise, all that
>>> you say above is mere logic chopping.
>>>
>>
>> Most first person experiences are based on third person information,
>> namely sensory data.
>>
>>
>> How is sensory data 'third person information'? That would make
>> everything 3p, and you have eliminated the first person POV. If I
>> experience the pleasure of sitting in the sun on a fine spring morning,
>> that is surely a first person experience, and entirely sensory in origin.
>>
>> Even a priori knowledge, such as mathematical knowledge, starts with
>> learning about the subject from outside sources.
>>
>> Returning to the point, why were you claiming that the subject on a
>> duplication experiment cannot have first person knowledge of duplication?
>> That would mean no-one could ever have first person knowledge of anything.
>>
>>
>> If you go into the duplicating machine without being told what it is,
>> then you are duplicated and come out in Moscow, you will know that you have
>> been transported from Helsinki, but how can you know anything about any
>> duplicates? As far as you know -- not knowing the protocol -- you could
>> simply have been rendered unconscious and flown to Moscow. How does 1p
>> experience tell the difference?
>>
>> This is why I think some 3p is being mixed in with 1p experiences in this
>> duplication protocol. The subject only knows the protocol by being told
>> about it. How does he know he is not being lied to?
>>
>
> This is the case with any experience whatsoever: you come to a conclusion
> about what has happened based on your observations and deductions, but you
> could be mistaken.
>
>
> That would appear to put a large hole in Bruno's distinction between
> quanta and qualia. The sensation of the sun on my face is veridical -- I
> might be mistaken about it being the sun, but the sensation is
> incontrovertible. But things that I am told about are in a different
> category -- I have no immediate incontrovertible experience associated with
> them. I am aware of words being spoken, but I am not immediately aware of
> their veracity.
>

You feel the Sun on your face, see the Sun in the sky and make deductions
about a hot, bright object in space. It is an analogous process when you
hear human speech and come to conclusions about the world.

-- 
Stathis Papaioannou



Re: A profound lack of profundity

2017-08-12 Thread Stathis Papaioannou
On 13 August 2017 at 08:48, Bruce Kellett <bhkell...@optusnet.com.au> wrote:

> On 13/08/2017 12:04 am, Stathis Papaioannou wrote:
>
> On Sat, 12 Aug 2017 at 4:52 pm, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> On 12/08/2017 1:42 pm, Stathis Papaioannou wrote:
>>
>>
>> First person experience is individual and private. The third person point
>> of view is the view of an external observer. Suppose person A is observed
>> laughing by person B. The behaviour - the laughing - can be observed by
>> anyone; this is the third person point of view. Person A might be
>> experiencing happiness or amusement; this is the first person point of view
>> and only person A himself has it. Finally, person B has visual and auditory
>> experiences and knowledge of the outside world (there are laughing entities
>> in it), and this is again from the first person point of view. I would say
>> that knowledge is a type of experience, and therefore always first person
>> and private; information is that which is third person communicable. But
>> perhaps this last point is a matter of semantics.
>>
>>
>> If your knowledge is gained from someone else, it is necessarily
>> communicable information, and thus third person. First person is your
>> personal experience, which is not communicable. However, knowledge gained
>> by experience is communicable, and thus third person. Otherwise, all that
>> you say above is mere logic chopping.
>>
>
> Most first person experiences are based on third person information,
> namely sensory data.
>
>
> How is sensory data 'third person information'? That would make everything
> 3p, and you have eliminated the first person POV. If I experience the
> pleasure of sitting in the sun on a fine spring morning, that is surely a
> first person experience, and entirely sensory in origin.
>
> Even a priori knowledge, such as mathematical knowledge, starts with
> learning about the subject from outside sources.
>
> Returning to the point, why were you claiming that the subject on a
> duplication experiment cannot have first person knowledge of duplication?
> That would mean no-one could ever have first person knowledge of anything.
>
>
> If you go into the duplicating machine without being told what it is, then
> you are duplicated and come out in Moscow, you will know that you have been
> transported from Helsinki, but how can you know anything about any
> duplicates? As far as you know -- not knowing the protocol -- you could
> simply have been rendered unconscious and flown to Moscow. How does 1p
> experience tell the difference?
>
> This is why I think some 3p is being mixed in with 1p experiences in this
> duplication protocol. The subject only knows the protocol by being told
> about it. How does he know he is not being lied to?
>

This is the case with any experience whatsoever: you come to a conclusion
about what has happened based on your observations and deductions, but you
could be mistaken.


-- 
Stathis Papaioannou



  1   2   3   4   5   6   7   8   9   10   >