Re: UD* and consciousness

2012-02-23 Thread meekerdb

On 2/23/2012 6:00 PM, Terren Suydam wrote:

On Thu, Feb 23, 2012 at 7:21 PM, meekerdb  wrote:

On 2/23/2012 2:49 PM, Terren Suydam wrote:

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, "it has to add up to normal". On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?


There will be legal and ethical questions about how we and machines should
treat one another. Just being conscious won't mean much though.  As Jeremy
Bentham said of animals, "It's not whether they can think, it's whether they
can suffer."

Brent

That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.

For instance, pain I can make sense of in terms of what it feels like
for a being's structure to become "less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea that pleasure might be
what it feels like for one's structure to become "more organized"
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.


I don't think becoming more or less organized has any direct bearing on pain or pleasure. 
Physical pain and pleasure are reactions built-in by evolution for survival benefits. If a 
fire makes you too hot, you move away from it, even though it's not "disorganizing" you.  
On the other hand, cancer is generally painless in its early stages.  And psychological 
suffering can be very bad without any physical damage.  I don't think suffering requires 
consciousness, at least not human-like consciousness, but psychological suffering might 
require consciousness in the form of self-reflection.


Brent




Terren



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Yes Doctor circularity

2012-02-23 Thread Pierz
Let us suppose you're right and... but hold on! We can't do that. That
would be "circular". That would be sneaking in the assumption that
you're right from the outset. That would be "shifty", "fishy", etc.,
etc. You just don't seem to grasp the rudiments of philosophical
reasoning. 'Yes doctor' is not an underhand move. It asks you up-front
to assume that comp is true in order then to examine the implications
of that, whilst acknowledging (by calling it a 'bet') that this is
just a hypothesis, an unprovable leap of faith. You complain that
using the term 'bet' assumes non-comp (I suppose because computers
can't bet, or care about their bets), but that is just daft. You might
as well argue that the UDA is invalid because it is couched in natural
language, which no computer can (or according to you, could ever)
understand. If we accepted such arguments, we'd be incapable of
debating comp at all.

Saying 'no' to the doctor is anyone's right - nobody forces you to
accept that first step or tries to pull the wool over your eyes if you
choose to say 'yes'. Having said no you can then either say "I don't
believe in comp because (I just don't like it, it doesn't feel right,
it's against my religion etc)" or you can present a rational argument
against it. That is to say, if asked to justify why you say no, you
can either provide no reason and say simply that you choose to bet
against it - which is OK but uninteresting - or you can present some
reasoning which attempts to refute comp. You've made many such
attempts, though to be honest all I've ever really been able to glean
from your arguments is a sort of impressionistic revulsion at the idea
of humans being computers, yet one which seems founded in a
fundamental misunderstanding about what a computer is. You repeatedly
mistake the mathematical construct for the concrete, known object you
use to type up your posts. This has been pointed out many times, but
you still make arguments like that thing about one's closed eyes being
unlike a switched-off screen, which verged on ludicrous.

I should say I'm no comp proponent, as my previous posts should
attest. I'm agnostic on the subject, but at least I understand it.
Your posts can make exasperating reading.


On Feb 24, 8:14 am, Craig Weinberg  wrote:
> On Feb 23, 3:25 pm, 1Z  wrote:
>
> > On Feb 22, 7:42 am, Craig Weinberg  wrote:
>
> > > Has someone already mentioned this?
>
> > > I woke up in the middle of the night with this, so it might not make
> > > sense...or...
>
> > > The idea of saying yes to the doctor presumes that we, in the thought
> > > experiment, bring to the thought experiment universe:
>
> > > 1. our sense of own significance (we have to be able to care about
> > > ourselves and our fate in the first place)
>
> > I can't see why you would think that is incompatible with CTM
>
> It is not posed as a question of 'Do you believe that CTM includes X',
> but rather, 'using X, do you believe that there is any reason to doubt
> that Y(X) is X.'
>
>
>
> > > 2. our perceptual capacity to jump to conclusions without logic (we
> > > have to be able to feel what it seems like rather than know what it
> > > simply is.)
>
> > Whereas that seems to be based on a mistake. It might be
> > that our conclusions ARE based on logic, just logic that
> > we are consciously unaware of.
>
> That's a good point but it could just as easily be based on
> subconscious idiopathic preferences. The patterns of human beings in
> guessing and betting vary from person to person whereas one of the
> hallmarks of computation is to get the same results. By default,
> everything that a computer does is mechanistic. We have to go out of
> our way to generate sophisticated algorithms to emulate naturalistic
> human patterns. Human development proves just the contrary. We start
> out wild and willful and become more mechanistic through
> domestication.
>
> > Alternatively, they might
> > just be illogical...even if we are computers. It is a subtle
> > fallacy to say that computers run on logic: they run on rules.
>
> Yes! This is why they have a trivial intelligence and no true
> understanding. Rule followers are dumb. Logic is a form of
> intelligence which we use to write these rules that write more rules.
> The more rules you have, the better the machine, but no amount of
> rules make the machine more (or less) logical. Humans vary widely in
> their preference for logic, emotion, pragmatism, leadership, etc.
> Computers don't vary at all in their approach. It is all the same rule
> follower only with different rules.
>
> > They have no guarantee to be rational. If the rules are
> > wrong, you have bugs. Humans are known to have
> > any number of cognitive bugs. The "jumping" thing
> > could be implemented by real or pseudo randomness, too.
>
> > > Because of 1, it is assumed that the thought experiment universe
> > > includes the subjective experience of personal value - that the
> > > patient has a stake, or 'money to bet'.
>
> > What's the p

Re: UD* and consciousness

2012-02-23 Thread Terren Suydam
On Thu, Feb 23, 2012 at 7:21 PM, meekerdb  wrote:
> On 2/23/2012 2:49 PM, Terren Suydam wrote:
>
> As wild or counter-intuitive as it may be though, it really has no
> consequences to speak of in the ordinary, mundane living of life. To
> paraphrase Eliezer Yudkowsky, "it has to add up to normal". On the
> other hand, once AGIs start to appear, or we begin to merge more
> explicitly with machines, then the theories become more important.
> Perhaps then comp will be made illegal, so as to constrain freedoms
> given to machines.  I could certainly see there being significant
> resistance to humans augmenting their brains with computers... maybe
> that would be illegal too, in the interest of control or keeping a
> level playing field. Is that what you mean?
>
>
> There will be legal and ethical questions about how we and machines should
> treat one another. Just being conscious won't mean much though.  As Jeremy
> Bentham said of animals, "It's not whether they can think, it's whether they
> can suffer."
>
> Brent

That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.

For instance, pain I can make sense of in terms of what it feels like
for a being's structure to become "less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea that pleasure might be
what it feels like for one's structure to become "more organized"
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.

Terren




Re: UD* and consciousness

2012-02-23 Thread meekerdb

On 2/23/2012 2:49 PM, Terren Suydam wrote:

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, "it has to add up to normal". On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?


There will be legal and ethical questions about how we and machines should treat one 
another. Just being conscious won't mean much though.  As Jeremy Bentham said of animals, 
"It's not whether they can think, it's whether they can suffer."


Brent




Re: UD* and consciousness

2012-02-23 Thread Terren Suydam
On Thu, Feb 23, 2012 at 4:12 AM, Bruno Marchal  wrote:
>
> On 22 Feb 2012, at 23:07, Terren Suydam wrote:
> Here was the "aha!" moment. I get it now. Thanks to you and Quentin.
> Even though I am well aware of the consequences of MGA, I was focusing
> on the "physical activity" of the simulation because "I" was running
> it.
>
>
> Yes, that's why reasoning and logic are important. It is understandable that
> evolution could not have prepared us for the possibly true 'big picture', nor
> for fundamental science, nor for quickly developing technologies. So it
> needs some effort to abstract ourselves from built-in prejudices. Nature, a bit
> like bandits, is opportunist. At the same time we don't have to brush away
> that intuition, because it is real, and it has succeeded in bringing us here
> and now, and that has to be respected somehow too.
> Note that the math confirms this misunderstanding between the
> heart/intuition/first-person/right-brain (modeled by Bp & p) and the
> scientist/reasoner/left-brain (modeled by Bp). The tension appears right at
> the start, when a self-aware substructure begins to differentiate itself from
> its neighborhood.
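[Editorial gloss: Bruno's "Bp & p" shorthand comes from provability logic. For readers unfamiliar with it, the distinction he leans on can be sketched as follows, assuming the standard Solovay/Boolos treatment that his argument presupposes:]

```latex
% Bp     : "p is provable" (Goedel's arithmetized provability predicate)
% Bp & p : "p is provable and p is true" (the Theaetetical 'knower')
%
% For a consistent machine the two notions obey different modal logics,
% even though every instance of one is in fact an instance of the other:
\begin{align*}
  \text{provability (3p, the scientist):} &\quad Bp \;\text{ obeys the logic } \mathrm{G}\\
  \text{knowledge (1p, the knower):}      &\quad Bp \wedge p \;\text{ obeys } \mathrm{S4Grz}
\end{align*}
% The machine cannot prove Bp -> p in general (Loeb's theorem), which is
% why the two "views" differ from the inside -- the tension Bruno describes.
```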
>
>
>
>
> The fascinating thing for me is, if instead of a scan of Mary, we run
> an AGI that embodies a cognitive architecture that satisfies a theory
> of consciousness (the kind of theory that explains why a particular UM
> is conscious) so that if we assume the theory, it entails that the AGI
> is conscious. The AGI will therefore have 1p indeterminacy even if the
> sim is deterministic, for the same reason Mary does, because there are
> an infinity of divergent computational paths that go through the AGI's
> 1p state in any given moment. Trippy!
>
>
> Yeah. "Trippy" is the word.
> Many people react to comp in a strikingly similar way to how numerous other
> people react to the very potent Salvia divinorum hallucinogen. People need
> a very sincere interest in the fundamentals to appreciate the comp
> consequences, or to appreciate a potent dissociative hallucinogen.
> I should not insist on this. Some would conclude we should make comp
> illegal. Like "thinking by oneself" is never appreciated in the neighborhood
> of those who want to think for the others, and control/manipulate them.

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, "it has to add up to normal". On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?

Terren

> This I disagree with (or don't understand) because if we acknowledge
> that as you said "even just one emulation can be said involving
> consciousness" then interacting with even a "single" Mary is an
> interaction with her "soul" in platonia. I think the admission of any
> zombie in any context (assuming comp) is a refutation of comp.
>
>
> You are right. That's why I prefer to say that comp entails non zombie. But
> let me give you a thought experience which *seems* to show that a notion of
> zombie looks possible with comp, and let us see what is wrong with that.
>
> Let us start from the beginning of MGA, or quite similar. You have a teacher
> doing a course in math (say). Then, by some weird event, his brain vanishes,
> but a cosmic explosion, by extreme luck, sends the correct information,
> with respect to that very particular math lesson, to the entry of the motor
> nerve interfaces to the muscles of the teacher, so that the lesson continues
> as normal. The students keep interrupting the teacher, asking questions,
> and everything is fine; the teacher provides the relevant answers (by luck).
> Is the teacher-without-brain a zombie? At first sight, it looks like one,
> even with comp. He behaves like a human, but the processing in the brain is
> just absent. He acts normal by pure chance, with a very small amount of
> peripheral interface brain activity. So what?
> Again, the solution is that the consciousness should not be attributed to
> the body activity, but to the teaching person and its logically real genuine
> computation (distributed in Platonia). The "concrete brain" just interfaces
> the person in a relative correct way, unlike the "absent brain + lucky
> cosmic ray", which still attaches it, in this experience, but by pure luck.
> In both cases, with "real brain" or "without a brain", the consciousness is
> attached to the computations, not to a particular implementation of them, which
> in the end is a construction of your mind, itself attached to an infinity of
> computations.
>
> We might say that the teacher was a zombie, because h

Re: The free will function

2012-02-23 Thread Craig Weinberg
On Feb 23, 3:57 pm, 1Z  wrote:
> On Feb 23, 7:43 pm, Craig Weinberg  wrote:
>
>
>
>
>
>
>
>
>
> > On Feb 23, 11:18 am, 1Z  wrote:
>
> > > > > >    > > > > > Why would Gods be supernatural?
>
> > > > > >    > > > > Why would bachelors be married?
>
> > > > > > This is your argument, not mine. My whole point is that God becomes
> > > > > > natural, and inevitable under MWI + Comp.
>
> > > > > My point is that that argument requires the meaning of "god" to
> > > > > change, and, since language is public, you don't get to change it
> > > > > unilaterally.
>
> > > > It changes a little every time you use it.
>
> > > There's an important difference between "it changes" and "I am going
> > > to change it".
>
> > Not for me.
>
> Then you are wrong.

No, I'm just in control of my own expression. I don't need permission
to alter it.

>
> > > >That's how words work.
>
> > > That is one side of the picture. Shared meaning is the other.
>
> > That's what I'm saying, meaning is shared in between the lines. It
> > doesn't rely on adhering to linguistic conventions strictly.
>
> "between the lines" is a vague, meaningless metaphor.

Not at all. It is a tremendously successful and ubiquitous metaphor.
It's in no way vague. It specifies precisely that communication is
carried by the figurative gaps between words, not merely by the lines
on the page. You have to connect the dots, figure it out, get to the
point, see what they mean, etc.

>
> Common meaning. OTOH, literally is adhering to linguistic convention.
>
> > > > > I don't know how to get accross to you that it is about WHAT THE
> > > > > WORD GOD MEANS.
>
> > > > I don't argue about what words mean.
>
> > > No: you don't pay attention to the issue and so
> > > end up miscommunicating and talking past people.
>
> > That happens with some people and not with others.
>
> Who have you succeeded in explaining yourself to?

I get almost entirely positive feedback from my blogs. It's only here
and places like this where people complain.

>
> > Different ways of
> > thinking use words differently. I'm never trying to talk past people,
>
> I didn't suggest it was literal.

You mean intentional? See - I was able to read in between the lines
and see what you meant. Without a dictionary.

>
> > > > > > We can invent as many words for it as we want, but none will be any
> > > > > > more or less appropriate than God.
>
> > > > > Says who?
>
> > > > Who doesn't say?
>
> > > Me.
>
> > Why though?
>
> Because God has implications about who created the whole Shebang, and
> not

Maybe to you. I didn't grow up in a religious family. 'God' has always
been a creepy pyramid scheme to me. Besides, the Matrix Lord is the
creator of simworld.

> just about which fallible entity is able to lord it over even more
> fallible
> ones in the next layer down. "Matrix Lord" is fine though.

Craig




Re: The free will function

2012-02-23 Thread Craig Weinberg
On Feb 23, 3:51 pm, 1Z  wrote:
>
> > That's because you aren't taking the simulation seriously.
>
> Or because I am taking truth seriously.

Seriously and literally are two different things.
>
> > You are
> > thinking that because you know it's a simulation it means that the
> > observers within are subject to truths outside of the simulation
>
> I don't know what you mean by "subject to". They may well not
> be able to arrive at the actual facts beyond the simulation at all.

Which is why they can't call them actual facts. To them, the
simulation is the only source of facts. They do not exist outside of the
simulation.

> But that is an observation that *depends* on truth having a
> transcendent and objective nature. If truth is just what seems
> to you to be true, then they have the truth, as does every lunatic.

You could make a simulation where the simulation changes to fit the
delusions of a lunatic. You could even make them all lunatics and make
their consciousness completely solipsistic.

>
> . In
>
> > comp though, it's all simulation. The only truly universal truths are
> > arithmetic ones.
>
> That only arithmetic truth is truly true is not an arithmetic truth.
> But
> it is, as you put it, "universal".

Universal only means that it is maximally common, not that it is absolute
or cannot be changed. In some MWI universe I might be able to make a
simulation in which linguistic truth is fundamental instead and
arithmetic truth is not universal. It could be populated by parrots
who can't do math for shit but talk up a storm.
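[Editorial gloss: 1Z's remark above — that the claim "only arithmetic truth is truly true" is not itself an arithmetic truth — echoes Tarski's undefinability theorem. For context, and not as part of the original exchange, the theorem can be stated as:]

```latex
% Tarski (1933): arithmetical truth is not arithmetically definable.
% There is no formula True(x) in the language of arithmetic such that,
% for every arithmetic sentence phi (with Goedel code <phi>):
\begin{equation*}
  \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
  \;\leftrightarrow\; \varphi
\end{equation*}
% So any predicate expressing "true in N" must live in a richer
% language, outside arithmetic itself.
```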

>
> > > > > Same problem.
>
> > > > Same linguistic literalism.
>
> > > You say that like its a bad thing.
>
> > Not a bad thing, just an inappropriate thing for talking about fantasy
> > simulations.
>
> No. Fantasy can be expressed in literal language. In fact,
> it is better to do so, since the reader does not have to
> deal with the communicative double whammy of
> weird ideas expressed in a weird way.

Or it could lead to this  http://www.youtube.com/watch?v=HiMD12xKOig

>
> > Pipe fittings maybe, or legal analysis, but you are not
> > going to find the secrets of consciousness by pointing at a
> > dictionary.
>
> I recommend using publicly accessible language
> to enhance communication, not to discover new
> facts.

I would rather enhance the content of the communication than the form.

>
> > > > That has almost nothing to do with my argument. You are off in
> > > > dictionary land. The fact remains that comp, rather than disallowing
> > > > gods, makes it impossible to know if a Matrix Lord/Administrator has
> > > > control over aspects of your life.
>
> > > That is a fact, when expressed properly.
>
> > How would you express it?
>
> Not using the word "god"

Ohh, ok. My point in that though is to show how god is really no
different from an administrator of a simulation that you are part of,
and that such a simulation is inevitable under comp.

>
> > > > What traditional meaning does 'supernatural' have in Comp?
>
> > > Why assume it has a non-traditional one?
>
> > Because comp hasn't been around long enough to have traditions.
>
> That doesn't answer the question. You are proceeding as if the meaning
> of
> a word *always* changes in different contexts.

It does. It even changes within the same context, since
no two contexts are really completely the same. Meaning isn't an
object. It has no fixed structure, it is figurative.

>
> > > Because we can communicate if we stick to accepted meanings,
> > > and communiction breaks down if you have a free hand to use
> > > invented meanings.
>
> > Just the opposite. Communication breaks down if you tie my hands to
> > express new ideas in their native terms. Should discussions about
> > early automotive horsepower been limited to literal horses?
>
> That's a poor example. Horsepower is literally the power of one
> horse.

When you get done shoving 200 of them into a Honda, let me know so I
can see what literal horses look like. There is no such thing as the
literal power of a horse. It is a second-order logic - a figure which
we use to represent a non-literal constellation of physical measures.

>
> > > > Your argument now seems to be a word definition argument.
>
> > > You say that like its a bad thing.
>
> > Not a bad thing, just not my thing. I don't do word definitions. I
> > don't believe in them.
>
> Have you never seen a dictionary?

I believe in dictionaries, but not definitions. I believe in movie
critics but I don't believe that their opinions about movies are
objectively true. I might agree with them, but that doesn't mean that
it is possible for an opinion to be authoritatively definitive.

>
> > > Yes It is true that a game is being played, not just true-for-the-
> > > layers.
> > > Likewise, the simulation hypothesis requires simulation
> > > to be actually true and not just true-for.
>
> > That was not my question. I asked if I score a point in a game, is
> > that the truth that I scored a point.
>
> It;s tr

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg 

> On Feb 23, 12:57 pm, Quentin Anciaux  wrote:
> > 2012/2/23 Craig Weinberg 
>
> >
> > > > > Comp has no ability to contradict itself,
> >
> > > > You say so.
> >
> > > Is it not true?
> >
> > no it is not true.. for example, proving consciousness cannot be emulated
> on
> > machines would prove computationalism wrong.
>
> Consciousness isn't falsifiable in the first place.
>

And you know that how? Because you said if a machine acted in every way as
a human being you would still say it is not conscious... but you couldn't say
that if consciousness weren't falsifiable. How could you know a disproof
before knowing it? You asked how to falsify computationalism; showing that
consciousness cannot be emulated on machines is enough, whatever the proof
is.

>
> > Showing that infinite
> > components are necessary for consciousness would prove computationalism wrong,
>
> Also not falsifiable.


It is, just show a component of human consciousness which *cannot* be
described in finite terms, and is necessary for consciousness.


> I can't prove that you are conscious or that you
> don't require infinite components.
>
> > showing that a biological neurons is necessary for consciousness would
> > prove computationalism wrong... and so on.
>
> Not possible to prove,


Why wouldn't it be possible to prove... Prove that it's not possible to
prove first...


> but possible to nearly disprove if you walk
> yourself off of your brain onto a digital brain and back on.
>
> Craig
>


-- 
All those moments will be lost in time, like tears in rain.




Re: Support for Panexperientialism

2012-02-23 Thread Craig Weinberg
On Feb 23, 4:00 pm, 1Z  wrote:

>
> > He isn't saying it's special, he is asking why should we think that
> > consciousness arises as some exceptional phenomenon in the universe.
>
> Every phenomenon is exceptional.

Not in the sense that they are disconnected from all other processes
of the universe.

>
>
>
>
>
>
>
>
>
> > Why is such an 'arising' assumed?
>
> > > >and how we can assume that it isn't
> > > > universal in some sense if we can't point to what that might be.
>
> > > > > But it isn't at all obvious that
> > > > > "we don't understand consc" should imply panexperientialism rather
> > > > > than dualism or physicalism or a dozen other options. Almost
> > > > > all the philosophy of mind starts with "we don't understand consc"
>
> > > > That's not what he is saying. His point is that what we do understand
> > > > about physics makes it obvious that consc cannot be understood as some
> > > > special case that is disconnected from the rest of the universe.
>
> > > That isn't obvious, since there are plenty of physicalists about
> > > consc.
> > > around. And he didn't mention physics.
>
> > He didn't mention physics but when he talks about 'disconnects' he is
> > referring to any theoretical discontinuity between consc and the
> > natural universe.
>
> Whatever. It might be better to take as your text the writings of a
> notable
> panpsychist (Whitehead, Strawson, Chalmers), rather than
> Random Internet
> Dude.
>

I was mainly posting the quotes. I was not expecting to have to defend
the casual Tumblr comments of Random Internet Dude. Not that they
aren't decent. I find them generally agreeable.

> > > > > >And don’t get me started
> > > > > > on the nonsense superstition of “emergent properties” — show me one
> > > > > > “emergent property” that is independent of the conscious observer
> > > > > > coming to the conclusion it is emergent.
>
> > > > > The problem with emergence is that is defined so many ways.
> > > > > For some values of "emergent", emergent properties are
> > > > > trivially demonstrable.
>
> > > > Demonstrable = compels the conclusion that it is emergent to a
> > > > conscious observer.
>
> > > Epistemologically dependent =/= ontologically dependent.
>
> > The ontology of emergence is epistemological.
>
> Says who?

What does emergence mean? It means it's something that *we* don't
expect to see based on *what we think that we know* about the
underlying causes and conditions of the emergence. Emergence has no
ontology, that's the point, it is not a chemical reaction that
transforms one thing or another, it is our perception alone that
compels us to consider it one thing rather than a microcosm of related
things. In 'reality' there is no eye of the
hurricane; it's just an emergent property of the meteorology.

Craig




Re: Support for Panexperientialism

2012-02-23 Thread John Mikes
Dear Craig,
 my first step was to join Quora, but it asked for my password, which I declined
to disclose to Facebook and other 'social' networks as well (staying
private).
In the quoted excerpt were wise thoughts (time-scale etc.) but it did not
address my main point: whatever we THINK about that 'thing' (rather: about
that process) Ccness stems from within our past human experience (maybe
replenished to the present level). In my agnostic views there is more to
such a universal relation than whatever we CAN know as of today.
So with a proper definition: a rock, or an idea, can have Ccness, once we
imagine as 'rock' something more reliable than our ever-growing partial
knowledge about the 'world' (beyond the model formed from our already
achieved informational explanations).
Then I subscribe to the 'obviously'.

Those (and other) genius physicists quoted in your post owe us an
explanation of how to connect the partial human explanatory thoughts to the
working technology based on the same. Although IMO our technology is ALMOST
good, there are surprising mishaps occurring in all fields we have.

So how would you connect the "rock" with "Ccness"? your examples (e.g.
magnetism etc.) are also physically imagined phenomena, measured by
instruments constructed for measuring such imagined phenomena.

After the Tibetan wisdom (matter is derived from mind) you wrote:

*   On this, I think Bruno, Stephen, and I agree. Where I disagree with
   comp is that I see the stuff of the mind as not just numberstuff, but
   sense. *
Then you postulate
 * How I think it works is through a multisense realism*
**
and we try to 'realize' - "A" - realism (ONE sense) over historical
fantasies.
A multi-sense symmetry is beyond us; even a detailed Hilbert space
explanation is more than the average mind can follow. I accept the "I
dunno", but I cannot accept hints at how it 'might' (or should) be, since we
don't know a better way. I wonder about your 'multisense realism'. Bruno
applies his arithmetical realism, others their faith-based one, but ONE.
Nobody is schizophrenic enough to think in multiple realisms. So I deem your
postulate a wishful idea without practical content for us humans, today.

Your text is beautifully written, in a style out of this world. I am not up
to it.
I believe there is much more to the "world" than our capabilities of today
may cover or absorb. So I turn humble and agnostic (better than ignorant).

JM

On Wed, Feb 22, 2012 at 8:10 AM, Craig Weinberg wrote:

> Could a rock have consciousness? Good answer from someone on Quora:
> http://www.quora.com/Could-a-rock-have-consciousness
>
> "Yes, obviously.
>
>Why obviously?
>
>Well, first of all, where is the “disconnect” and what is it made
> of? Specifically, the disconnect that must occur if some parts of
> reality are “conscious” while others aren’t. And don’t get me started
> on the nonsense superstition of “emergent properties” — show me one
> “emergent property” that is independent of the conscious observer
> coming to the conclusion it is emergent.
>
>Secondly, as physicists are now starting to realize (or realise if
> you’re English/Australian):
>
>Let’s start with Prof. Freeman Dyson:
>
>“Quantum mechanics makes matter even in the smallest pieces into
> an
>active agent, and I think that is something very fundamental.
> Every
>particle in the universe is an active agent making choices between
>random processes.”2
>
>“…consciousness is not just a passive epiphenomenon carried along
> by the chemical events in our brains, but is an active agent forcing
> the molecular complexes to make choices between one quantum state and
> another. In
>other words, mind is already inherent in every electron.”3
>
>Physicist Sir Arthur Eddington
>
>“Physics is the study of the structure of consciousness. The
> “stuff” of the world is mindstuff.”
>
>and
>
>“It is difficult for the matter-of-fact physicist to accept the
> view that the substratum of everything is of mental character.”
>
>Physicist Prof. Richard Conn Henry
>
>“In what is known as a “Renninger type experiment,” the wave
> function is collapsed simply by a human mind seeing nothing. No
> irreversible act of amplification involving the photon has taken place—
> yet the decision is irreversibly made. The universe is entirely
> mental.”
>
>Prof. Amit Goswami
>
>“we have a new integrative paradigm of science, based not on
> the primacy of matter as the old science, but on the primacy of
> consciousness. Consciousness is the ground of all being…”1
>
>
>Then of course, we have been reminded by sages throughout history
> of this basic element:
>
>All phenomena are projections in the mind.
>—The Third Karmapa
>
>Matter is derived from mind, not mind from matter.
>—The Tibetan Book of the Great Liberation
>
>The list goes on."
>
> On this, I think Bruno, Stephen, and I agree. Where I disagree with
> comp is that I see the stuff of

Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 3:25 pm, 1Z  wrote:
> On Feb 22, 7:42 am, Craig Weinberg  wrote:
>
> > Has someone already mentioned this?
>
> > I woke up in the middle of the night with this, so it might not make
> > sense...or...
>
> > The idea of saying yes to the doctor presumes that we, in the thought
> > experiment, bring to the thought experiment universe:
>
> > 1. our sense of own significance (we have to be able to care about
> > ourselves and our fate in the first place)
>
> I can't see why you would think that is incompatible with CTM

It is not posed as a question of 'Do you believe that CTM includes X',
but rather, 'using X, do you believe that there is any reason to doubt
that Y(X) is X.'

>
> > 2. our perceptual capacity to jump to conclusions without logic (we
> > have to be able to feel what it seems like rather than know what it
> > simply is.)
>
> Whereas that seems to be based on a mistake. It might be
> that our conclusions ARE based on logic, just logic that
> we are consciously unaware of.

That's a good point but it could just as easily be based on
subconscious idiopathic preferences. The patterns of human beings in
guessing and betting vary from person to person whereas one of the
hallmarks of computation is to get the same results. By default,
everything that a computer does is mechanistic. We have to go out of
our way to generate sophisticated algorithms to emulate naturalistic
human patterns. Human development proves just the contrary. We start
out wild and willful and become more mechanistic through
domestication.

> Alternatively, they might
> just be illogical...even if we are computers. It is a subtle
> fallacy to say that computers run on logic: they run on rules.

Yes! This is why they have a trivial intelligence and no true
understanding. Rule followers are dumb. Logic is a form of
intelligence which we use to write these rules that write more rules.
The more rules you have, the better the machine, but no amount of
rules make the machine more (or less) logical. Humans vary widely in
their preference for logic, emotion, pragmatism, leadership, etc.
Computers don't vary at all in their approach. It is all the same rule
follower only with different rules.

> They have no guarantee to be rational. If the rules are
> wrong, you have bugs. Humans are known to have
> any number of cognitive bugs. The "jumping" thing
> could be implemented by real or pseudo randomness, too.
>
> > Because of 1, it is assumed that the thought experiment universe
> > includes the subjective experience of personal value - that the
> > patient has a stake, or 'money to bet'.
>
> What's the problem ? the experience (quale) or the value?

The significance of the quale.

> Do you know the value to be real?

I know it to be subjective.

> Do you think a computer
> could not be deluded about value?

I think a computer can't be anything but turned off and on.

>
> > Because of 2, it is assumed
> > that libertarian free will exists in the scenario
>
> I don't see that FW of a specifically libertarian sort is posited
> in the scenario. It just assumes you can make a choice in
> some sense.

It assumes that choice is up to you and not determined by
computations.

Craig




Re: Support for Panexperientialism

2012-02-23 Thread 1Z


On Feb 23, 8:27 pm, Craig Weinberg  wrote:
> On Feb 23, 2:45 pm, 1Z  wrote:
>
>
>
>
>
>
>
>
>
>
>
> > > > >     Well, first of all, where is the “disconnect” and what is it made
> > > > > of? Specifically, the disconnect that must occur if some parts of
> > > > > reality are “conscious” while others aren’t.
>
> > > > Disconnects exist. Some things are magnetic and others not. And many
> > > > other examples. What's special about consc? That we don't understand
> > > > where or what the disconnect is?
>
> > > It doesn't make consc special, but he is saying it makes it universal.
> > > Magnetism is a good example. In our naive perception it seems to us
> > > that some things are magnetic and others not, but we know that
> > > actually all atoms have electromagnetic properties.
>
> > That's a bit misleading. No atom has ferromagnetic properties;
> > such properties can only exist in bulk (they are "emergent" in one
> > of the umpteen senses of the word). The electromagnetic properties
> > of atoms are more akin to panPROTOexperientialism.
>
> http://en.wikipedia.org/wiki/Single-molecule_magnet
>
> I think it's a bit misleading to distinguish ferromagnetism from
> electromagnetism. I wouldn't even call it emergent, it's more of a
> special case. Just as human consciousness is a special case of
> awareness. I'm ok with panprotoexperientialism though. We can't really
> know one way or another at what point the proto is dropped and have no
> particular reason to assume that there is no experience that
> corresponds to atoms, so it seems safer to assume that that awareness
> is 100% primitive instead of 100-x% primitive arbitrarily.
>
>
>
> > > He is asking what
> > > thing could make consc special
>
> > What special? He doesn't have any evidence
> > that consc is special beyond our inability to understand
> > it in material terms.
>
> He isn't saying it's special, he is asking why should we think that
> consciousness arises as some exceptional phenomenon in the universe.

Every phenomenon is exceptional.

> Why is such an 'arising' assumed?
>
>
>
>
>
>
>
>
>
>
>
> > >and how we can assume that it isn't
> > > universal in some sense if we can't point to what that might be.
>
> > > > But it isn't at all obvious that
> > > > "we don't understand consc" should imply panexperientialism rather
> > > > than dualism or physicalism or a dozen other options. Almost
> > > > all the philosophy of mind starts with "we don't understand consc"
>
> > > That's not what he is saying. His point is that what we do understand
> > > about physics makes it obvious that consc cannot be understood as some
> > > special case that is disconnected from the rest of the universe.
>
> > That isn't obvious, since there are plenty of physicalists about
> > consc around. And he didn't mention physics.
>
> He didn't mention physics but when he talks about 'disconnects' he is
> referring to any theoretical discontinuity between consc and the
> natural universe.

Whatever. It might be better to take as your text the writings of a
notable panpsychist (Whitehead, Strawson, Chalmers), rather than
Random Internet Dude.

> > > > >And don’t get me started
> > > > > on the nonsense superstition of “emergent properties” — show me one
> > > > > “emergent property” that is independent of the conscious observer
> > > > > coming to the conclusion it is emergent.
>
> > > > The problem with emergence is that it is defined in so many ways.
> > > > For some values of "emergent", emergent properties are
> > > > trivially demonstrable.
>
> > > Demonstrable = compels the conclusion that it is emergent to a
> > > conscious observer.
>
> > Epistemologically dependent =/= ontologically dependent.
>
> The ontology of emergence is epistemological.

Says who?




Re: The free will function

2012-02-23 Thread 1Z


On Feb 23, 7:43 pm, Craig Weinberg  wrote:
> On Feb 23, 11:18 am, 1Z  wrote:
>

> > > > >    > > > > > Why would Gods be supernatural?
>
> > > > >    > > > > Why would bachelors be married?
>
> > > > > This is your argument, not mine. My whole point is that God becomes
> > > > > natural, and inevitable under MWI + Comp.
>
> > > > My point is that that argument requires the meaning of "god" to
> > > > change, and, since language us public, you don't get to change it
> > > > unilaterally.
>
> > > It changes a little every time you use it.
>
> > There's an important difference between "it changes" and "I am going
> > to change it".
>
> Not for me.

Then you are wrong.

> > >That's how words work.
>
> > That is one side of the picture. Shared meaning is the other.
>
> That's what I'm saying, meaning is shared in between the lines. It
> doesn't rely on adhering to linguistic conventions strictly.

"between the lines" is a vague, meaningless metaphor.

Common meaning. OTOH, being literal is adhering to linguistic convention.

> > > > I don't know how to get accross to you that it is about WHAT THE
> > > > WORD GOD MEANS.
>
> > > I don't argue about what words mean.
>
> > No: you don;t pay attention to the issue and so
> > end up miscommunicating and talking past people.
>
> That happens with some people and not with others.

Who have you succeeded in explaining yourself to?

> Different ways of
> thinking use words differently. I'm never trying to talk past people,

I didn't suggest it was literal.


> > > > > We can invent as many words for it as we want, but none will be any
> > > > > more or less appropriate than God.
>
> > > > Says who?
>
> > > Who doesn't say?
>
> > Me.
>
> Why though?

Because God has implications about who created the whole Shebang, and
not just about which fallible entity is able to lord it over even more
fallible ones in the next layer down. "Matrix Lord" is fine though.




Re: The free will function

2012-02-23 Thread 1Z


On Feb 23, 7:43 pm, Craig Weinberg  wrote:
> On Feb 23, 11:18 am, 1Z  wrote:

> > > On Feb 21, 5:41 am, 1Z  wrote:
> > > > > You are conflating the levels (as Bruno always tells me). The
> > > > > simulation has no access to extra-simulatory information, it is a
> > > > > complete sub-universe. It's logic is the whole truth which the
> > > > > inhabitants can only believe in or disbelieve to the extent which the
> > > > > simulation allows them that capacity. If the programmer wants all of
> > > > > his avatars to believe with all their hearts that there is a cosmic
> > > > > muffin controlling their universe, she has only to set the cosmic
> > > > > muffin belief subroutine = true for all her subjects.
>
> > > > Read again. I didn't say no sim could have such-and-such
> > > > an opinion, I said it would not be true.
>
> > > Your standard of truth appears to exclude any simulated content.
>
> > No, my definition of truth just doesn't change to something
> > else when considering simulated contexts.
>
> That's because you aren't taking the simulation seriously.

Or because I am taking truth seriously.

> You are
> thinking that because you know it's a simulation it means that the
> observers within are subject to truths outside of the simulation

I don't know what you mean by "subject to". They may well not
be able to arrive at the actual facts beyond the simulation at all.
But that is an observation that *depends* on truth having a
transcendent and objective nature. If truth is just what seems
to you to be true, then they have the truth, as does every lunatic.

> In comp though, it's all simulation. The only truly universal truths are
> arithmetic ones.

That only arithmetic truth is truly true is not an arithmetic truth.
But it is, as you put it, "universal".


> > > > Same problem.
>
> > > Same linguistic literalism.
>
> > You say that like it's a bad thing.
>
> Not a bad thing, just an inappropriate thing for talking about fantasy
> simulations.


No. Fantasy can be expressed in literal language. In fact,
it is better to do so, since the reader does not have to
deal with the communicative double whammy of
weird ideas expressed in a weird way.

> Pipe fittings maybe, or legal analysis, but you are not
> going to find the secrets of consciousness by pointing at a
> dictionary.

I recommend using publicly accessible language
to enhance communication, not to discover new
facts.


> > > That has almost nothing to do with my argument. You are off in
> > > dictionary land. The fact remains that comp, rather than disallowing
> > > gods, makes it impossible to know if a Matrix Lord/Administrator has
> > > control over aspects of your life.
>
> > That is a fact, when expressed properly.
>
> How would you express it?

Not using the word "god"


> > > What traditional meaning does 'supernatural' have in Comp?
>
> > Why assume it has a non-traditional one?
>
> Because comp hasn't been around long enough to have traditions.


That doesn't answer the question. You are proceeding as if the meaning
of a word *always* changes in different contexts.

> > Because we can communicate if we stick to accepted meanings,
> > and communication breaks down if you have a free hand to use
> > invented meanings.
>
> Just the opposite. Communication breaks down if you tie my hands to
> express new ideas in their native terms. Should discussions about
> early automotive horsepower been limited to literal horses?

That's a poor example. Horsepower is literally the power of one
horse.


> > > Your argument now seems to be a word definition argument.
>
> > You say that like it's a bad thing.
>
> Not a bad thing, just not my thing. I don't do word definitions. I
> don't believe in them.

Have you never seen a dictionary?


> > Yes, it is true that a game is being played, not just
> > true-for-the-players.
> > Likewise, the simulation hypothesis requires simulation
> > to be actually true and not just true-for.
>
> That was not my question. I asked if I score a point in a game, is
> that the truth that I scored a point.

It's true outside the game as well. Whatever you are trying
to say, it is a poor analogy. You might try asking if you are
really the top hat in Monopoly, or Throngar the Invincible in
D&D.

> > You are aware they broadly support what I am saying, eg
> > "God is most often conceived of as the supernatural creator and
> > overseer of the universe. "--WP
>
> Since we are talking about simulations within a universe, the creator
> of that simulation is the overseer of the simulated universe and
> therefore 'supernatural' relative to the simulated beings in that
> universe. This is the crucial point you are overlooking.
But not actually supernatural at all, if he is a geek with BO and
dandruff. That is the point you are missing.



> > > I absolutely agree. I'm talking about how comp sees it.
>
> > Bruno;s comp.
>
> I think that all forms of comp consider the simulation independent
> from the specific hardware it runs on (

Th

Re: Support for Panexperientialism

2012-02-23 Thread Craig Weinberg
On Feb 23, 2:45 pm, 1Z  wrote:
>
> > > >     Well, first of all, where is the “disconnect” and what is it made
> > > > of? Specifically, the disconnect that must occur if some parts of
> > > > reality are “conscious” while others aren’t.
>
> > > Disconnects exist. Some things are magnetic and others not. And many
> > > other examples. What's special about consc? That we don't understand
> > > where or what the disconnect is?
>
> > It doesn't make consc special, but he is saying it makes it universal.
> > Magnetism is a good example. In our naive perception it seems to us
> > that some things are magnetic and others not, but we know that
> > actually all atoms have electromagnetic properties.
>
> That's a bit misleading. No atom has ferromagnetic properties;
> such properties can only exist in bulk (they are "emergent" in one
> of the umpteen senses of the word). The electromagnetic properties
> of atoms are more akin to panPROTOexperientialism.

http://en.wikipedia.org/wiki/Single-molecule_magnet

I think it's a bit misleading to distinguish ferromagnetism from
electromagnetism. I wouldn't even call it emergent, it's more of a
special case. Just as human consciousness is a special case of
awareness. I'm ok with panprotoexperientialism though. We can't really
know one way or another at what point the proto is dropped and have no
particular reason to assume that there is no experience that
corresponds to atoms, so it seems safer to assume that that awareness
is 100% primitive instead of 100-x% primitive arbitrarily.

>
> > He is asking what
> > thing could make consc special
>
> What special? He doesn't have any evidence
> that consc is special beyond our inability to understand
> it in material terms.

He isn't saying it's special, he is asking why should we think that
consciousness arises as some exceptional phenomenon in the universe.
Why is such an 'arising' assumed?

>
> >and how we can assume that it isn't
> > universal in some sense if we can't point to what that might be.
>
> > > But it isn't at all obvious that
> > > "we don't understand consc" should imply panexperientialism rather
> > > than dualism or physicalism or a dozen other options. Almost
> > > all the philosophy of mind starts with "we don't understand consc"
>
> > That's not what he is saying. His point is that what we do understand
> > about physics makes it obvious that consc cannot be understood as some
> > special case that is disconnected from the rest of the universe.
>
> That isn't obvious, since there are plenty of physicalists about
> consc around. And he didn't mention physics.

He didn't mention physics but when he talks about 'disconnects' he is
referring to any theoretical discontinuity between consc and the
natural universe.

>
> > > >And don’t get me started
> > > > on the nonsense superstition of “emergent properties” — show me one
> > > > “emergent property” that is independent of the conscious observer
> > > > coming to the conclusion it is emergent.
>
> > > The problem with emergence is that it is defined in so many ways.
> > > For some values of "emergent", emergent properties are
> > > trivially demonstrable.
>
> > Demonstrable = compels the conclusion that it is emergent to a
> > conscious observer.
>
> Epistemologically dependent =/= ontologically dependent.

The ontology of emergence is epistemological.

Craig




Re: Yes Doctor circularity

2012-02-23 Thread 1Z


On Feb 22, 7:42 am, Craig Weinberg  wrote:
> Has someone already mentioned this?
>
> I woke up in the middle of the night with this, so it might not make
> sense...or...
>
> The idea of saying yes to the doctor presumes that we, in the thought
> experiment, bring to the thought experiment universe:
>
> 1. our sense of own significance (we have to be able to care about
> ourselves and our fate in the first place)

I can't see why you would think that is incompatible with CTM

> 2. our perceptual capacity to jump to conclusions without logic (we
> have to be able to feel what it seems like rather than know what it
> simply is.)

Whereas that seems to be based on a mistake. It might be
that our conclusions ARE based on logic, just logic that
we are consciously unaware of. Alternatively, they might
just be illogical...even if we are computers. It is a subtle
fallacy to say that computers run on logic: they run on rules.
They have no guarantee to be rational. If the rules are
wrong, you have bugs. Humans are known to have
any number of cognitive bugs. The "jumping" thing
could be implemented by real or pseudo randomness, too.
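The rules-versus-logic point lends itself to a toy sketch (my illustration, not anything from the post; the rule names and the forward-chaining loop are invented for the example): a rule interpreter applies whatever rules it is given with perfect mechanical fidelity, so a wrong rule produces a confidently irrational conclusion rather than an error.

```python
# A minimal rule-following "reasoner": it applies rules mechanically,
# with no notion of whether the rules themselves are logical.
rules = {
    # A sound rule: every man is mortal.
    "all_men_mortal": lambda facts: facts | {"mortal"} if "man" in facts else facts,
    # A buggy rule (affirming the consequent): the machine applies it
    # just as faithfully as the correct one.
    "all_mortals_men": lambda facts: facts | {"man"} if "mortal" in facts else facts,
}

def run(facts, rules):
    """Forward-chain: apply every rule until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        before = set(facts)
        for rule in rules.values():
            facts = rule(facts)
        changed = facts != before
    return facts

print(sorted(run({"man"}, rules)))     # → ['man', 'mortal']  (sound inference)
print(sorted(run({"mortal"}, rules)))  # buggy rule "infers" 'man' from 'mortal'
```

Nothing in `run` checks whether a rule is valid; "rationality" lives entirely in the rule set, which is the sense in which bugs are mechanically executed rather than caught.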


> Because of 1, it is assumed that the thought experiment universe
> includes the subjective experience of personal value - that the
> patient has a stake, or 'money to bet'.

What's the problem ? the experience (quale) or the value?
Do you know the value to be real? Do you think a computer
could not be deluded about value?

> Because of 2, it is assumed
> that libertarian free will exists in the scenario

I don't see that FW of a specifically libertarian sort is posited
in the scenario. It just assumes you can make a choice in
some sense.




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 12:57 pm, Quentin Anciaux  wrote:
> 2012/2/23 Craig Weinberg 

>
> > > > Comp has no ability to contradict itself,
>
> > > You say so.
>
> > Is it not true?
>
> no it is not true... for example, proving consciousness cannot be emulated on
> machines would prove computationalism wrong.

Consciousness isn't falsifiable in the first place.

> Showing that infinite
> components are necessary for consciousness would prove computationalism wrong,

Also not falsifiable. I can't prove that you are conscious or that you
don't require infinite components.

> showing that biological neurons are necessary for consciousness would
> prove computationalism wrong... and so on.

Not possible to prove, but possible to nearly disprove if you walk
yourself off of your brain onto a digital brain and back on.

Craig




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 12:53 pm, Quentin Anciaux  wrote:
> 2012/2/23 Craig Weinberg 
>
>
>
>
>
>
>
>
>
> > On Feb 23, 9:26 am, Quentin Anciaux  wrote:
>
> > > > I understand that is how you think of it, but I am pointing out your
> > > > unconscious bias. You take consciousness for granted from the start.
>
> > > Because it is... I don't know/care for you, but I'm conscious... the
> > > existence of consciousness from my own POV, is not a discussion.
>
> > The whole thought experiment has to do specifically with testing the
> > existence of consciousness and POV. If we were being honest about the
> > scenario, we would rely only on known comp truths to arrive at the
> > answer. It's cheating to smuggle in human introspection in a test of
> > the nature of human introspection. Let us think only in terms of
> > 'true, doctor'. If comp is valid, there should be no difference
> > between 'true' and 'yes'.
>
> > > > It may seem innocent, but in this case what it does is preclude the
> > > > subjective thesis from being considered fundamental. It's a straw man
>
> > > Read what a straw man is... a straw man is taking the opponent's
> > > argument and deforming it to mean other things which are obvious
> > > to disprove.
>
> > >http://en.wikipedia.org/wiki/Straw_man
>
> > "a superficially similar yet unequivalent proposition (the "straw
> > man")"
>
> > I think that yes doctor makes a straw man of the non-comp position. It
> > argues that we have to choose whether or not we believe in comp, when
> > the non-comp position might be that with comp, we cannot choose to
> > believe in anything in the first place.
>
> > > > of the possibility of unconsciousness.
>
> > > > > >> If you've said yes, then this
> > > > > >> of course entails that you believe that 'free choice' and
> > 'personal
> > > > > >> value' (or the subjective experience of them) can be products of a
> > > > > >> computer program, so there's no contradiction.
>
> > > > > > Right, so why ask the question? Why not just ask 'do you believe a
> > > > > > computer program can be happy'?
>
> > > > > A machine could think (Strong AI thesis) does not entail comp (that
> > we
> > > > > are machine).
>
> > > > I understand that, but we are talking about comp. The thought
> > > > experiment focuses on the brain replacement, but the argument is
> > > > already lost in the initial conditions which presuppose the ability to
> > > > care or tell the difference and have free will to choose.
>
> > > But I have that ability and don't care to discuss it further. I'm
> > > conscious, I'm sorry you're not.
>
> > But you aren't in the thought experiment.
>
> > > > It's subtle,
> > > > but so is the question of consciousness. Nothing whatsoever can be
> > > > left unchallenged, including the capacity to leave something
> > > > unchallenged.
>
> > > > > The fact that a computer program can be happy does not logically
> > > > > entail that we are ourself computer program. may be angels and Gods
> > > > > (non machine) can be happy too. To sum up:
>
> > > > > COMP implies STRONG-AI
>
> > > > > but
>
> > > > > STRONG-AI does not imply COMP.
>
> > > > I understand, but Yes Doctor considers whether STRONG-AI is likely to
> > > > be functionally identical and fully interchangeable with human
> > > > consciousness. It may not say that we are machine, but it says that
> > > > machines can be us
>
> > > It says machines could be conscious as we are without us being machine.
>
> > > ==> strong ai.
>
> > That's what I said. That makes machines more flexible than organically
> > conscious beings. They can be machines or like us, but we can't fully
> > be machines so we are less than machines.
>
> Either we are machines or we are not... If machines can be conscious and
> we're not machines then we are *more* than machines... not less.

How do you figure? If we are A and not B, and machines are A and B,
how does that make us more?

>
>
>
> > > Comp says that we are machine, this entails strong-ai, because if we are
> > > machine, as we are conscious, then of course machine can be conscious...
> > > But if you knew machine could be conscious, that doesn't mean the humans
> > > would be machines... we could be more than that.
>
> > More than that in what way?
>
> We must contain infinite components if we are not emulable machines. So we
> are *more* than machines if machines can be conscious and we're not
> machines.

It only means we are different, not that we are more. If I am a doctor
but not a plumber and a machine is a doctor and a plumber then we are
both doctors. Just because I am not a plumber doesn't mean that I am
more than a doctor. If so, in what way?

Craig


Re: Support for Panexperientialism

2012-02-23 Thread 1Z


On Feb 23, 3:50 pm, Craig Weinberg  wrote:
> On Feb 23, 9:34 am, 1Z  wrote:
>
>
>
> > >     Well, first of all, where is the “disconnect” and what is it made
> > > of? Specifically, the disconnect that must occur if some parts of
> > > reality are “conscious” while others aren’t.
>
> > Disconnects exist. Some things are magnetic and others not. And many
> > other examples. What's special about consc? That we don't understand
> > where or what the disconnect is?
>
> It doesn't make consc special, but he is saying it makes it universal.
> Magnetism is a good example. In our naive perception it seems to us
> that some things are magnetic and others not, but we know that
> actually all atoms have electromagnetic properties.

That's a bit misleading. No atom has ferromagnetic properties;
such properties can only exist in bulk (they are "emergent" in one
of the umpteen senses of the word). The electromagnetic properties
of atoms are more akin to panPROTOexperientialism.

> He is asking what
> thing could make consc special

Special how? He doesn't have any evidence
that consc is special beyond our inability to understand
it in material terms.

>and how we can assume that it isn't
> universal in some sense if we can't point to what that might be.
>
> > But it isn't at all obvious that
> > "we don't understand consc" should imply panexperientialism rather
> > than dualism or physicalism or a dozen other options. Almost
> > all of the philosophy of mind starts with "we don't understand consc".
>
> That's not what he is saying. His point is that what we do understand
> about physics makes it obvious that consc cannot be understood as some
> special case that is disconnected from the rest of the universe.

That isn't obvious, since there are plenty of physicalists about
consc around. And he didn't mention physics.

> > >And don’t get me started
> > > on the nonsense superstition of “emergent properties” — show me one
> > > “emergent property” that is independent of the conscious observer
> > > coming to the conclusion it is emergent.
>
> > The problem with emergence is that it is defined in so many ways.
> > For some values of "emergent", emergent properties are
> > trivially demonstrable.
>
> Demonstrable = compels the conclusion that it is emergent to a
> conscious observer.

Epistemologically dependent =/= ontologically dependent.




Re: The free will function

2012-02-23 Thread Craig Weinberg
On Feb 23, 11:18 am, 1Z  wrote:
> On Feb 21, 10:41 pm, Craig Weinberg  wrote:
>
> > On Feb 21, 5:41 am, 1Z  wrote:
> > > > You are conflating the levels (as Bruno always tells me). The
> > > > simulation has no access to extra-simulatory information, it is a
> > > > complete sub-universe. Its logic is the whole truth which the
> > > > inhabitants can only believe in or disbelieve to the extent which the
> > > > simulation allows them that capacity. If the programmer wants all of
> > > > his avatars to believe with all their hearts that there is a cosmic
> > > > muffin controlling their universe, she has only to set the cosmic
> > > > muffin belief subroutine = true for all her subjects.
>
> > > Read again. I didn't say no sim could have such-and-such
> > > an opinion, I said it would not be true.
>
> > Your standard of truth appears to exclude any simulated content.
>
> No, my definition of truth just doesn't change to something
> else when considering simulated contexts.

That's because you aren't taking the simulation seriously. You are
thinking that because you know it's a simulation it means that the
observers within are subject to truths outside of the simulation. In
comp though, it's all simulation. The only truly universal truths are
arithmetic ones. Arithmetic doesn't care if it makes Gods or
Administrators.

>
> > > > Opinions can be right or wrong but the reality is that a programmer
> > > > has omnipotent power over the conditions within the program. She may
> > > > be a programmer, but she can make her simulation subjects think or
> > > > experience whatever she wants them to. She may think of herself as
> > > > their goddess, but she can appear to them as anything or nothing. Her
> > > > power over them remains true and factually real.
>
> > > Same problem.
>
> > Same linguistic literalism.
>
> You say that like it's a bad thing.

Not a bad thing, just an inappropriate thing for talking about fantasy
simulations. Pipe fittings maybe, or legal analysis, but you are not
going to find the secrets of consciousness by pointing at a
dictionary.

>
> > >, it may
> > > make false but plausible beliefs in gods likely, but
> > > it cannot make supernatural gods inevitable because
> > > all the ingredients in it are natural or artificial.
>
> > That has almost nothing to do with my argument. You are off in
> > dictionary land. The fact remains that comp, rather than disallowing
> > gods, makes it impossible to know if a Matrix Lord/Administrator has
> > control over aspects of your life.
>
> That is a fact, when expressed properly.

How would you express it?

>
> > > > > No, that is not at all an equivalent claim. There may
> > > > > be no extension of "magnetic monopole", but it is a meaningful
> > > > > concept.
>
> > > > Supernatural can be meaningful if you want it to be, but in comp all
> > > > it means is meta-programmatic or meta-simulation.
>
> > Says who? I don't have to accept that the meaning of "supernatural"
> > has to change to ensure that there are N>0 supernatural entities. I can
> > > stick to the traditional meaning, and regard it as unpopulated and
> > > extensionless.
>
> > What traditional meaning does 'supernatural' have in Comp?
>
> Why assume it has a non-traditional one?

Because comp hasn't been around long enough to have traditions.

>
> >Why do I
> > have to accept your linguistic preferences but you deny me the same
> > right?
>
> Because we can communicate if we stick to accepted meanings,
> and communication breaks down if you have a free hand to use
> invented meanings.

Just the opposite. Communication breaks down if you tie my hands to
express new ideas in their native terms. Should discussions about
early automotive horsepower have been limited to literal horses?

>
> > > > It has no mystical
> > > > charge. It is not what is impossible by the logic of the MWI universe,
> > > > only what is impossible by the programmed logic of the UM-Sub
> > > > Universes. Your argument is based on confusing the levels. If I force
> > > > you to stay within the logic of comp, you have no argument.
>
> > > Apart from ...my argument. As given.
>
> > Your argument now seems to be a word definition argument.
>
> You say that like it's a bad thing.

Not a bad thing, just not my thing. I don't do word definitions. I
don't believe in them.

>
> > > > > > That why I said it
> > > > > > from the start. Computational simulations can define anything as 
> > > > > > being
> > > > > > natural or supernatural.
>
> > > > > And they may or may not be right. Opinion does not
> > > > > trump truth.
>
> > > > The opinion of the programmer *is* truth to the programmed.
>
> > > It still isn't truth. As soon as you add a "to" or "for" clause,
> > > you are actually talking about opinion, even if you are using the
> > > *word* truth.
>
> > If I score a point in a game is that the truth that I scored a point?
> > Is anything in a game 'true' in your definition?
>
> Yes It is true that a game is being pla

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg 

> On Feb 23, 9:26 am, Quentin Anciaux  wrote:
> >
> > > I understand that is how you think of it, but I am pointing out your
> > > unconscious bias. You take consciousness for granted from the start.
> >
> > Because it is... I don't know/care for you, but I'm conscious... the
> > existence of consciousness from my own POV, is not a discussion.
>
> The whole thought experiment has to do specifically with testing the
> existence of consciousness and POV. If we were being honest about the
> scenario, we would rely only on known comp truths to arrive at the
> answer. It's cheating to smuggle in human introspection in a test of
> the nature of human introspection. Let us think only in terms of
> 'true, doctor'. If comp is valid, there should be no difference
> between 'true' and 'yes'.
>
> >
> > > It may seem innocent, but in this case what it does is preclude the
> > > subjective thesis from being considered fundamental. It's a straw man
> >
> > Read what is a straw man... a straw man is taking the opponent's argument
> > and deforming it to mean other things which are easy to disprove.
> >
> > http://en.wikipedia.org/wiki/Straw_man
>
> "a superficially similar yet unequivalent proposition (the "straw
> man")"
>
> I think that yes doctor makes a straw man of the non-comp position. It
> argues that we have to choose whether or not we believe in comp, when
> the non-comp position might be that with comp, we cannot choose to
> believe in anything in the first place.
>
> > > of the possibility of unconsciousness.
> >
> > > > >> If you've said yes, then this
> > > > >> of course entails that you believe that 'free choice' and
> 'personal
> > > > >> value' (or the subjective experience of them) can be products of a
> > > > >> computer program, so there's no contradiction.
> >
> > > > > Right, so why ask the question? Why not just ask 'do you believe a
> > > > > computer program can be happy'?
> >
> > > > A machine could think (Strong AI thesis) does not entail comp (that
> we
> > > > are machine).
> >
> > > I understand that, but we are talking about comp. The thought
> > > experiment focuses on the brain replacement, but the argument is
> > > already lost in the initial conditions which presuppose the ability to
> > > care or tell the difference and have free will to choose.
> >
> > But I have that ability and don't care to discuss it further. I'm
> > conscious, I'm sorry you're not.
>
> But you aren't in the thought experiment.
>
> > > It's subtle,
> > > but so is the question of consciousness. Nothing whatsoever can be
> > > left unchallenged, including the capacity to leave something
> > > unchallenged.
> >
> > > > The fact that a computer program can be happy does not logically
> > > > entail that we are ourselves computer programs. Maybe angels and Gods
> > > > (non-machine) can be happy too. To sum up:
> >
> > > > COMP implies STRONG-AI
> >
> > > > but
> >
> > > > STRONG-AI does not imply COMP.
> >
> > > I understand, but Yes Doctor considers whether STRONG-AI is likely to
> > > be functionally identical and fully interchangeable with human
> > > consciousness. It may not say that we are machine, but it says that
> > > machines can be us
> >
> > It says machines could be conscious as we are without us being machine.
> >
> > ==> strong ai.
>
> That's what I said. That makes machines more flexible than organically
> conscious beings. They can be machines or like us, but we can't fully
> be machines so we are less than machines.
>
> >
> > Comp says that we are machine, this entails strong-ai, because if we are
> > machine, as we are conscious, then of course machine can be conscious...
> > But if you knew machine could be conscious, that doesn't mean the humans
> > would be machines... we could be more than that.
>
> More than that in what way? Different maybe, but Strong AI by
> definition makes machines more than us, because we cannot compete with
> machines at being mechanical but they can compete as equals with us in
> every other way.
>
> >
> > > - which is really even stronger, since we can only
> > > be ourselves but machines apparently can be anything.
> >
> > No read upper.
>
> No read upper.
>
> > > > > When it is posed as a logical
> > > > > consequence instead of a decision, it implicitly privileges the
> > > > > passive voice. We are invited to believe that we have chosen to
> agree
> > > > > to comp because there is a logical argument for it rather than an
> > > > > arbitrary preference committed to in advance. It is persuasion by
> > > > > rhetoric, not by science.
> >
> > > > Nobody tries to advocate comp. We assume it. So if we get a
> > > > contradiction we can abandon it. But we find only weirdness, even
> > > > testable weirdness.
> >
> > > I understand the reason for that though. Comp itself is the rabbit
> > > hole of empiricism. Once you allow it 

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg 

> On Feb 23, 9:26 am, Quentin Anciaux  wrote:
> >
> > > I understand that is how you think of it, but I am pointing out your
> > > unconscious bias. You take consciousness for granted from the start.
> >
> > Because it is... I don't know/care for you, but I'm conscious... the
> > existence of consciousness from my own POV, is not a discussion.
>
> The whole thought experiment has to do specifically with testing the
> existence of consciousness and POV. If we were being honest about the
> scenario, we would rely only on known comp truths to arrive at the
> answer. It's cheating to smuggle in human introspection in a test of
> the nature of human introspection. Let us think only in terms of
> 'true, doctor'. If comp is valid, there should be no difference
> between 'true' and 'yes'.
>
> >
> > > It may seem innocent, but in this case what it does is preclude the
> > > subjective thesis from being considered fundamental. It's a straw man
> >
> > Read what is a straw man... a straw man is taking the opponent's argument
> > and deforming it to mean other things which are easy to disprove.
> >
> > http://en.wikipedia.org/wiki/Straw_man
>
> "a superficially similar yet unequivalent proposition (the "straw
> man")"
>
> I think that yes doctor makes a straw man of the non-comp position. It
> argues that we have to choose whether or not we believe in comp, when
> the non-comp position might be that with comp, we cannot choose to
> believe in anything in the first place.
>
> > > of the possibility of unconsciousness.
> >
> > > > >> If you've said yes, then this
> > > > >> of course entails that you believe that 'free choice' and
> 'personal
> > > > >> value' (or the subjective experience of them) can be products of a
> > > > >> computer program, so there's no contradiction.
> >
> > > > > Right, so why ask the question? Why not just ask 'do you believe a
> > > > > computer program can be happy'?
> >
> > > > A machine could think (Strong AI thesis) does not entail comp (that
> we
> > > > are machine).
> >
> > > I understand that, but we are talking about comp. The thought
> > > experiment focuses on the brain replacement, but the argument is
> > > already lost in the initial conditions which presuppose the ability to
> > > care or tell the difference and have free will to choose.
> >
> > But I have that ability and don't care to discuss it further. I'm
> > conscious, I'm sorry you're not.
>
> But you aren't in the thought experiment.
>
> > > It's subtle,
> > > but so is the question of consciousness. Nothing whatsoever can be
> > > left unchallenged, including the capacity to leave something
> > > unchallenged.
> >
> > > > The fact that a computer program can be happy does not logically
> > > > entail that we are ourselves computer programs. Maybe angels and Gods
> > > > (non-machine) can be happy too. To sum up:
> >
> > > > COMP implies STRONG-AI
> >
> > > > but
> >
> > > > STRONG-AI does not imply COMP.
> >
> > > I understand, but Yes Doctor considers whether STRONG-AI is likely to
> > > be functionally identical and fully interchangeable with human
> > > consciousness. It may not say that we are machine, but it says that
> > > machines can be us
> >
> > It says machines could be conscious as we are without us being machine.
> >
> > ==> strong ai.
>
> That's what I said. That makes machines more flexible than organically
> conscious beings. They can be machines or like us, but we can't fully
> be machines so we are less than machines.
>
> Either we are machines or we are not... If machines can be conscious and
we're not machines then we are *more* than machines... not less.


>  >
> > Comp says that we are machine, this entails strong-ai, because if we are
> > machine, as we are conscious, then of course machine can be conscious...
> > But if you knew machine could be conscious, that doesn't mean the humans
> > would be machines... we could be more than that.
>
> More than that in what way?


We must contain infinite components if we are not machines emulable. So we
are *more* than machines if machines can be conscious and we're not
machines.


> Different maybe, but Strong AI by
> definition makes machines more than us, because we cannot compete with
> machines at being mechanical but they can compete as equals with us in
> every other way.
>
> >
> > > - which is really even stronger, since we can only
> > > be ourselves but machines apparently can be anything.
> >
> > No read upper.
>
> No read upper.
>
> > > > > When it is posed as a logical
> > > > > consequence instead of a decision, it implicitly privileges the
> > > > > passive voice. We are invited to believe that we have chosen to
> agree
> > > > > to comp because there is a logical argument for it rather than an
> > > > > arbitrary preference committed to in advance. It is persuasion by
> > > > > rhetoric, not by sc

Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 9:26 am, Quentin Anciaux  wrote:
>
> > I understand that is how you think of it, but I am pointing out your
> > unconscious bias. You take consciousness for granted from the start.
>
> Because it is... I don't know/care for you, but I'm conscious... the
> existence of consciousness from my own POV, is not a discussion.

The whole thought experiment has to do specifically with testing the
existence of consciousness and POV. If we were being honest about the
scenario, we would rely only on known comp truths to arrive at the
answer. It's cheating to smuggle in human introspection in a test of
the nature of human introspection. Let us think only in terms of
'true, doctor'. If comp is valid, there should be no difference
between 'true' and 'yes'.

>
> > It may seem innocent, but in this case what it does is preclude the
> > subjective thesis from being considered fundamental. It's a straw man
>
> Read what is a straw man... a straw man is taking the opponent's argument
> and deforming it to mean other things which are easy to disprove.
>
> http://en.wikipedia.org/wiki/Straw_man

"a superficially similar yet unequivalent proposition (the "straw
man")"

I think that yes doctor makes a straw man of the non-comp position. It
argues that we have to choose whether or not we believe in comp, when
the non-comp position might be that with comp, we cannot choose to
believe in anything in the first place.

>
> > of the possibility of unconsciousness.
>
> > > >> If you've said yes, then this
> > > >> of course entails that you believe that 'free choice' and 'personal
> > > >> value' (or the subjective experience of them) can be products of a
> > > >> computer program, so there's no contradiction.
>
> > > > Right, so why ask the question? Why not just ask 'do you believe a
> > > > computer program can be happy'?
>
> > > A machine could think (Strong AI thesis) does not entail comp (that we
> > > are machine).
>
> > I understand that, but we are talking about comp. The thought
> > experiment focuses on the brain replacement, but the argument is
> > already lost in the initial conditions which presuppose the ability to
> > care or tell the difference and have free will to choose.
>
> But I have that ability and don't care to discuss it further. I'm
> conscious, I'm sorry you're not.

But you aren't in the thought experiment.

>
> > It's subtle,
> > but so is the question of consciousness. Nothing whatsoever can be
> > left unchallenged, including the capacity to leave something
> > unchallenged.
>
> > > The fact that a computer program can be happy does not logically
> > > entail that we are ourselves computer programs. Maybe angels and Gods
> > > (non-machine) can be happy too. To sum up:
>
> > > COMP implies STRONG-AI
>
> > > but
>
> > > STRONG-AI does not imply COMP.
>
> > I understand, but Yes Doctor considers whether STRONG-AI is likely to
> > be functionally identical and fully interchangeable with human
> > consciousness. It may not say that we are machine, but it says that
> > machines can be us
>
> It says machines could be conscious as we are without us being machine.
>
> ==> strong ai.

That's what I said. That makes machines more flexible than organically
conscious beings. They can be machines or like us, but we can't fully
be machines so we are less than machines.

>
> Comp says that we are machine, this entails strong-ai, because if we are
> machine, as we are conscious, then of course machine can be conscious...
> But if you knew machine could be conscious, that doesn't mean the humans
> would be machines... we could be more than that.

More than that in what way? Different maybe, but Strong AI by
definition makes machines more than us, because we cannot compete with
machines at being mechanical but they can compete as equals with us in
every other way.

>
> > - which is really even stronger, since we can only
> > be ourselves but machines apparently can be anything.
>
> No read upper.

No read upper.

>
> > > > When it is posed as a logical
> > > > consequence instead of a decision, it implicitly privileges the
> > > > passive voice. We are invited to believe that we have chosen to agree
> > > > to comp because there is a logical argument for it rather than an
> > > > arbitrary preference committed to in advance. It is persuasion by
> > > > rhetoric, not by science.
>
> > > Nobody tries to advocate comp. We assume it. So if we get a
> > > contradiction we can abandon it. But we find only weirdness, even
> > > testable weirdness.
>
> > I understand the reason for that though. Comp itself is the rabbit
> > hole of empiricism. Once you allow it the initial assumption, it can
> > only support itself.
>
> Then you could never show a contradiction for any hypothesis that you
> consider true... and that's simply false, hence you cannot be correct.

You are doing exactly what I just said. You assume initially that all
truths are bound by Aristotelian logic

Re: The free will function

2012-02-23 Thread 1Z


On Feb 21, 10:41 pm, Craig Weinberg  wrote:
> On Feb 21, 5:41 am, 1Z  wrote:

> > > You are conflating the levels (as Bruno always tells me). The
> > > simulation has no access to extra-simulatory information, it is a
> > > complete sub-universe. Its logic is the whole truth which the
> > > inhabitants can only believe in or disbelieve to the extent which the
> > > simulation allows them that capacity. If the programmer wants all of
> > > his avatars to believe with all their hearts that there is a cosmic
> > > muffin controlling their universe, she has only to set the cosmic
> > > muffin belief subroutine = true for all her subjects.
>
> > Read again. I didn't say no sim could have such-and-such
> > an opinion, I said it would not be true.
>
> Your standard of truth appears to exclude any simulated content.

No, my definition of truth just doesn't change to something
else when considering simulated contexts.

> > > Opinions can be right or wrong but the reality is that a programmer
> > > has omnipotent power over the conditions within the program. She may
> > > be a programmer, but she can make her simulation subjects think or
> > > experience whatever she wants them to. She may think of herself as
> > > their goddess, but she can appear to them as anything or nothing. Her
> > > power over them remains true and factually real.
>
> > Same problem.
>
> Same linguistic literalism.

You say that like it's a bad thing.


> >, it may
> > make false but plausible beliefs in gods likely, but
> > it cannot make supernatural gods inevitable because
> > all the ingredients in it are natural or artificial.
>
> That has almost nothing to do with my argument. You are off in
> dictionary land. The fact remains that comp, rather than disallowing
> gods, makes it impossible to know if a Matrix Lord/Administrator has
> control over aspects of your life.

That is a fact, when expressed properly.

> > > > No, that is not at all an equivalent claim. There may
> > > > be no extension of "magnetic monopole", but it is a meaningful
> > > > concept.
>
> > > Supernatural can be meaningful if you want it to be, but in comp all
> > > it means is meta-programmatic or meta-simulation.
>
> > Says who? I don't have to accept that the meaning of "supernatural"
> > has to change to ensure that there are N>0 supernatural entities. I can
> > stick to the traditional meaning, and regard it as unpopulated and
> > extensionless.
>
> What traditional meaning does 'supernatural' have in Comp?

Why assume it has a non-traditional one?

>Why do I
> have to accept your linguistic preferences but you deny me the same
> right?

Because we can communicate if we stick to accepted meanings,
and communication breaks down if you have a free hand to use
invented meanings.

> > > It has no mystical
> > > charge. It is not what is impossible by the logic of the MWI universe,
> > > only what is impossible by the programmed logic of the UM-Sub
> > > Universes. Your argument is based on confusing the levels. If I force
> > > you to stay within the logic of comp, you have no argument.
>
> > Apart from ...my argument. As given.
>
> Your argument now seems to be a word definition argument.

You say that like it's a bad thing.


> > > > > That why I said it
> > > > > from the start. Computational simulations can define anything as being
> > > > > natural or supernatural.
>
> > > > And they may or may not be right. Opinion does not
> > > > trump truth.
>
> > > The opinion of the programmer *is* truth to the programmed.
>
> > It still isn't truth. As soon as you add a "to" or "for" clause,
> > you are actually talking about opinion, even if you are using the
> > *word* truth.
>
> If I score a point in a game is that the truth that I scored a point?
> Is anything in a game 'true' in your definition?

Yes, it is true that a game is being played, not just
true-for-the-players.
Likewise, the simulation hypothesis requires simulation
to be actually true and not just true-for.
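The distinction 1Z is drawing, between what is true-for every inhabitant of a simulation and what is true at the host level, can be restated as a few lines of toy code. Everything here is invented for illustration (the `Avatar` class and the "cosmic muffin" flag simply echo Craig's earlier example); it is a sketch, not anyone's actual argument formalized.

```python
# Toy sketch: a 'programmer' forces a universal belief inside a simulation.
class Avatar:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # what is "true-for" this inhabitant

world = [Avatar("alice"), Avatar("bob")]

# One line of programmer omnipotence: every inhabitant now believes.
for avatar in world:
    avatar.beliefs["cosmic_muffin_exists"] = True

# True-for every inhabitant of the simulation:
print(all(a.beliefs["cosmic_muffin_exists"] for a in world))  # True
# But at the host level no muffin object exists anywhere:
print(any("muffin" in vars(a) for a in world))                # False
```

The two print statements mark the two levels: the first queries belief states inside the simulation, the second inspects the host's actual objects, and they can disagree without contradiction.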

> > > That's
> > > what makes them God.
>
> > Being supernatural makes an entity god. And not just
> > supernatural "to" or "for" someone.
>
> You are aware that there are many definitions for the word god.

You are aware they broadly support what I am saying, e.g.
"God is most often conceived of as the supernatural creator and
overseer of the universe." --WP

> It
> seems like you have one particular one in mind which reads - whatever
> is the opposite of what Craig says it is.

No.


> > > Huh? You could run it on vacuum tubes if you want. Or a stadium full
> > > of people holding up colored cards.
>
> > The matter doesn't matter. What matters is that there is always
> > some matter. I have never seen a simulation run on arithmetic.
>
> I absolutely agree. I'm talking about how comp sees it.

Bruno's comp.

> This is what
> comp is - functionalism.

Functionalism isn't usually immaterialistic.

>A universe run on formula rather than stuff.
> I disagree with comp. I see stuff and formula as one ha

Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 8:53 am, Quentin Anciaux  wrote:
> 2012/2/23 Craig Weinberg 
>
> > On Feb 23, 1:09 am, Stathis Papaioannou  wrote:
>
> > > The "yes doctor" scenario considers the belief that if you are issued
> > > with a computerised brain you will feel just the same. It's equivalent
> > > to the "yes barber" scenario: that if you receive a haircut you will
> > > feel just the same, and not become a zombie or otherwise radically
> > > different being.
>
> > That is one reason why it's a loaded question.
>
> It's not a question... but a starting hypothesis...

"Do you say yes to the doctor?" is a question. It could be a
hypothesis too if you want. I don't see why the difference is
relevant.

> Consider something true
> and then either show a contradiction or not (if you find a contradiction
> starting from the assumption that the hypothesis is true... then you've
> disproved the hypothesis).
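For readers unfamiliar with the method Quentin is describing, the shape of a proof by contradiction is standard: assume the hypothesis, derive an impossibility, reject the hypothesis. A classic textbook instance (added here for concreteness, not from the thread):

```latex
% Assume the hypothesis, derive a contradiction, reject the hypothesis.
\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof.} Assume the contrary: $\sqrt{2} = p/q$ with $p, q$
integers sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even,
hence $p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$,
so $q$ is also even. But then $p$ and $q$ share the factor $2$,
contradicting the assumption. Hence $\sqrt{2}$ is irrational. $\square$
```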

The question relates specifically to consciousness. Empirical logic is
a subordinate category of consciousness. We cannot treat the subject
of consciousness as if it were subordinate to logic without cognitive
bias that privileges reductionism.

>
> But as usual you cannot grasp basic logic. You have to get past the hypothesis
> to discuss it and eventually find or not a contradiction... stopping at the
> hypothesis will leave you stuck... you can discuss it for an infinite time,
> it won't help.

I can almost make sense of what you are trying to write there. As near
as I can tell, it's some kind of ad hominem foaming at the mouth about
what assumptions I'm allowed to challenge.

>
> http://en.wikipedia.org/wiki/Mathematical_proof#Proof_by_contradiction

http://en.wikipedia.org/wiki/Bandwagon_effect

Craig




Re: Support for Panexperientialism

2012-02-23 Thread Craig Weinberg
On Feb 23, 9:34 am, 1Z  wrote:
>
> >     Well, first of all, where is the “disconnect” and what is it made
> > of? Specifically, the disconnect that must occur if some parts of
> > reality are “conscious” while others aren’t.
>
> Disconnects exist. Some things are magnetic and others not. And many
> other examples. What's special about consc? That we don't understand
> where or what the disconnect is?

It doesn't make consc special, but he is saying it makes it universal.
Magnetism is a good example. In our naive perception it seems to us
that some things are magnetic and others not, but we know that
actually all atoms have electromagnetic properties. He is asking what
thing could make consc special and how we can assume that it isn't
universal in some sense if we can't point to what that might be.

> But it isn't at all obvious that
> "we don't understand consc" should imply panexperientialism rather
> than dualism or physicalism or a dozen other options. Almost
> all of the philosophy of mind starts with "we don't understand consc".

That's not what he is saying. His point is that what we do understand
about physics makes it obvious that consc cannot be understood as some
special case that is disconnected from the rest of the universe.

>
> >And don’t get me started
> > on the nonsense superstition of “emergent properties” — show me one
> > “emergent property” that is independent of the conscious observer
> > coming to the conclusion it is emergent.
>
> The problem with emergence is that it is defined in so many ways.
> For some values of "emergent", emergent properties are
> trivially demonstrable.

Demonstrable = compels the conclusion that it is emergent to a
conscious observer.

Craig




Re: COMP theology

2012-02-23 Thread John Clark
On Tue, Feb 21, 2012 at 2:52 PM, Bruno Marchal  wrote:

> Comp makes physics a branch of arithmetic.
>

How in the world can you test if that is right? Even if it's true, there is
no reason to think that arithmetic could only generate one type (our type)
of physical reality; there could be a huge number, perhaps even an infinite
number, of universes, all with self-consistent but very different physical
laws. And even if there is only one set of laws of physics that a universe
can have (very unlikely in my opinion), you still wouldn't be out of the
woods, because that's only half the problem: there is also the matter of
initial conditions. Look at John Conway's cellular automaton, the "Game of
Life": the rules (equivalent to the laws of physics) by which a Life
universe evolves are deterministic and very simple, but what any given
Life universe will change into can look radically different; it all depends
on what the initial conditions are.

Incidentally, although very simple, the rules of Conway's cellular automaton
are sufficient to support arbitrary complexity; a few years ago it was
shown that you can build an operational Universal Turing Machine in a Life
universe if you have the right initial conditions.
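The Life rules described above are simple enough to state in a few lines. Here is a minimal sketch (the function name `life_step` and the set-of-live-coordinates representation are my own choices, not anything from the thread); the same fixed rules yield very different histories depending only on the starting pattern:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life.
    `cells` is a set of live (x, y) coordinates.  The rules (the "laws of
    physics") are fixed for every universe; only initial conditions vary."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation if it has exactly 3 live neighbours,
    # or is currently alive with exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# Same laws, different initial conditions, different histories:
blinker = {(0, 1), (1, 1), (2, 1)}           # oscillates with period 2
block   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # never changes at all
```

Running `life_step` repeatedly on the two patterns shows the point: the blinker flips between a horizontal and a vertical bar forever, while the block is frozen, even though both evolve under identical rules.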

> I provide a constructive proof, (accepting the most admitted classical
> theory of knowledge). [...] This makes comp refutable. Just compare the
> comp-physics and the inferred physics.
>

Just? OK then let's get specific, what law of physics does this thing you
call "comp" uniquely predict and what experiment can we perform to see if
that unique thing really exists?

> for example, we might also conclude that we are in a relative simulation,
> if the difference between comp-physics and empiric-physics belongs to a
> certain type.
>

In a simulation the laws of physics don't even need to be consistent; all
large video games are inconsistent in that they contain errors somewhere,
and yet they still work most of the time. Perhaps the singularity at the
center of a Black Hole is such an inconsistency in the laws of physics;
perhaps it's where the God-programmer screwed up and tried to divide by
zero. But such errors don't bother us much because we're a long way away
from one.

> Here I really do not understand what you say. Why would the falsity of
> comp prevent us from functioning? I know some people who disbelieve in comp;
> they do function.
>

First of all, "comp" is a non-standard term that is used only on this list,
so its meaning is not exactly nailed down and is a little vague. That said,
I know that people may say they don't believe in comp when they are
discussing philosophy with you, but when they go to the funeral of a friend
and see that the cadaver is not behaving intelligently they firmly believe
that the cadaver is not conscious. Those same people believe with all their
heart that their consciousness can change the state of a concrete physical
object, for example when they decide to pick up a hammer; thus after they
have that thought they are not astonished to find that the hammer actually
moved. Those same people also believe that a concrete physical system can
profoundly affect something as abstract as consciousness; thus when that
hammer makes contact with one of their fingers they are not surprised that
they consciously feel pain. They are also not surprised that when they take
a powerful drug their consciousness changes, nor are they surprised that
when their consciousness changes their behavior, their interaction with the
outside physical world, also changes. And then the next day when the
conversation turns to philosophy they will tell you again how much they
don't believe in "comp". Oh well, I never said human beings or their ideas
were consistent.

> rationalism might come back to the original quite different conception
> conceived by the Platonists and the neo-platonists, instead of the primary
> matter dogma common to Christians and Atheists
>

I don't know what you mean by the "primary matter dogma" but apparently it's
something both Christians and I believe in. I do know that matter by itself
is rather dull, the interesting thing is the way matter is organized, in
other words information. I know two other things, information is as close
as you can get to the traditional concept of the soul and still remain
within the scientific method, and information is the only thing I or a
Christian or anybody else can understand.

> You might not let your functioning be so dependent on your beliefs. It is
> not good for the health.
>

I doubt if you could function one bit better than I could if you really
believed you were the only conscious being in the universe.

> with comp living bodies are not conscious
>

Then I don't have a clue what this bizarre non-standard term "comp" means
and I have serious doubts that anybody else does either.

> They just make it possible for a person to manifest her consciousness relatively
> to some collection of universal numbers (the neighbors person, and
> universal ent

Re: Support for Panexperientialism

2012-02-23 Thread 1Z


On Feb 22, 1:10 pm, Craig Weinberg  wrote:
> Could a rock have consciousness? Good answer from someone on Quora:
> http://www.quora.com/Could-a-rock-have-consciousness
>
> "    Yes, obviously.
>
>     Why obviously?
>
>     Well, first of all, where is the “disconnect” and what is it made
> of? Specifically, the disconnect that must occur if some parts of
> reality are “conscious” while others aren’t.

Disconnects exist. Some things are magnetic and others not. And many
other examples. What's special about consc? That we don't understand
where or what the disconnect is? But it isn't at all obvious that
"we don't understand consc" should imply panexperientialism rather
than dualism or physicalism or a dozen other options. Almost
all of the philosophy of mind starts with "we don't understand consc"

>And don’t get me started
> on the nonsense superstition of “emergent properties” — show me one
> “emergent property” that is independent of the conscious observer
> coming to the conclusion it is emergent.


The problem with emergence is that it is defined in so many ways.
For some values of "emergent", emergent properties are
trivially demonstrable.




Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg 

> On Feb 23, 4:32 am, Bruno Marchal  wrote:
> > On 23 Feb 2012, at 06:42, Craig Weinberg wrote:
> >
> > > On Feb 22, 6:10 pm, Pierz  wrote:
> > >> 'Yes doctor' is merely an establishment of the assumption of comp.
> > >> Saying yes means you are a computationalist. If you say no then you
> > >> are
> > >> not one, and one cannot proceed with the argument that follows -
> > >> though then the onus will be on you to explain *why* you don't
> > >> believe
> > >> a computer can substitute for a brain.
> >
> > > That's what is circular. The question cheats by using the notion of a
> > > bet to put the onus on us to take comp for granted in the first place
> > > when there is no reason to presume that bets can exist in a universe
> > > where comp is true. It's a loaded question, but in a sneaky way. It is
> > > to say 'if you don't think the computer is happy, that's fine, but you
> > > have to explain why'.
> >
> > It is circular only if we said that "saying yes" was an argument for
> > comp, which nobody claims.
>
> I'm not saying comp is claimed explicitly. My point is that the
> structure of the thought experiment implicitly assumes comp from the
> start. It seats you at the Blackjack table with money and then asks if
> you want to play.
>
> > I agree with Stathis and Pierz comment.
> >
> > You do seem to have some difficulties in the understanding of what is
> > an assumption or a hypothesis.
>
> From my perspective it seems that others have difficulties
> understanding when I am seeing through their assumptions.
>
> >
> > We defend comp against invalid refutation; this does not mean that
> > we conclude that comp is true. It is our working hypothesis.
>
> I understand that is how you think of it, but I am pointing out your
> unconscious bias. You take consciousness for granted from the start.
>

Because it is... I don't know about you, but I'm conscious... the
existence of consciousness, from my own POV, is not up for discussion.


> It may seem innocent, but in this case what it does is preclude the
> subjective thesis from being considered fundamental. It's a straw man
>

Read up on what a straw man is... a straw man takes the opponent's argument and
deforms it to mean something else that is easy to disprove.

http://en.wikipedia.org/wiki/Straw_man


> of the possibility of unconsciousness.
>
> >
> >
> >
> > >> If you've said yes, then this
> > >> of course entails that you believe that 'free choice' and 'personal
> > >> value' (or the subjective experience of them) can be products of a
> > >> computer program, so there's no contradiction.
> >
> > > Right, so why ask the question? Why not just ask 'do you believe a
> > > computer program can be happy'?
> >
> > That a machine could think (the Strong AI thesis) does not entail comp
> > (that we are machines).
>
> I understand that, but we are talking about comp. The thought
> experiment focuses on the brain replacement, but the argument is
> already lost in the initial conditions which presuppose the ability to
> care or tell the difference and have free will to choose.


But I have that ability and don't care to discuss it further. I'm
conscious, I'm sorry you're not.


> It's subtle,
> but so is the question of consciousness. Nothing whatsoever can be
> left unchallenged, including the capacity to leave something
> unchallenged.
>
> > The fact that a computer program can be happy does not logically
> > entail that we are ourselves computer programs. Maybe angels and Gods
> > (non-machines) can be happy too. To sum up:
> >
> > COMP implies STRONG-AI
> >
> > but
> >
> > STRONG-AI does not imply COMP.
>
> I understand, but Yes Doctor considers whether STRONG-AI is likely to
> be functionally identical and fully interchangeable with human
> consciousness. It may not say that we are machine, but it says that
> machines can be us


It says machines could be conscious as we are, without us being machines.

==> strong AI.

Comp says that we are machines; this entails strong AI, because if we are
machines and we are conscious, then of course machines can be conscious...
But even if you knew machines could be conscious, that wouldn't mean humans
are machines... we could be more than that.


> - which is really even stronger, since we can only
> be ourselves but machines apparently can be anything.
>

No, read above.


>
> >
> > > When it is posed as a logical
> > > consequence instead of a decision, it implicitly privileges the
> > > passive voice. We are invited to believe that we have chosen to agree
> > > to comp because there is a logical argument for it rather than an
> > > arbitrary preference committed to in advance. It is persuasion by
> > > rhetoric, not by science.
> >
> > Nobody tries to advocate comp. We assume it. So if we get a
> > contradiction we can abandon it. But we find only weirdness, even
> > testable weirdness.
>
> I understand the reason for that though. Comp itself is the rabbit
> hole of empiricism. Once you allow it the initia

Re: The free will function

2012-02-23 Thread marty684






From: Bruno Marchal 
To: everything-list@googlegroups.com
Sent: Thu, February 23, 2012 4:48:10 AM
Subject: Re: The free will function



On 22 Feb 2012, at 18:17, marty684 wrote:

Bruno,
> If everything is made of numbers (as in COMP) 

Nothing is "made of". Everything appears in the mind of Universal numbers 
relatively to universal numbers, with hopefully reasonable relative statistics.

Think about a dream. If you dream that you drink coffee, you can understand
that such a "coffee" is not made of anything. The experience of coffee is due
to some computation in your brain. With the big picture apparently implied by
comp, even the brain is like that dreamed coffee: it is not made of anything.
It is only locally made of things due to the infinitely many computations
generating your actual state.

The "matrix" metaphor, or the Galouye "simulacron" metaphor, is not so bad.
And we don't need more than the numbers + addition and multiplication to get an
initial dreaming immaterial machinery.

Thanks for this vivid clarification. But...


Read UDA. You might understand that if we are machines (numbers relative to
other numbers), then we cannot know which machine we are, nor which
computations support us, among an infinity of them. Everything observable
becomes probabilistic. The probability bears on the infinitely many
computations going through your actual state (that's why they are relative).

Why should probability depend on us, on what we 'know or cannot know'? On what
is 'observable' to us? It seems to me that you are defining probability by that
which is relative to our 'actual states'. Why can't we inhabit a seemingly
probabilistic part of an infinite, determined universe?







   (If you've been over this before, please refer me to the relevant posts, 
thanks.)  marty a.
Read UDA, and ask questions for each step, in case of problem, so we might
single out the precise point where you don't succeed in grasping why comp puts
probabilities, or credibilities, uncertainties, in front of everything. UDA1-7
is enough to get this. UDA-8 is needed only for the more subtle immateriality
point implied by computationalism.

My attempts to read UDA were never successful. Sorry. 

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 4:32 am, Bruno Marchal  wrote:
> On 23 Feb 2012, at 06:42, Craig Weinberg wrote:
>
> > On Feb 22, 6:10 pm, Pierz  wrote:
> >> 'Yes doctor' is merely an establishment of the assumption of comp.
> >> Saying yes means you are a computationalist. If you say no then you
> >> are
> >> not one, and one cannot proceed with the argument that follows -
> >> though then the onus will be on you to explain *why* you don't
> >> believe
> >> a computer can substitute for a brain.
>
> > That's what is circular. The question cheats by using the notion of a
> > bet to put the onus on us to take comp for granted in the first place
> > when there is no reason to presume that bets can exist in a universe
> > where comp is true. It's a loaded question, but in a sneaky way. It is
> > to say 'if you don't think the computer is happy, that's fine, but you
> > have to explain why'.
>
> It is circular only if we said that "saying yes" was an argument for
> comp, which nobody claims.

I'm not saying comp is claimed explicitly. My point is that the
structure of the thought experiment implicitly assumes comp from the
start. It seats you at the Blackjack table with money and then asks if
you want to play.

> I agree with Stathis and Pierz comment.
>
> You do seem to have some difficulties in the understanding of what is
> an assumption or a hypothesis.

From my perspective it seems that others have difficulties
understanding when I am seeing through their assumptions.

>
> We defend comp against invalid refutation; this does not mean that
> we conclude that comp is true. It is our working hypothesis.

I understand that is how you think of it, but I am pointing out your
unconscious bias. You take consciousness for granted from the start.
It may seem innocent, but in this case what it does is preclude the
subjective thesis from being considered fundamental. It's a straw man
of the possibility of unconsciousness.

>
>
>
> >> If you've said yes, then this
> >> of course entails that you believe that 'free choice' and 'personal
> >> value' (or the subjective experience of them) can be products of a
> >> computer program, so there's no contradiction.
>
> > Right, so why ask the question? Why not just ask 'do you believe a
> > computer program can be happy'?
>
> That a machine could think (the Strong AI thesis) does not entail comp
> (that we are machines).

I understand that, but we are talking about comp. The thought
experiment focuses on the brain replacement, but the argument is
already lost in the initial conditions which presuppose the ability to
care or tell the difference and have free will to choose. It's subtle,
but so is the question of consciousness. Nothing whatsoever can be
left unchallenged, including the capacity to leave something
unchallenged.

> The fact that a computer program can be happy does not logically
> entail that we are ourselves computer programs. Maybe angels and Gods
> (non-machines) can be happy too. To sum up:
>
> COMP implies STRONG-AI
>
> but
>
> STRONG-AI does not imply COMP.

I understand, but Yes Doctor considers whether STRONG-AI is likely to
be functionally identical and fully interchangeable with human
consciousness. It may not say that we are machine, but it says that
machines can be us - which is really even stronger, since we can only
be ourselves but machines apparently can be anything.

>
> > When it is posed as a logical
> > consequence instead of a decision, it implicitly privileges the
> > passive voice. We are invited to believe that we have chosen to agree
> > to comp because there is a logical argument for it rather than an
> > arbitrary preference committed to in advance. It is persuasion by
> > rhetoric, not by science.
>
> Nobody tries to advocate comp. We assume it. So if we get a
> contradiction we can abandon it. But we find only weirdness, even
> testable weirdness.

I understand the reason for that though. Comp itself is the rabbit
hole of empiricism. Once you grant it the initial assumption, it can
only support itself. Comp has no ability to contradict itself, but the
universe does.

>
>
>
> >> In fact the circularity
> >> is in your reasoning. You are merely reasserting your assumption that
> >> choice and personal value must be non-comp,
>
> > No, the scenario asserts that by relying on the device of choice and
> > personal value as the engine of the thought experiment. My objection
> > is not based on any prejudice against comp I may have, it is based on
> > the prejudice of the way the question is posed.
>
> The question is used to give a quasi-operational definition of
> computationalism, by its acceptance of a digital brain transplant.
> This makes it possible to reason without solving the hard task of defining
> consciousness or thinking. This belongs to the axiomatic method
> usually favored by mathematicians.

I know. What I'm saying is that the axiomatic method precludes any
useful examination of consciousness axiomatically. It's a screwdriver
instead of a hot meal.

>
>
>
> >

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg 

> On Feb 23, 1:09 am, Stathis Papaioannou  wrote:
> >
> > The "yes doctor" scenario considers the belief that if you are issued
> > with a computerised brain you will feel just the same. It's equivalent
> > to the "yes barber" scenario: that if you receive a haircut you will
> > feel just the same, and not become a zombie or otherwise radically
> > different being.
>
> That is one reason why it's a loaded question.


It's not a question... but a starting hypothesis... Consider something true
and then either show a contradiction or not (if you find a contradiction
starting from the assumption that the hypothesis is true... then you've
disproved the hypothesis).

But as usual you cannot grasp basic logic. You have to get past the hypothesis
to discuss it and eventually find or not find a contradiction... stopping at
the hypothesis will leave you stuck... you can discuss it for an infinite
time, it won't help.

http://en.wikipedia.org/wiki/Mathematical_proof#Proof_by_contradiction
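The pattern behind the linked proof-by-contradiction method can be sketched with a textbook example (the irrationality of √2; this example is mine, not from the thread): assume the hypothesis, derive a contradiction, and thereby refute the hypothesis.

```latex
% Assume the hypothesis: \sqrt{2} = p/q with p, q coprime integers.
% Then 2q^2 = p^2, so p^2 is even, hence p = 2k for some integer k.
% Substituting: 2q^2 = 4k^2, so q^2 = 2k^2, hence q is even too.
% Contradiction: p and q were assumed coprime, yet both are even.
% Therefore the hypothesis is false: \sqrt{2} is irrational.
\begin{align*}
  \sqrt{2} = \tfrac{p}{q}
  \;\Rightarrow\; 2q^{2} = p^{2}
  \;\Rightarrow\; 2 \mid p
  \;\Rightarrow\; p = 2k
  \;\Rightarrow\; q^{2} = 2k^{2}
  \;\Rightarrow\; 2 \mid q
  \;\Rightarrow\; \bot
\end{align*}
```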


> It equates having your
> brain surgically replaced with getting a haircut. It's the way that it
> does it that's fishy though. It's more equivalent to saying 'In a
> world where having haircuts is ordinary, are you afraid of having a
> haircut.'. or more accurately, 'In a world where arithmetic is true,
> are arithmetic truths your truths.'
>
> Craig
>


-- 
All those moments will be lost in time, like tears in rain.




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 1:09 am, Stathis Papaioannou  wrote:
>
> The "yes doctor" scenario considers the belief that if you are issued
> with a computerised brain you will feel just the same. It's equivalent
> to the "yes barber" scenario: that if you receive a haircut you will
> feel just the same, and not become a zombie or otherwise radically
> different being.

That is one reason why it's a loaded question. It equates having your
brain surgically replaced with getting a haircut. It's the way that it
does it that's fishy though. It's more equivalent to saying 'In a
world where having haircuts is ordinary, are you afraid of having a
haircut?', or more accurately, 'In a world where arithmetic is true,
are arithmetic truths your truths?'

Craig




Re: The free will function

2012-02-23 Thread Bruno Marchal


On 22 Feb 2012, at 18:17, marty684 wrote:


Bruno,

 If everything is made of numbers (as in COMP)



Nothing is "made of". Everything appears in the mind of Universal  
numbers relatively to universal numbers, with hopefully reasonable  
relative statistics.


Think about a dream. If you dream that you drink coffee, you can  
understand that such a "coffee" is not made of anything. The  
experience of coffee is due to some computation in your brain. With  
the big picture apparently implied by comp, even the brain is like  
that dreamed coffee: it is not made of anything. It is only locally  
made of things due to the infinitely many computations generating your  
actual state.


The "matrix" metaphor, or the Galouye "simulacron" metaphor, is not
so bad.
And we don't need more than the numbers + addition and multiplication  
to get an initial dreaming immaterial machinery.




which can express states to an arbitrary degree of precision, is  
there any room for chance or probability?


There is ONLY room for probability. The whole of physics is turned into a
probability calculus.






And if so, how do they arise?

Read UDA. You might understand that if we are machines (numbers
relative to other numbers), then we cannot know which machine we are,
nor which computations support us, among an infinity of them.
Everything observable becomes probabilistic. The probability bears on
the infinitely many computations going through your 'actual' state
(that's why they are relative).





   (If you've been over this before, please refer me to the relevant  
posts, thanks.)  marty a.


Read UDA, and ask questions for each step, in case of problem, so we
might single out the precise point where you don't succeed in grasping
why comp puts probabilities, or credibilities, uncertainties, in front
of everything. UDA1-7 is enough to get this. UDA-8 is needed only for
the more subtle immateriality point implied by computationalism.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-23 Thread Bruno Marchal


On 23 Feb 2012, at 06:42, Craig Weinberg wrote:


On Feb 22, 6:10 pm, Pierz  wrote:

'Yes doctor' is merely an establishment of the assumption of comp.
Saying yes means you are a computationalist. If you say no then you
are

not one, and one cannot proceed with the argument that follows -
though then the onus will be on you to explain *why* you don't  
believe

a computer can substitute for a brain.


That's what is circular. The question cheats by using the notion of a
bet to put the onus on us to take comp for granted in the first place
when there is no reason to presume that bets can exist in a universe
where comp is true. It's a loaded question, but in a sneaky way. It is
to say 'if you don't think the computer is happy, that's fine, but you
have to explain why'.


It is circular only if we said that "saying yes" was an argument for  
comp, which nobody claims.

I agree with Stathis and Pierz comment.

You do seem to have some difficulties in the understanding of what is
an assumption or a hypothesis.


We defend comp against invalid refutation; this does not mean that
we conclude that comp is true. It is our working hypothesis.







If you've said yes, then this
of course entails that you believe that 'free choice' and 'personal
value' (or the subjective experience of them) can be products of a
computer program, so there's no contradiction.


Right, so why ask the question? Why not just ask 'do you believe a
computer program can be happy'?


That a machine could think (the Strong AI thesis) does not entail comp
(that we are machines).
The fact that a computer program can be happy does not logically
entail that we are ourselves computer programs. Maybe angels and Gods
(non-machines) can be happy too. To sum up:


COMP implies STRONG-AI

but

STRONG-AI does not imply COMP.






When it is posed as a logical
consequence instead of a decision, it implicitly privileges the
passive voice. We are invited to believe that we have chosen to agree
to comp because there is a logical argument for it rather than an
arbitrary preference committed to in advance. It is persuasion by
rhetoric, not by science.


Nobody tries to advocate comp. We assume it. So if we get a  
contradiction we can abandon it. But we find only weirdness, even  
testable weirdness.







In fact the circularity
is in your reasoning. You are merely reasserting your assumption that
choice and personal value must be non-comp,


No, the scenario asserts that by relying on the device of choice and
personal value as the engine of the thought experiment. My objection
is not based on any prejudice against comp I may have, it is based on
the prejudice of the way the question is posed.


The question is used to give a quasi-operational definition of  
computationalism, by its acceptance of a digital brain transplant.  
This makes it possible to reason without solving the hard task of defining
consciousness or thinking. This belongs to the axiomatic method
usually favored by mathematicians.







but that is exactly what
is at issue in the yes doctor question. That is precisely what we're
betting on.


If we are betting on anything then we are in a universe which has not
been proved to be supported by comp alone.


That is exactly what we try to make precise enough so that it can be
tested. Up to now, comp is 'saved' by the quantum weirdness it implies
(MW, indeterminacy, non-locality, non-cloning), without mentioning the
candidates for consciousness, qualia, ... that is, the many things that
a machine can produce as 1p-true without any 3p-means to justify them.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: UD* and consciousness

2012-02-23 Thread Bruno Marchal


On 22 Feb 2012, at 23:07, Terren Suydam wrote:

On Wed, Feb 22, 2012 at 12:29 PM, Bruno Marchal   
wrote:


On 22 Feb 2012, at 15:49, Terren Suydam wrote:

Hey Bruno,

I seem to remember reading a while back that you were saying that  
the
1p consciousness arises necessarily from the many paths in the UD.  
I'm

glad to clear up my misunderstanding.



OK. What happens, if there is no flaw in the UDA-MGA, is that your  
futures
can only be determined by the statistics bearing on all  
computations going

through your state.

The 1p nature of that consciousness will rely on the logic of  
(machine)
knowledge (or other modalities), which put some structure on the  
set of

accessible computational states.

Sorry for being unclear, and for the many misspellings and other
grammatical-tense atrocities.

The problem is also related to the difficulty of the subject, which  
is
necessarily counter-intuitive (in the comp theory), so that we have  
some
trouble in using the natural language, which relies on natural  
"intuitive

prejudices".

In fact I can understand why it might look like I was saying that the 1p
needs the many computations. The reality is that one is enough, but the
other computations, 1p-indistinguishable, are there too, and even for a
slight interval of consciousness, we must take into account that we are in
all of them, for the correct statistics. So the 1p is attached to an
infinity of computations, once you "attach" it to just one computation.


Indeed, it is very counter intuitive and full of subtleties. I have
been lurking for a few years now and I am finding that only by
engaging with you and others on the list do I begin to comprehend the
subtleties.


Thanks for saying.






However I don't understand how Mary could have anything but a single
continuation given the determinism of the sim. How could a
counterfactual arise in this thought experiment? Can you give a
"concrete" example?



You should really find this by yourself, honestly. It is the only  
way to be

really convinced. Normally this follows from the reasoning.
Please ask if you don't find your "error".
Oh! I see Quentin found it.

Your mistake consists in believing that when you simulate your friend Mary
in the deterministic sim, completely closed, as you say, you have succeeded
in preventing Mary, from her own pov, from "escaping" your simulation. Her
1-indeterminacy remains unchanged, and bears on the many computations
existing by the + and * laws, or in the UD.

The counterfactuals, and the indeterminacy, come from the existence of an
infinity of computations generating Mary's state. Your deterministic sim can
be run a million times; it will not change Mary's indeterminacy,
relative to the infinities of diverging (infinite) computations going
through her 1-state.

You might also reason like this. The consciousness of Mary is only in Platonia. We have abandoned the idea that consciousness is related to any singular physical activity.


Here was the "aha!" moment. I get it now. Thanks to you and Quentin.
Even though I am well aware of the consequences of MGA, I was focusing
on the "physical activity" of the simulation because "I" was running
it.


Yes, that's why reasoning and logic are important. It is understandable that evolution could not have prepared us for the possibly true "big picture", nor for fundamental science, nor for quickly developing technologies. So it takes some effort to abstract ourselves from built-in prejudices. Nature, a bit like bandits, is opportunistic. At the same time we don't have to brush away that intuition, because it is real, and it has succeeded in bringing us here and now, and that has to be respected somehow too.
Note that the math confirms this misunderstanding between the heart/intuition/first-person/right-brain (modeled by Bp & p) and the scientist/reasoner/left-brain (modeled by Bp). The tension appears right at the start, when a self-aware substructure begins to differentiate itself from its neighborhood.
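For readers less familiar with the modal notation, here is a minimal gloss on why Bp and Bp & p come apart (this gloss is mine, not part of the thread; it assumes the standard reading of B as Gödel's provability predicate, i.e. Solovay's provability logic G):

```latex
% The "scientist" is the provability operator; the "knower" is its
% Theaetetical variant:
%   prover:  Bp                (read: "p is provable")
%   knower:  Kp := Bp \land p  (provable AND true)
% For a sound machine the two hold of the same sentences, yet G does
% not prove  Bp -> p  (by Loeb's theorem), so the machine itself
% cannot identify them: the first and third person views split.
\[
  Kp \;:=\; Bp \wedge p, \qquad G \nvdash\; Bp \rightarrow p .
\]
```

So the "tension" mentioned above is not a metaphor: the two hypostases obey different modal logics even though, seen from outside, they are true of the same arithmetical sentences.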






The fascinating thing for me is this: if instead of a scan of Mary, we run an AGI that embodies a cognitive architecture satisfying a theory of consciousness (the kind of theory that explains why a particular UM is conscious), then, assuming the theory, it entails that the AGI is conscious. The AGI will therefore have 1p indeterminacy even if the sim is deterministic, for the same reason Mary does: there is an infinity of divergent computational paths going through the AGI's 1p state in any given moment. Trippy!


Yeah. "Trippy" is the word.
Many people react to comp in a strikingly similar way to how numerous other people react to the very potent hallucinogen Salvia divinorum. People need a very sincere interest in the fundamentals to appreciate the consequences of comp, or to appreciate a potent dissociative hallucinogen.
I should not insist on this. Some would conclude we should make comp illegal. Like "thinking