Re: Adam and Eve’s Anthropic Superpowers

2018-11-24 Thread John Clark
On Sat, Nov 24, 2018 at 5:01 PM Brent Meeker  wrote:

*> The best intuition pump to solve the Monty Hall problem is to imagine
> that there are 100 doors and Monty opens all the doors except the one you
> chose and one other... do you switch?*


3 doors will do. If you follow the switch strategy, the only way you end
up losing is if your original guess was correct, and there was only one
chance in 3 of that; so if you switch you have 2 chances in 3 of winning.
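
A minimal Monte Carlo sketch of this argument in Python (assuming the standard
rules: the prize door is uniformly random and the host always opens a losing
door other than the player's pick):

import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one game; return True if the player ends up with the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the single remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(f"switch-strategy win rate: {wins / trials:.3f}")

The win rate settles near 0.667, matching the 2-in-3 figure.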

John K Clark





Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread John Clark
On Sat, Nov 24, 2018 at 6:10 PM Brent Meeker  wrote:

*>The question is whether the AI will ever infer it is not conscious. *


Perhaps reverse solipsism is true, maybe what I think of as consciousness
is just a very pale reflection of the true glorious feeling of
consciousness that you and everybody else except me feels. Comparing my
consciousness to yours may be like comparing a firefly to a supernova;
maybe I'm the only human that is not conscious. Or maybe it's the other way
around and regular old solipsism is true, or maybe we're equally conscious;
the only thing I know for sure is that I'll never know.

And that's why discussions about Artificial Intelligence are so much more
interesting than discussions about Artificial Consciousness.

 John K Clark







Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread John Clark
On Sat, Nov 24, 2018 at 3:14 PM Philip Thrift  wrote:

> I think one problem for us, as artificial/synthetic intelligence
> technology advances, is: when (if ever) do these entities get "rights"?
>

There is no point in pondering that because the question is moot. The big
unknown is not what rights we'll end up giving to machines but what rights
(if any) machines will end up giving us.

John K Clark








Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread Brent Meeker



On 11/24/2018 5:39 AM, John Clark wrote:


On Fri, Nov 23, 2018 at 1:10 PM Philip Thrift wrote:


/> Some in AI will say if something is just informationally
intelligent (or pseudo-intelligent) but not experientially
intelligent then it will not ever be remarkably creative - in
literature, music, painting, or even science./


Apparently being remarkably creative is not required to be supremely 
good at Chess or GO or solving equations because pseudo-intelligence 
will beat true-intelligence at those things every time. The goal posts 
keep moving, true intelligence is whatever computers aren't good at. Yet.


> And it will not be conscious,


My problem is if the AI is smarter than me it will outsmart me, but if 
the AI isn't conscious that's the computer's problem, not mine. And 
besides, I'll never know if the AI is conscious or not just as I'll 
never know if you are.


The question is whether the AI will ever infer it is not conscious. I 
think Bruno correctly points out that this would be a contradiction. If 
it can contemplate the question, it's conscious even though it can't 
prove it.


Brent



Re: Adam and Eve’s Anthropic Superpowers

2018-11-24 Thread Brent Meeker



On 11/24/2018 1:53 AM, Philip Thrift wrote:



On Friday, November 23, 2018 at 4:11:26 PM UTC-6, Mason Green wrote:

Hi everyone,

I found an interesting blog post that attempts to refute the
Doomsday Argument. It suggests that different worlds ought to be
weighted by the number of people in them, so that you should be
more likely to find yourself in a world where there will be many
humans, as opposed to just a few. This would cancel out the
unlikeliness of finding yourself among the first humans in such a
world.

I’m curious as to what the contributors here think. (I’m new here,
I found out about this list through Russell’s Theory of Nothing
book).

https://risingentropy.com/2018/09/06/adam-and-eves-anthropic-superpowers/



-Mason



Without examining the theoretical details of this (or any) 
probabilistic argument (including Bayesian ones), one general approach 
is this: The theory may all be correct of course (given accepted 
assumptions), but it's ultimately convincing when results are compared 
to Monte Carlo computer experiments. (If you don't "trust" 
your software's random numbers, then you can get some from [ 
https://www.fourmilab.ch/hotbits/secure_generate.html ]).


Say in the case of "In front of you is a jar. This jar contains either 
10 balls or 100 balls. The balls are numbered in order from 1 to 
either 10 or 100." Then you write a program that randomly creates 
either a 10-ball jar with probability 0.50 (or any p) or a 100-ball jar 
with probability 0.50 (or 1-p) and then picks a ball at random. You run 
this 10,000 times (or whatever) and just get statistics.


You can do this for the Monty Hall problem - which has the irony that 
Monte Carlo "solves" the Monty Hall problem!


The best intuition pump to solve the Monty Hall problem is to imagine 
that there are 100 doors and Monty opens all the doors except the one 
you chose and one other... do you switch?
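
The same simulation extends directly to this 100-door version; a short
sketch (the code is illustrative, not from the thread):

import random

def n_door_trial(n_doors: int, switch: bool) -> bool:
    """Monty opens every door except the player's pick and one other."""
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The one door left closed besides the pick: the prize door if the first
    # pick was wrong, otherwise a randomly chosen losing door.
    other = prize if prize != pick else random.choice(
        [d for d in range(n_doors) if d != pick])
    return (other if switch else pick) == prize

trials = 100_000
for switch in (True, False):
    wins = sum(n_door_trial(100, switch) for _ in range(trials))
    print(f"100 doors, switch={switch}: win rate {wins / trials:.3f}")

Switching wins about 99% of the time and sticking about 1%, which is the
point of the intuition pump.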


Brent



Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread Philip Thrift


On Saturday, November 24, 2018 at 7:40:26 AM UTC-6, John Clark wrote:
>
>
> On Fri, Nov 23, 2018 at 1:10 PM Philip Thrift wrote:
>
> *> Some in AI will say if something is just informationally intelligent 
>> (or pseudo-intelligent) but not experientially intelligent then it will not 
>> ever be remarkably creative - in literature, music, painting, or even 
>> science.*
>>
>
> Apparently being remarkably creative is not required to be supremely good 
> at Chess or GO or solving equations because pseudo-intelligence will beat 
> true-intelligence at those things every time. The goal posts keep moving, 
> true intelligence is whatever computers aren't good at. Yet. 
>  
>
>> > And it will not be conscious,
>>
>
> My problem is if the AI is smarter than me it will outsmart me, but if the 
> AI isn't conscious that's the computer's problem, not mine. And besides, 
> I'll never know if the AI is conscious or not just as I'll never know if 
> you are.
>  
>
>> >*as all humans are.*
>>
>
> Most humans are NOT remarkably creative in literature, music, painting or 
> science; so why do you think all humans are conscious?
>
> John K Clark

I just happened to turn on the COMET TV channel, which was showing Dr. Goldfoot (Vincent 
Price) movies (Dr. Goldfoot and the Bikini Machine, Dr. Goldfoot and the 
Girl Bombs).

*Price plays the titular mad scientist who ... builds a gang of female 
robots.*

I think one problem for us, as artificial/synthetic intelligence 
technology advances, is: when (if ever) do these entities get "rights"?

- pt

 





Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread John Clark
On Sat, Nov 24, 2018 at 11:40 AM Quentin Anciaux  wrote:

*> Strangely you're not as hard with yourself when you advertise
> manyworlds... Just show us a parallel universe then... Until you apply
> your own methods to your own beliefs, it will just be dismissive BS.*
>

I can't show you a parallel universe and manyworlds may indeed be BS, but I
can show you weird stuff at the quantum level; manyworlds can explain it
but other things can too. I think manyworlds is slightly less weird than
the other explanations, but I admit that's just my subjective opinion and I
don't really know if manyworlds, Copenhagen or pilot waves is correct;
perhaps none of them are. Right now manyworlds fits the facts as well as any
other quantum interpretation; if a new one comes along that fits the facts
better I'll abandon manyworlds in a heartbeat.

 John K Clark



Re: Measuring a system in a superposition of states vs in a mixed state.

2018-11-24 Thread John Clark
On Sat, Nov 24, 2018 at 8:08 AM  wrote:

*> Why do you make this gratuitous point, and on a regular basis, when you
> habitually indicate which theories you like or don't like?*


As I've said before, I have no loyalty: if I find an idea doesn't fit the
facts then, regardless of my previous infatuation, I abandon it; and if it
does fit the facts then I learn to like it and continue to like it until I
find an idea that fits the facts even better.

 John K Clark



Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread Quentin Anciaux
On Sat, Nov 24, 2018 at 5:28 PM John Clark wrote:

> On Fri, Nov 23, 2018 at 8:31 AM Bruno Marchal  wrote:
>
>
>> > *in a precise context, when doing science/mathematics, it is useful to
>> have precise mathematical definition.*
>>
>
> Sure definitions can be useful but they never cause things to pop into
> existence or can tell you anything about the nature of science or
> mathematics, all they tell you is what the sound some human beings make
> with their mouth or the squiggles they draw with their hands represent,
> something that may or may not be part of reality.
>
> *> You define computation through an ontological commitment.*
>>
>
> My commitment is with the scientific method, so when you make outlandish
> claims (*matter is not needed to make calculations, Robinson arithmetic
> alone can do so,  Kleene’s predicate T(x, y, z) can encode information*)
> I ask you to actually do so.
>

Strangely you're not as hard with yourself when you advertise manyworlds...
Just show us a parallel universe then... Until you apply your own methods
to your own beliefs, it will just be dismissive BS.

I don't ask you to tell me about it, anybody can spin a tale in the English
> language or the Mathematical language, I ask you to actually make a
> calculation or encode some information without using matter that obeys the
> laws of physics. I don't want more squiggles made of ink I want you to
> perform an experiment that can be repeated.  I'm not being unreasonable in
> my request, I'm just asking you to be scientific.  If you can successfully
> do all that I'll do a 180, my opinion of your work will change radically
> because I have no loyalty or sentimentality, if an idea doesn't work I
> reject it if it does work I embrace it until I find something that works
> even better.
>
>
>> > *That is not the standard way to proceed in this field,*
>>
>
> True, that's not the way things are done in the Junk Science field, Voodoo
> priests would not approve at all.
>
> >>Definitions do not change reality and you're never going to discover
>>> anything new just by making definitions.
>>
>>
>> > *Any formal or mathematical definition will do,*
>>
>
> Will do what? Change reality?
>
> *>That all computations are executed in arithmetic is just a standard fact
>> known since 1931-1936. *
>>
>
> And it has also been known that arithmetic can only be performed by matter
> that obeys the laws of physics.
>
>
>> > *That simply cannot work, unless you are right about the non existence
>> of the first person indeterminacy, *
>>
>
> First person indeterminacy? Oh yes, the idea that you can't always be
> certain what will happen next. I believe that monumental discovery was made
> by the great thinker and philosopher Og The Caveman.
>
>
>> >>We've observed experimentally that a change in matter changes
>>> consciousness and a change in consciousness changes matter, I don't see how
>>> you could get better evidence than that indicating matter and consciousness
>>> are related.
>>
>>
>> *> In a video games, you can also have such relations,*
>>
>
> Yes, so what?
>
> *> them being processed in the physical reality, or in a brain in a vat,
>> or in arithmetic, the same effect can take place,*
>
>
> A brain in a vat is part of physical reality and so is a brain in a bone
> box atop your shoulders. And forget video games, arithmetic can't even
> calculate 2+2 any more than the English word "cat" can have kittens because a
> language by itself can't do anything.
>
> >>Turing showed that matter can make any computation that can be
>>> computed, what more do you need.
>>
>>
>
> *> Sure,*
>>
>
> I'm glad we agree on something.
>
> *> but we talk on primary matter, and it is this one that you have to
>> explain the role in consciousness,*
>>
>
> To hell with consciousness! Turing explained how matter can behave
> intelligently, and Darwin explained how  natural selection and random
> mutation can produce an animal that behaves intelligently, and I know that
> I am conscious, and I know I am the product of Evolution. If consciousness
> is a brute fact, if consciousness is the inevitable byproduct of
> intelligence, as I think it must be, then there is nothing more of interest
> to be said about it, certainly nobody on this list has said anything of
> more significance about consciousness since I joined the list.
>
> >> You've got it backwards. Again. Turing proved that matter can do
>>> mathematics he did NOT prove that mathematics can do matter,
>>
>>
>> *> Yes, that is my result,*
>>
>
> If you agree with Turing that matter can do mathematics but mathematics
> can NOT do matter then you must also agree that physics is more fundamental
> than mathematics.
>
>
> > in arithmetic there are infinitely many processes that we cannot
>> predict in advance.
>>
>
> True, but how in the world does that weakness support your claim that
> mathematics tells physics what to do and thus is at the foundation of
> reality when mathematics doesn't know what matter is going to 

Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread John Clark
On Fri, Nov 23, 2018 at 8:31 AM Bruno Marchal  wrote:


> > *in a precise context, when doing science/mathematics, it is useful to
> have precise mathematical definition.*
>

Sure definitions can be useful but they never cause things to pop into
existence or can tell you anything about the nature of science or
mathematics, all they tell you is what the sound some human beings make
with their mouth or the squiggles they draw with their hands represent,
something that may or may not be part of reality.

*> You define computation through an ontological commitment.*
>

My commitment is with the scientific method, so when you make outlandish
claims (*matter is not needed to make calculations, Robinson arithmetic alone
can do so,  Kleene’s predicate T(x, y, z) can encode information*) I ask
you to actually do so. I don't ask you to tell me about it, anybody can
spin a tale in the English language or the Mathematical language, I ask you
to actually make a calculation or encode some information without using
matter that obeys the laws of physics. I don't want more squiggles made of
ink I want you to perform an experiment that can be repeated.  I'm not being
unreasonable in my request, I'm just asking you to be scientific.  If you
can successfully do all that I'll do a 180, my opinion of your work will
change radically because I have no loyalty or sentimentality, if an idea
doesn't work I reject it if it does work I embrace it until I find
something that works even better.


> > *That is not the standard way to proceed in this field,*
>

True, that's not the way things are done in the Junk Science field, Voodoo
priests would not approve at all.

>>Definitions do not change reality and you're never going to discover
>> anything new just by making definitions.
>
>
> > *Any formal or mathematical definition will do,*
>

Will do what? Change reality?

*>That all computations are executed in arithmetic is just a standard fact
> known since 1931-1936. *
>

And it has also been known that arithmetic can only be performed by matter
that obeys the laws of physics.


> > *That simply cannot work, unless you are right about the non existence
> of the first person indeterminacy, *
>

First person indeterminacy? Oh yes, the idea that you can't always be
certain what will happen next. I believe that monumental discovery was made
by the great thinker and philosopher Og The Caveman.


> >>We've observed experimentally that a change in matter changes
>> consciousness and a change in consciousness changes matter, I don't see how
>> you could get better evidence than that indicating matter and consciousness
>> are related.
>
>
> *> In a video games, you can also have such relations,*
>

Yes, so what?

*> them being processed in the physical reality, or in a brain in a vat, or
> in arithmetic, the same effect can take place,*


A brain in a vat is part of physical reality and so is a brain in a bone
box atop your shoulders. And forget video games, arithmetic can't even
calculate 2+2 any more than the English word "cat" can have kittens because a
language by itself can't do anything.

>>Turing showed that matter can make any computation that can be computed,
>> what more do you need.
>
>

*> Sure,*
>

I'm glad we agree on something.

*> but we talk on primary matter, and it is this one that you have to
> explain the role in consciousness,*
>

To hell with consciousness! Turing explained how matter can behave
intelligently, and Darwin explained how  natural selection and random
mutation can produce an animal that behaves intelligently, and I know that
I am conscious, and I know I am the product of Evolution. If consciousness
is a brute fact, if consciousness is the inevitable byproduct of
intelligence, as I think it must be, then there is nothing more of interest
to be said about it, certainly nobody on this list has said anything of
more significance about consciousness since I joined the list.

>> You've got it backwards. Again. Turing proved that matter can do
>> mathematics he did NOT prove that mathematics can do matter,
>
>
> *> Yes, that is my result,*
>

If you agree with Turing that matter can do mathematics but mathematics can
NOT do matter then you must also agree that physics is more fundamental
than mathematics.


> > in arithmetic there are infinitely many processes that we cannot predict
> in advance.
>

True, but how in the world does that weakness support your claim that
mathematics tells physics what to do and thus is at the foundation of
reality when mathematics doesn't know what matter is going to do even
though matter always ends up doing something?

>> Neither Mathematics or English or any other language will ever be Turing
>> universal, but matter is not a language and we've known since 1936 that it
>> is Turing universal.
>
>
> >*You insist confusing the language of mathematics and the object talked
> about using that language.*
>

It was you not me that insisted Robinson arithmetic alone can make
calculations and "T(x, y, 

Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-24 Thread John Clark
On Fri, Nov 23, 2018 at 1:10 PM Philip Thrift  wrote:

*> Some in AI will say if something is just informationally intelligent (or
> pseudo-intelligent) but not experientially intelligent then it will not
> ever be remarkably creative - in literature, music, painting, or even
> science.*
>

Apparently being remarkably creative is not required to be supremely good
at Chess or GO or solving equations because pseudo-intelligence will beat
true-intelligence at those things every time. The goal posts keep moving,
true intelligence is whatever computers aren't good at. Yet.


> > And it will not be conscious,
>

My problem is if the AI is smarter than me it will outsmart me, but if the
AI isn't conscious that's the computer's problem, not mine. And besides, I'll
never know if the AI is conscious or not just as I'll never know if you are.


> >*as all humans are.*
>

Most humans are NOT remarkably creative in literature, music, painting or
science; so why do you think all humans are conscious?

John K Clark



Re: Measuring a system in a superposition of states vs in a mixed state.

2018-11-24 Thread agrayson2000


On Friday, November 23, 2018 at 5:29:05 PM UTC, John Clark wrote:
>
>
> agrays...@gmail.com 
>
> *>** So Feynman adds this additional hypothesis to QM. Is this kosher?*
>
>  
> It had better be kosher because it works!
>
> *> Introducing an infinity of universes seems extraneous and confusing 
>> for a solution to this problem. AG*
>
>  
> Far from being extraneous Feynman's method is the easiest way to make a 
> calculation in Quantum Electrodynamics, a calculation that would take weeks 
> or months using other methods can be done with pencil and paper in just a 
> few hours doing it Feynman's way. Feynman said the magnetic moment of an 
> electron can't be exactly 1 as had been previously thought, he calculated 
> it to be 1.00115965246, while the best experimental value that was found 
> much later is   1.00115965221. That's like measuring the distance between 
> Los Angeles and New York to the thickness of a human hair.  This is the 
> most accurate prediction in all of science, Feynman must have been doing 
> something right
>
> *> I don't like this approach -- in fact I abhor it*
>
>
> The Universe likes it, and its likes and dislikes are far more important 
> than yours.
>

*Why do you make this gratuitous point, and on a regular basis, when you 
habitually indicate which theories you like or don't like? Does the 
universe care what theories you like? I doubt it cares, or can care. Some 
interpretations are more useful than others for calculating purposes; that 
doesn't imply they have greater ontological value. AG *

>
>  John K Clark 
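
A quick back-of-the-envelope check of the precision claim quoted above, as a
Python sketch (the Los Angeles-to-New York distance is an assumed round
figure, not something given in the thread):

predicted = 1.00115965246   # the calculated value quoted above
measured = 1.00115965221    # the experimental value quoted above

rel_diff = abs(predicted - measured) / measured
print(f"relative difference: {rel_diff:.1e}")   # about 2.5e-10

la_to_ny_km = 3_940   # assumed great-circle distance, Los Angeles to New York
print(f"over {la_to_ny_km} km that is about {rel_diff * la_to_ny_km * 1e6:.1f} mm")

A relative difference of a few parts in 10^10 corresponds to roughly a
millimetre over that distance.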



Re: Adam and Eve’s Anthropic Superpowers

2018-11-24 Thread Philip Thrift


On Friday, November 23, 2018 at 4:11:26 PM UTC-6, Mason Green wrote:
>
> Hi everyone, 
>
> I found an interesting blog post that attempts to refute the Doomsday 
> Argument. It suggests that different worlds ought to be weighted by the 
> number of people in them, so that you should be more likely to find 
> yourself in a world where there will be many humans, as opposed to just a 
> few. This would cancel out the unlikeliness of finding yourself among the 
> first humans in such a world. 
>
> I’m curious as to what the contributors here think. (I’m new here, I found 
> out about this list through Russell’s Theory of Nothing book). 
>
> https://risingentropy.com/2018/09/06/adam-and-eves-anthropic-superpowers/ 
>
> -Mason



Without examining the theoretical details of this (or any) probabilistic 
argument (including Bayesian ones), one general approach is this: The 
theory may all be correct of course (given accepted assumptions), but it's 
ultimately convincing when results are compared to Monte Carlo computer 
experiments. (If you don't "trust" your software's random 
numbers, then you can get some from [ 
https://www.fourmilab.ch/hotbits/secure_generate.html ]).

Say in the case of "In front of you is a jar. This jar contains either 10 
balls or 100 balls. The balls are numbered in order from 1 to either 10 or 
100." Then you you write a program that randomly creates either a 
10ball-jar with probability 0.50 (or any p) or a 100ball-jar with 
probability 0.50 (or 1-p) and then pick a ball at random. You run this 
10,000 times (or whatever) and just get statistics.
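
A minimal sketch of such a program in Python (the statistic tabulated at the
end, the frequency with which a low-numbered ball came from the small jar, is
one natural choice rather than anything specified above):

import random

def jar_trial(p_small: float = 0.5):
    """Build a 10-ball or 100-ball jar, then draw one ball at random."""
    size = 10 if random.random() < p_small else 100
    return size, random.randint(1, size)   # (jar size, ball number)

trials = 10_000
draws = [jar_trial() for _ in range(trials)]

# Among draws whose ball number is 10 or less, how often did the ball come
# from the 10-ball jar?  Bayes gives 0.5 / (0.5 + 0.5 * 0.1) = 10/11 ~ 0.909.
low = [size for size, ball in draws if ball <= 10]
print(f"P(10-ball jar | ball <= 10) is roughly {low.count(10) / len(low):.3f}")

The empirical frequency settles near the Bayesian posterior of about 0.91,
and the same harness works for any p or jar sizes.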

You can do this for the Monty Hall problem - which has the irony that Monte 
Carlo "solves" the Monty Hall problem!

- pt
 
