Re: Adam and Eve’s Anthropic Superpowers

2018-11-25 Thread Philip Thrift


*Coq* seems to be a widely used proof assistant, with many published 
examples of its use.

A more recent proof system I've seen written about is *Imandra* 
[ https://www.imandra.ai/ ], which has been applied to the Monty Hall 
problem 
[ https://medium.com/imandra/reasoning-about-probabilities-in-reasonml-2b43bf01873e ].


As for theorem provers for modal logics, there are some references to them in

*Automating Gödel’s Ontological Proof of God’s Existence with Higher-order 
Automated Theorem Provers*
[ http://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf ]
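
The paper works by embedding the modal operators into classical 
higher-order logic as quantifiers over possible worlds. As a toy 
illustration of that embedding idea (my own minimal sketch in Python, 
with made-up worlds, accessibility relation, and valuation - not the 
paper's HOL encoding):

# Toy Kripke-semantics evaluator: box and diamond become quantifiers
# over the worlds accessible from the current one.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
valuation = {"p": {"w2", "w3"}}  # the atom p holds exactly in w2 and w3

def atom(name):
    return lambda w: w in valuation[name]

def box(phi):      # "necessarily phi": phi holds at every accessible world
    return lambda w: all(phi(v) for v in access[w])

def diamond(phi):  # "possibly phi": phi holds at some accessible world
    return lambda w: any(phi(v) for v in access[w])

p = atom("p")
print(box(p)("w1"))      # True: both worlds accessible from w1 satisfy p
print(diamond(p)("w3"))  # False: no world is accessible from w3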



- pt



On Sunday, November 25, 2018 at 12:12:18 PM UTC-6, Mark Buda wrote:

> What I did at the time was write a Monte Carlo simulation. I don't know 
> any of those languages well enough to do anything useful with them yet.
>
> I want to use one of the proof assistant languages for some of the AGI stuff 
> I am trying to work on, but I am not sure yet which one best suits my 
> needs. I don't know enough about them or my needs to decide yet. I am 
> leaning towards Coq, but if I find something that looks like it plays 
> better with modal logic that might sway me.
>
> I have to finish some poorly thought out OAuth2 code before I have enough 
> data sources working that I can turn my attention to how I think the system 
> I am trying to build should reason about those data sources.
>
> At that point, I will want to write statements about the data sources, and 
> will need to pick a language in which to write those statements.
> --
> Mark Buda
> I get my monkeys for nothing and my chimps for free.
>
>
> On Sun, Nov 25, 2018, 11:27 AM Philip Thrift wrote:
>
>>
>> On the other hand, some say you really don't understand something unless 
>> you can write a program that encodes that understanding.
>>
>> Can you encode your understanding of the Monty Hall problem in a 
>> "logical" (or proof assistant) language?
>>
>> - pt
>>
>> On Sunday, November 25, 2018 at 10:11:08 AM UTC-6, Mark Buda wrote:
>>>
>>> When presented with the Monty Hall problem, I could not understand it 
>>> without writing a program to help me. I guess that puts me in the good 
>>> company of Paul Erdos, according to Wikipedia...
>>> --
>>> Mark Buda 
>>> I get my monkeys for nothing and my chimps for free.
>>>
>>> On Sat, Nov 24, 2018, 6:58 PM John Clark wrote:
>>>
>>>> On Sat, Nov 24, 2018 at 5:01 PM Brent Meeker wrote:
>>>>

>>>>> *The best intuition pump to solve the Monty Hall problem is to 
>>>>> imagine that there are 100 doors and Monty opens all the doors except 
>>>>> the one you chose and one other... do you switch?*


>>>> 3 doors will do. If you follow the switch strategy the only way you 
>>>> would end up losing is if your original guess was correct, and there was 
>>>> only one chance in 3 of that, so if you switch you have 2 chances in 3 of 
>>>> winning. 
>>>>
>>>> John K Clark





Re: Adam and Eve’s Anthropic Superpowers

2018-11-25 Thread Mark Buda
What I did at the time was write a Monte Carlo simulation. I don't know any
of those languages well enough to do anything useful with them yet.
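
(A minimal Monte Carlo sketch of the Monty Hall game in Python - not the 
simulation I actually wrote, just an illustration of the approach:)

import random

def play(switch):
    """Play one round of Monty Hall; return True if the player gets the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the single remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print("switch =", switch, "win rate =", wins / trials)
# Prints roughly 0.333 for staying and 0.667 for switching.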

I want to use one of the proof assistant languages for some of the AGI stuff I
am trying to work on, but I am not sure yet which one best suits my needs.
I don't know enough about them or my needs to decide yet. I am leaning
towards Coq, but if I find something that looks like it plays better with
modal logic that might sway me.

I have to finish some poorly thought out OAuth2 code before I have enough
data sources working that I can turn my attention to how I think the system
I am trying to build should reason about those data sources.

At that point, I will want to write statements about the data sources, and
will need to pick a language in which to write those statements.
--
Mark Buda 
I get my monkeys for nothing and my chimps for free.


On Sun, Nov 25, 2018, 11:27 AM Philip Thrift wrote:
> On the other hand, some say you really don't understand something unless
> you can write a program that encodes that understanding.
>
> Can you encode your understanding of the Monty Hall problem in a "logical"
> (or proof assistant) language?
>
> - pt
>
> On Sunday, November 25, 2018 at 10:11:08 AM UTC-6, Mark Buda wrote:
>>
>> When presented with the Monty Hall problem, I could not understand it
>> without writing a program to help me. I guess that puts me in the good
>> company of Paul Erdos, according to Wikipedia...
>> --
>> Mark Buda 
>> I get my monkeys for nothing and my chimps for free.
>>
>> On Sat, Nov 24, 2018, 6:58 PM John Clark wrote:
>>
>>> On Sat, Nov 24, 2018 at 5:01 PM Brent Meeker wrote:
>>>
>>>> *The best intuition pump to solve the Monty Hall problem is to imagine
>>>> that there are 100 doors and Monty opens all the doors except the one you
>>>> chose and one other... do you switch?*
>>>
>>>
>>> 3 doors will do. If you follow the switch strategy the only way you
>>> would end up losing is if your original guess was correct, and there was
>>> only one chance in 3 of that, so if you switch you have 2 chances in 3 of
>>> winning.
>>>
>>> John K Clark
>>>



Re: Adam and Eve’s Anthropic Superpowers

2018-11-25 Thread Philip Thrift

On the other hand, some say you really don't understand something unless 
you can write a program that encodes that understanding.

Can you encode your understanding of the Monty Hall problem in a "logical" 
(or proof assistant) language?
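
(The game is small enough that you don't even need sampling: a program can 
check the claim by exhaustive case analysis over the finite sample space. 
A minimal sketch in Python - an exhaustive check standing in for a real 
proof-assistant encoding:)

from fractions import Fraction
from itertools import product

# All equally likely (car location, initial pick) pairs: 9 cases.
# Whichever legal door Monty opens, switching wins exactly when the
# initial pick missed the car, so nothing more need be enumerated.
doors = range(3)
cases = list(product(doors, doors))

stay = Fraction(sum(1 for car, pick in cases if pick == car), len(cases))
switch = Fraction(sum(1 for car, pick in cases if pick != car), len(cases))

assert stay == Fraction(1, 3) and switch == Fraction(2, 3)
print("P(win | stay) =", stay, "  P(win | switch) =", switch)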

- pt

On Sunday, November 25, 2018 at 10:11:08 AM UTC-6, Mark Buda wrote:
>
> When presented with the Monty Hall problem, I could not understand it 
> without writing a program to help me. I guess that puts me in the good 
> company of Paul Erdos, according to Wikipedia...
> --
> Mark Buda
> I get my monkeys for nothing and my chimps for free.
>
> On Sat, Nov 24, 2018, 6:58 PM John Clark wrote:
>
>> On Sat, Nov 24, 2018 at 5:01 PM Brent Meeker wrote:
>>
>>> *The best intuition pump to solve the Monty Hall problem is to imagine 
>>> that there are 100 doors and Monty opens all the doors except the one you 
>>> chose and one other... do you switch?*
>>
>>
>> 3 doors will do. If you follow the switch strategy the only way you would 
>> end up losing is if your original guess was correct, and there was only one 
>> chance in 3 of that, so if you switch you have 2 chances in 3 of winning. 
>>
>> John K Clark  
>>



Re: Adam and Eve’s Anthropic Superpowers

2018-11-25 Thread Mark Buda
When presented with the Monty Hall problem, I could not understand it
without writing a program to help me. I guess that puts me in the good
company of Paul Erdos, according to Wikipedia...
--
Mark Buda 
I get my monkeys for nothing and my chimps for free.

On Sat, Nov 24, 2018, 6:58 PM John Clark wrote:

> On Sat, Nov 24, 2018 at 5:01 PM Brent Meeker wrote:
>
>> *The best intuition pump to solve the Monty Hall problem is to imagine
>> that there are 100 doors and Monty opens all the doors except the one you
>> chose and one other... do you switch?*
>
>
> 3 doors will do. If you follow the switch strategy the only way you would
> end up losing is if your original guess was correct, and there was only one
> chance in 3 of that, so if you switch you have 2 chances in 3 of winning.
>
> John K Clark
>



Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-25 Thread John Clark
On Sun, Nov 25, 2018 at 4:40 AM Philip Thrift wrote:

Dennett said:

“*The elusive subjective conscious experience—the redness of red, the
painfulness of pain—that philosophers call qualia? Sheer illusion*.”

The trouble with the above statement isn't so much that it's false; the
trouble is that it's silly. In the first place, an illusion is a
misinterpretation of the senses, but pain is direct experience that needs
no interpretation. I would love to ask Mr. Dennett how things would be
different if pain were not an illusion. If he can't answer that, and I don't
think he could, then the statement "pain is an illusion" contains no
information.

And an illusion is itself a conscious phenomenon, so saying consciousness is
an illusion is just saying consciousness is consciousness, which, although
true, is not very illuminating. When discussing any philosophical issue the
word "illusion" should be used very cautiously. And if the topic involves
consciousness or qualia and silliness is to be avoided, the word "illusion"
should never be used at all, because it explains nothing.

John K Clark






Re: Towards Conscious AI Systems (a symposium at the AAAI Stanford Spring Symposium 2019)

2018-11-25 Thread Philip Thrift


On Saturday, November 24, 2018 at 5:10:49 PM UTC-6, Brent wrote:
>
>
>
> On 11/24/2018 5:39 AM, John Clark wrote:
>
>
> On Fri, Nov 23, 2018 at 1:10 PM Philip Thrift wrote:
>
>> *Some in AI will say if something is just informationally intelligent 
>> (or pseudo-intelligent) but not experientially intelligent then it will not 
>> ever be remarkably creative - in literature, music, painting, or even 
>> science.*
>>
>
> Apparently being remarkably creative is not required to be supremely good 
> at chess or Go or solving equations, because pseudo-intelligence will beat 
> true intelligence at those things every time. The goal posts keep moving: 
> true intelligence is whatever computers aren't good at. Yet. 
>  
>
>> *And it will not be conscious,*
>>
>
> My problem is that if the AI is smarter than me it will outsmart me, but if 
> the AI isn't conscious, that's the computer's problem, not mine. And 
> besides, I'll never know if the AI is conscious or not, just as I'll never 
> know if you are.
>
>
> The question is whether the AI will ever infer it is not conscious.  I 
> think Bruno correctly points out that this would be a contradiction. If it 
> can contemplate the question, it's conscious even though it can't prove it.
>
> Brent
>




As Galen Strawson points out, there are supposedly smart people around 
today - like Daniel Dennett - who don't believe they are conscious:

[ https://www.nybooks.com/daily/2018/03/13/the-consciousness-deniers/ ]

*Ned Block* [ https://en.wikipedia.org/wiki/Ned_Block ] *once remarked that 
Dennett’s attempt to fit consciousness or “qualia” into his theory of 
reality “has the relation to qualia that the US Air Force had to so many 
Vietnamese villages: he destroys qualia in order to save them.”*

*...*

*This is how philosophers in the twentieth century came to endorse the 
Denial, the silliest view ever held in the history of human thought. “When 
I squint just right,” Dennett writes in 2013, “it does sort of seem that 
consciousness must be something in addition to all the things it does for 
us and to us, some special private glow or here-I-am-ness that would be 
absent in any robot… But I’ve learned not to credit the hunch. I think it 
is a flat-out mistake, a failure of imagination.” His position was 
summarized in an interview in The New York Times: “The elusive subjective 
conscious experience—the redness of red, the painfulness of pain—that 
philosophers call qualia? Sheer illusion.” If he’s right, no one has ever 
really suffered, in spite of agonizing diseases, mental illness, murder, 
rape, famine, slavery, bereavement, torture, and genocide. And no one has 
ever caused anyone else pain.*

*This is the Great Silliness. We must hope that it doesn’t spread outside 
the academy, or convince some future information technologist or roboticist 
who has great power over our lives.*


- pt
