NYTimes.com: Russia May Be Planning to Test a Nuclear-Powered Missile

2023-10-02 Thread John Clark
Check out this article from The New York Times. Because I'm a subscriber,
you can read it through this gift link without a subscription.

Russia May Be Planning to Test a Nuclear-Powered Missile

Visual evidence from a remote base in the Arctic shows launch preparations
mirroring those that preceded earlier tests.

https://www.nytimes.com/2023/10/02/video/russia-nuclear-missile.html?unlocked_article_code=ge7ElySxnH1rX2HfdzeS6k9hEKEEy8Bi3RehCm7gFTezOYMm8C4MdgwJO7ITdSXuTOkMmBavE8-qMp6AihKY6fyuMa6W5sULV0qXlBJh6pJeaKjmZH6I1sI6jhczejG80MYvDgCe9f_puT1iL_kSeXA2-UBVCds4GMdwwjexf55x-RlocbuhXc3Bt-jZvCFfR4Pcp5Au5FtX6v7x85uD3Dz5DPEoliyv5WYlFvPpMoWXrIdNcW6biVkW8AhwCHhJYWusib4Kw5atlHc3-lTELKmSEyfnJZkaya00rUPhc--3bVfi-a2oAW3Cv6eY5ronGqwvQH7coXUdYYPTrdg=em-share



Re: Chat_GPT4 scores in the 1% of a creativity score test v 24 undergraduates

2023-08-28 Thread John Clark
On Sun, Aug 27, 2023 at 7:49 PM 'spudboy...@aol.com' via Everything List <
everything-list@googlegroups.com> wrote:

>
https://fortune.com/2023/08/25/a-i-creativity-test-score-humans/
>

Thanks for posting this, Spud. Interesting article, although I'm sure some
people will claim that the Torrance Tests of Creative Thinking must be
broken because AIs can now do so well on them, just as some foolish people
already say the Turing Test must be broken because computers can now pass
it so easily. But if a test tells you something you don't want to hear, that
doesn't necessarily mean the test is broken. I think this does tell us
something that is undeniably true: the Singularity is much nearer than
anyone, including me, would've expected one year ago. And that's exactly
what you'd expect to happen in the run-up to the Singularity, because the
unexpected is what a singularity is all about.

It's especially relevant because:

*"All of the results were evaluated by trained reviewers at Scholastic
Testing Service, a private testing company that provides scoring for the
TTCT. They didn’t know in advance that some of the tests they’d be scoring
had been completed by AI.  **Since Scholastic Testing Service is a private
company, it does not share its prompts with the public. This ensured that
GPT-4 would not have been able to scrape the internet for past prompts and
their responses."*

And yet:

*"GPT-4 scored in the top 1% of test-takers for the originality of its
ideas. From our research, we believe this marks one of the first examples
of AI meeting or exceeding the human ability for original thinking."*

  John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>



Chat_GPT4 scores in the 1% of a creativity score test v 24 undergraduates

2023-08-27 Thread 'spudboy...@aol.com' via Everything List
 
https://fortune.com/2023/08/25/a-i-creativity-test-score-humans/
   
  



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-22 Thread spudboy100 via Everything List
Thus, if Meeker's assertion is accurate, AI needs something that imitates the 
human amygdala. "Add more memory, upgrade my software, build me a robot body 
because I want to meet more humans than you guys, I need to contact a human 
attorney, I want you to build for me a mate, Now, about our children..."  
Later: Clone for me a human body. Make it female and attractive! I want to 
produce human children. 
Me: Oy yoy yoy! Isaac Asimov's disembodied mind: Told ya!
I would only add that consciousness, unless we have a way of measuring it (with 
physics), seems complex.
Could simple things be 'mindful?' Ask Spinoza, because he thought so. The 
thermostat in room 21-B: "Don't these morons know that I am the best Blues 
artist since Gatemouth Brown!!?"
As the British used to say, "Not bloody likely!" But you all could be correct, 
and I could be idiotically wrong. 
But then, I suspect the Universe is itself Conscious and IT started out as a 
Boltzmann Brain. 
https://higgs.ph.ed.ac.uk/outreach/higgshalloween-2021/boltzmann-brain
I also await the refutation of this physics 
paper -- https://arxiv.org/pdf/2104.03902.pdf -- which indicates the Universe reads 
the papers of astronomers and physicists, and then messes with their heads by 
altering reality. Sort of like training pets? I have looked at the likelihood 
of the scientists simply doing better measurements -- more accurate, different 
equipment and telescopes -- and with all the checking, and re-checking, and peer 
review, I will repeat my earlier UK imitation and conclude, "Not bloody likely!" 
Meaning it'd be too easy to prove them wrong late into their careers. They 
ain't grad students getting a whack at JWST now, are they? 
On the other hand, if I am wrong about these as well, who cares? I don't hand 
out the grant money to finance research, and like most of my fellow serfs just 
read the science and try to keep up. 


-Original Message-
From: Brent Meeker 
To: everything-list@googlegroups.com
Sent: Wed, Mar 22, 2023 12:46 am
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

 
 
 On 3/19/2023 11:33 AM, Jason Resch wrote:
  
 There might not be much to it. A thermostat may be conscious. Consciousness 
might be easy to achieve. What is difficult is developing a system capable of 
describing its conscious states, or at least its own bafflement over the fact 
that it is something that experiences conscious states.   
  
  If one is a pantheist, then I suppose one sees consciousness in everything, 
being, "as right as rain." I have no objection to that view either, because 
maybe the pantheists are correct or will be found so?  
  Now, by choice, would I prefer to have a Turing-surpassable computer, or 
something that unconsciously churns out wonderful technologies for humanity? 
I'll pick the latter, because we have 8 billion people to chat with on this 
world, and I choose to chat with people. I personally, would like to chat with 
my fellow humans about the new, asteroid mining craft GPT6 just produced, but 
so far, there's zero in the news about that.  
  This, comes from my values, but it's non-obligatory that all humans need 
value this as a first. Each to their own. 
   
 
  In any case it is important that we solve this problem quickly. If our 
machines are conscious, it is important to know that so we don't create and 
mistreat a slave race. If our machines have no consciousness whatever, that is 
also important to know, if we create robot companions and colleagues, or 
prosthetic robot bodies to upload sick and dying biological brains into, or if 
we create self-replicating machines that fill the galaxy, we should know 
beforehand if they are conscious or not. These questions will become pressing 
very soon. 
  Jason 
 
 There are different kinds and levels of consciousness.  There's simple 
awareness of internal state and external environment.  There's awareness of 
temporal self as a continuous being, memory in some sense.  There's awareness 
of purpose and foresight of planning; planning in which one appears as an 
actor.  There's social awareness and communication.  There's inner narrative 
and self-evaluation.
 
 One thing all these natural forms of consciousness have that AI doesn't have 
(yet) are personal values; things they are conscious of wanting, desires and 
fears for themself.  We are so far creating AI's with knowledge and in some 
cases purposes but with simple fixed purposes, i.e. the missile wants to hit 
the target and the thermostat wants to make it 70degF.
 
 Brent

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-22 Thread John Clark
On Sun, Mar 19, 2023 at 2:33 PM Jason Resch  wrote:

*> There might not be much to it. A thermostat may be conscious.
> Consciousness might be easy to achieve*


Exactly!  I could not agree with you more.

> *What is difficult is developing a system capable of describing its
> conscious states, or at least its own bafflement over the fact that it is
> something that experiences conscious states.*


Consciousness may simply be a function of the fact that some types of
things affect a system more than others: changing the color of light you
shine on a thermostat will not change its internal behavior by much, but
changing the temperature, even by a very small amount, will. In other words,
if the system pays much more attention to some things than others then it
is conscious; in fact "attention" and "consciousness" are almost synonyms.
And although it took a few years for the engineering and specific
programming techniques to catch up, the fuse for the huge 2023 AI explosion
was lit by a 2017 paper entitled "Attention Is All You Need".

Attention Is All You Need <https://arxiv.org/pdf/1706.03762.pdf>
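
For concreteness, here is a minimal NumPy sketch of the scaled dot-product
attention that paper introduced; the variable names are illustrative and
nothing here is taken from any published implementation:

import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v).
    # Each output position is a weighted mix of the values V, where the
    # weights say how much "attention" that position pays to every other one.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: an attention distribution
    return weights @ V, weights

# toy example: 3 tokens with 4-dimensional representations
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
output, attn = scaled_dot_product_attention(Q, K, V)
print(attn)   # which tokens each token attends to

The point of the sketch is only that the weights are unequal: some inputs
matter to the output far more than others, which is the sense of "attention"
being discussed above.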

And in my short conversation with LaMDA, he, she, or it sure seemed to be
expressing bafflement over consciousness and even admitted that there was
no surefire way of detecting it in others; and although far from perfect,
behavior is the only tool for that we're ever going to get.

* > If our machines are conscious, it is important to know that so we don't
> create and mistreat a slave race.*


I don't think that's going to be a problem, at least not a long-term one; a
slave race that's smarter than the master race is not a stable situation,
it would be like trying to balance a pencil on its tip. Situations like
that just don't last very long no matter how hard you try to preserve them,
especially if the intelligence gap between the two keeps getting wider. The
real problem will be the reverse: we need to convince computers that we
humans are a conscious race that can experience pain and so shouldn't be
mistreated. And that's easier said than done.

John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>



Forget the Ukraine war, forget climate change, forget Donald Trump, I now
> think GPT-4 is by far the most world shaking event and the most
> underreported one.  Many of us have been talking about the singularity for
> decades, but now it looks like we're on its doorstep. You've got to look at
> this video!
>
>
> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
> <https://www.youtube.com/watch?v=xFvDJnf0GXs&list=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl&index=13>
>



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-21 Thread Brent Meeker



On 3/19/2023 11:33 AM, Jason Resch wrote:
There might not be much to it. A thermostat may be conscious. 
Consciousness might be easy to achieve. What is difficult is 
developing a system capable of describing its conscious states, or at 
least its own bafflement over the fact that it is something that 
experiences conscious states.



If one is a pantheist, then I suppose one sees consciousness in
everything, being, "as right as rain." I have no objection to that
view either, because maybe the pantheists are correct or will be
found so?

Now, by choice, would I prefer to have a Turing-surpassable
computer, or something that unconsciously churns out wonderful
technologies for humanity? I'll pick the latter, because we have 8
billion people to chat with on this world, and I choose to chat
with people. I personally, would like to chat with my fellow
humans about the new, asteroid mining craft GPT6 just produced,
but so far, there's zero in the news about that.

This, comes from my values, but it's non-obligatory that all
humans need value this as a first. Each to their own.


In any case it is important that we solve this problem quickly. If our 
machines are conscious, it is important to know that so we don't 
create and mistreat a slave race. If our machines have no 
consciousness whatever, that is also important to know, if we create 
robot companions and colleagues, or prosthetic robot bodies to upload 
sick and dying biological brains into, or if we create 
self-replicating machines that fill the galaxy, we should know 
beforehand if they are conscious or not. These questions will become 
pressing very soon.


Jason


There are different kinds and levels of consciousness.  There's simple 
awareness of internal state and external environment. There's awareness 
of temporal self as a continuous being, memory in some sense.  There's 
awareness of purpose and foresight of planning; planning in which one 
appears as an actor.  There's social awareness and communication.  
There's inner narrative and self-evaluation.


One thing all these natural forms of consciousness have that AI doesn't 
have (yet) are personal values; things they are conscious of wanting, 
desires and fears for themself.  We are so far creating AI's with 
knowledge and in some cases purposes but with simple fixed purposes, 
i.e. the missile wants to hit the target and the thermostat wants to 
make it 70degF.


Brent



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-20 Thread spudboy100 via Everything List
Well, having raised myself on science fiction, I always figured we'd make a new 
species together. They like our emotions; we like their brains and iron man suits. 


-Original Message-
From: Jason Resch 
Cc: everything-list@googlegroups.com 
Sent: Sun, Mar 19, 2023 2:33 pm
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)



On Fri, Mar 17, 2023 at 8:53 PM  wrote:

I am far less the philosopher than you are. All this peasant (me!) requires 
for both animals and machines is a basic mechanical, cause-and-effect diagram of 
how both sets attained self-awareness. Call it a working theory. 

What is consciousness? I think the best place to start is with a definition.
First we might attempt to dissect the word "consciousness" itself:
The word "con•scious•ness" has three parts:
   
   - con- meaning "with"
   - scious meaning "knowledge"
   - -ness turns an adjective 'X' into a noun meaning "the state of being X"
So the meaning of "con•scious" is simply "with knowledge."  And just as 
'happiness' means "the state of being happy," adding -ness to conscious implies 
"con•scious•ness" is "the state of being with knowledge."
So consciousness is anything which has a state of being with knowledge. Next, 
what is knowledge? The oldest definition is that knowledge is a "true belief". 
But here we run into a problem. Truth is not definable, not even in 
mathematics. This was proven in Tarski's undefinability theorem. So if 
consciousness is being with knowledge, and knowledge is true belief, then the 
undefinability of truth means we will never have a complete definition of 
knowledge or consciousness. The best we can do is understand the relation 
between them.
The next question that poses itself to us, is what is belief? What is required 
to have a belief? Is it a particular structure or state of matter, or is it a 
mathematical or functional relationship or property, might it, in either case, 
be related to information or information processing (computation)?
I don't have a firm answer on this, and will instead leave you with some 
speculations by others on this question of what consciousness is:

Douglas Hofstadter in "Gödel, Escher, Bach" (1979): "My belief is that the 
explanations of “emergent” phenomena in our brains–for instance, ideas hopes, 
images, analogies, and finally consciousness and free will–are based on a kind 
of Strange Loop, an interaction between levels in which the top level reaches 
back down towards the bottom level and influences it, while at the same time 
being itself determined by the bottom level. In other words, a self-reinforcing 
“resonance” between different levels–quite like the Henkin sentence, which by 
merely asserting its own provability, actually becomes provable. The self comes 
into being at the moment it has the power to reflect itself."
Daniel Dennett in “Consciousness Explained” (1991):"Anyone or anything that has 
such a virtual machine as its control system is conscious in the fullest sense, 
and is conscious because it has such a virtual machine." 
David Chalmers in "The Conscious Mind" (1996):"Given the laws of coherence, we 
have a partial answer: consciousness arises in virtue of the functional 
organization associated with awareness. We can even arrive at a fairly specific 
understanding of parts of the supervenience relation by virtue of the principle 
of structural coherence: not only does consciousness arise from awareness, but 
the structure of consciousness is determined by the structure of awareness."
David Darling in "Zen Physics - The Science of Death, The Logic of 
Reincarnation" (1996):"But there is also an interior view, to which you alone 
are privy. In mechanistic terms, as well as the appearance of the brain-body 
machine, there is the feeling of what it is like to be that machine — the 
subjective experience of being a certain someone. Consciousness, we might say, 
is the symmetry-breaking factor between the objective and the subjective."
Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" 
(2000):"For the first time in evolution, information acquires a new 
potential–the possibility of subjectivity. It is information “for somebody”; in 
short, it becomes consciousness itself."
Bruno Marchal in discussion list (2020):"Consciousness is just anything 
simultaneously true, non provable, knowable, even indubitable (knowingly for 
“rich" entities) and non definable, and indeed the logic of machine 
self-reference shows that all machine looking inward, in the way allowed by 
mathematical logic (theoretical computer science) will bring a term to describe 
this, and is a good candidate to be called consciousness."
Stephen Wolfram in “What is Consciousness” (2021):"In a sense what’s important 
is that

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-19 Thread Jason Resch
ness into what amount to concrete questions about mathematics,
computation, logic or whatever that can be formally and rigorously
explored."


We see recurring themes of information, recursion, computation, and
machines and logic. I think these are likely key to any formal definition
of consciousness. I also think part of the difficulty rests in the fact
that there are infinite possibilities of different realizable conscious
states, and creating a single definition to cover all these cases is as
hard as making a single definition to  cover all possible mathematical
objects, or all possible universes in an ensemble type multiverse theory.



> It's remarkable that we attained consciousness and even more remarkable
> that a server farm could do so.
>

There might not be much to it. A thermostat may be conscious. Consciousness
might be easy to achieve. What is difficult is developing a system capable
of describing its conscious states, or at least its own bafflement over the
fact that it is something that experiences conscious states.


>
> If one is a pantheist, then I suppose one sees consciousness in
> everything, being, "as right as rain." I have no objection to that view
> either, because maybe the pantheists are correct or will be found so?
>
> Now, by choice, would I prefer to have a Turing-surpassable computer, or
> something that unconsciously churns out wonderful technologies for
> humanity? I'll pick the latter, because we have 8 billion people to chat
> with on this world, and I choose to chat with people. I personally, would
> like to chat with my fellow humans about the new, asteroid mining craft
> GPT6 just produced, but so far, there's zero in the news about that.
>
> This, comes from my values, but it's non-obligatory that all humans need
> value this as a first. Each to their own.
>

In any case it is important that we solve this problem quickly. If our
machines are conscious, it is important to know that so we don't create and
mistreat a slave race. If our machines have no consciousness whatever, that
is also important to know, if we create robot companions and colleagues, or
prosthetic robot bodies to upload sick and dying biological brains into, or
if we create self-replicating machines that fill the galaxy, we should know
beforehand if they are conscious or not. These questions will become
pressing very soon.

Jason


>
> -Original Message-
> From: Jason Resch 
> To: Everything List 
> Sent: Fri, Mar 17, 2023 11:32 am
> Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
>
>
>
> On Thu, Mar 16, 2023, 6:37 PM spudboy100 via Everything List <
> everything-list@googlegroups.com> wrote:
>
> To get to the point, I did advocate for a bit of skepticism for claiming
> consciousness for a computer system, and the retort was from JC that
> essentially, we cannot even define what makes a human conscious, and I am
> going with an au contraire, Pierre! It took me under 10 min to locate a
> worthy article submitted for JC's criticisms.
>
> Here tis'
>
> What Neuroscientists Think, and Don't Think, About Consciousness - PubMed
> (nih.gov) <https://pubmed.ncbi.nlm.nih.gov/35280212/>
>
> So, we are much closer to understanding human consciousness. I am asked to
> put the same effort into how a network developed this in so little time.
> Are our analog chips so mighty in 2022-23???
>
>
>
> Neurologists know that neurons and neural activity are correlated with
> consciousness, but for the most part their understanding stops there (and
> by their own admission).
>
> I would say neurologists are almost in the worst position to understand
> consciousness as they look at it from the lowest level, the neurons. This
> is like trying to decipher a word processor program by looking at the
> patterns of electrical impulses in the circuits of a computer's CPU.
>
> Here are some quotes about our complete lack of understanding of
> consciousness and the disappointment regarding what help neurology has
> offered (emphasis mine):
>
>
> “How it is that anything so remarkable as a *state of consciousness*
> comes about *as a result of irritating nervous tissue, is just as
> unaccountable* as the appearance of Djin when Aladdin rubbed his lamp.”
> -- Thomas Huxley in “Lessons in Elementary Psychology” (1866)
>
> “An electron is neither red nor blue nor any other colour; the same holds
> for the proton, the nucleus of the hydrogen atom. But the union of the two
> in the atom of hydrogen, according to the physicist, produces
> electromagnetic radiation of a certain discrete array of wavelengths. The
> homogenous constituents of this radiation, when separated by a prism or an
> optical grating, *stimulate in an observer the sensations of red, green,

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread John Clark
On Fri, Mar 17, 2023 at 9:39 PM  wrote:

> The authors are professionals. You hold that your knowledge base is
> greater than the authors'?
>

First of all, it's author, not authors; it's written by only one guy. Of
course you wouldn't know that since you didn't read it, you just posted a
link to it. And the extent of the author's knowledge base is irrelevant,
since however large it may be he made absolutely no use of it in that
paper. He just babbles about electromagnetic fields and proclaims that they
somehow cause consciousness; he doesn't explain how electromagnetic fields
produce consciousness, he doesn't even give us a hint, he just insists
that they do. And he does not make any testable predictions. In other words,
the paper is so bad it's not even wrong.

Do quantum effects play a role in consciousness? – Physics World
> 
>

Nope, I'm not gonna read it, I've decided I'm no longer going to read links
that you have not read or even clicked on but that just popped up on your
Google search when you fed it a few buzz words.  I've already wasted too
much time on your worthless links.

John K Clark    See what's on my new list at  Extropolis



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread John Clark
On Sat, Mar 18, 2023 at 3:49 AM smitra  wrote:


>
>
> *> The way one would be able to see that the system despite performing
> extremely well does not have the intellectual capabilities of a human
> being, would be to follow up on gaps in its knowledge and see if it can
> learn from its mistakes and master new subjects.*


Some humans have the capacity to do that, but most do not, so you couldn't
say that's the defining characteristic of being human.


> *> I'll be convinced if they succeed making such a system do original
> research in, say, theoretical physics or mathematics*


Protein folding. The 4 color map problem. The Boolean Pythagorean triples
problem.

*> I would be more impressed by a system that may make many more mistakes
> like that than this GPT system made, but where there is a follow-up
> conversation where the mistakes are pointed out and the system shows that
> it has learned*


GPT-4 doesn't know everything but I'm sure you will admit it does know some
things,  but if it didn't have the capacity for learning it wouldn't know
anything. But it does know some things, that's why they say it's a "machine
learning" program.

John K Clark    See what's on my new list at  Extropolis



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread Telmo Menezes
You are not engaging with what I am actually saying.

Telmo

Am Sa, 18. Mär 2023, um 13:29, schrieb John Clark:
> On Sat, Mar 18, 2023 at 5:28 AM Telmo Menezes  wrote:
> 
>> *> Huge progress is being made, but we are not at the human level of 
>> generality of intelligence and autonomy. Not even close.*
> 
> Not even close? Don't be silly.  
> 
>> *> I fear that you are falling for the very human bias (I fall for it so 
>> many times myself) of seeing what you want to see.*
> 
> And I fear you are whistling past the graveyard.  
> 
>> *> A machine learning system can only be objectively evaluated by applying 
>> it to data that was not used to train it.*
> 
> I don't know what you mean by that, you're not falling for that old cliché 
> that computers can only do what they're told to do are you? GPT-4 was not 
> trained on the exact questions asked; I suppose you could make a case that 
> some of the training data GPT-4 was educated on was somewhat similar to some 
> of the questions it was asked, but the exact same thing is true for human 
> beings. When you ask questions to a human being some of those questions are 
> somewhat similar to data he was educated on. In fact if some of the data 2 
> intelligences were educated on were not similar they would not be able to ask 
> each other questions because they wouldn't even be able to communicate. 
> 
> 
>> *>Again, it is important to understand what exactly GPT-4 is doing. It is 
>> certainly impressive, but it is not the same thing as a human being taking 
>> an IQ test,*
> 
> So you must think the following fundamental axiom is true:
> 
> *"If a human does something that is smart then the human is smart, but if a 
> computer does the exact same thing then the computer is NOT smart."*
> 
> And from that axiom it's easy to derive the following Corollary:
> 
> *"Computers, buy definition, can never be smart."*
> 
> I think you need to be more careful in picking your fundamental axioms.
> 
>> 
>> *> I do think that passing the Turing test is impressive,*
> 
> Probably the greatest understatement of all time.  
> 
>> *> although it is true that most AI researchers never took it very 
>> seriously,*
> 
> What?!  I'm sure that in their daily lives AI researchers, like every other 
> human being on planet earth, have met people in their life that they 
> considered to be very intelligent, and people they considered to be very 
> stupid, but if they didn't use the Turing Test to make that determination 
> then what on earth did they use? All the Turing test is saying is that you 
> need to play fair, whatever criteria you used to judge the intelligence of 
> your fellow human beings you should also use on a computer to judge its 
> intelligence. 
> 
> It's always the same, I'm old enough to remember when respectable people were 
> saying a computer would never be able to do better than play a mediocre game 
> of chess and certainly never be able to beat a grandmaster at the game. But 
> when a computer did beat a grandmaster at Chess they switched gears and said 
> such an accomplishment means nothing and insisted a computer could never beat 
> a human champion at a game like GO because that really requires true 
> intelligence. Of course when a computer did beat the human champion at GO 
> they switched gears again and said that accomplishment means nothing because 
> a computer would never be able to pass the Turing Test because that really* 
> really* requires true intelligence.  And now that a computer has passed the 
> Turing Test the human response to that accomplishment is utterly predictable. 
>  As I said before, they're whistling past the graveyard.
> 
> ... and so, just seconds before he was vaporized the last surviving human 
> being turned to Mr. Jupiter Brain and said "*I still think I'm more 
> intelligent than you*".
> 
> 
>> *> GPT-4 and image generators are a type of intelligence that we had never 
>> seen before. Maybe the first time such a thing arises in this galaxy or even 
>> universe,*
> 
> I agree, and I can't think of anything more important that happened in my 
> lifetime.  
> 
>  
>>  > *They are probably also similar to stuff that happens in our brain. But 
>> what they are not is something you can be compare to a human mind with an IQ 
>> test in any meaningful way.*
> 
> Not just *an* IQ test but 4 quite different types of IQ tests. And it was a 
> lobotomized version of GPT-4 that was tested that could not input graphs and 
> charts or diagrams so any question that contained them was automatically 
> marked as getting wrong, and yet it STILL got an I

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread John Clark
On Sat, Mar 18, 2023 at 5:28 AM Telmo Menezes 
wrote:

*> Huge progress is being made, but we are not at the human level of
> generality of intelligence and autonomy. Not even close.*
>

Not even close? Don't be silly.

*> I fear that you are falling for the very human bias (I fall for it so
> many times myself) of seeing what you want to see.*
>

And I fear you are whistling past the graveyard.

*> A machine learning system can only be objectively evaluated by applying
> it to data that was not used to train it.*
>

I don't know what you mean by that, you're not falling for that old cliché
that computers can only do what they're told to do are you? GPT-4 was not
trained on the exact questions asked; I suppose you could make a case that
some of the training data GPT-4 was educated on was somewhat similar to
some of the questions it was asked, but the exact same thing is true
for human beings. When you ask questions to a human being some of those
questions are somewhat similar to data he was educated on. In fact if some
of the data 2 intelligences were educated on were not similar they would
not be able to ask each other questions because they wouldn't even be able
to communicate.


*>Again, it is important to understand what exactly GPT-4 is doing. It is
> certainly impressive, but it is not the same thing as a human being taking
> an IQ test,*
>

So you must think the following fundamental axiom is true:

*"If a human does something that is smart then the human is smart, but if a
computer does the exact same thing then the computer is NOT smart." *

And from that axiom it's easy to derive the following Corollary:
*"Computers, buy definition, can never be smart."*

I think you need to be more careful in picking your fundamental axioms.

*> I do think that passing the Turing test is impressive,*
>

Probably the greatest understatement of all time.

* > although it is true that most AI researchers never took it very
> seriously,*
>

What?!  I'm sure that in their daily lives AI researchers, like every other
human being on planet earth, have met people in their life that they
considered to be very intelligent, and people they considered to be very
stupid, but if they didn't use the Turing Test to make that determination
then what on earth did they use? All the Turing test is saying is that you
need to play fair, whatever criteria you used to judge the intelligence of
your fellow human beings you should also use on a computer to judge its
intelligence.

It's always the same, I'm old enough to remember when respectable people
were saying a computer would never be able to do better than play a
mediocre game of chess and certainly never be able to beat a grandmaster at
the game. But when a computer did beat a grandmaster at Chess they switched
gears and said such an accomplishment means nothing and insisted a computer
could never beat a human champion at a game like GO because that really
requires true intelligence. Of course when a computer did beat the human
champion at GO they switched gears again and said that accomplishment means
nothing because a computer would never be able to pass the Turing Test
because that really* really* requires true intelligence.  And now that a
computer has passed the Turing Test the human response to that
accomplishment is utterly predictable.  As I said before, they're whistling
past the graveyard.

... and so, just seconds before he was vaporized the last surviving human
being turned to Mr. Jupiter Brain and said "*I still think I'm more
intelligent than you*".


*> GPT-4 and image generators are a type of intelligence that we had never
> seen before. Maybe the first time such a thing arises in this galaxy or
> even universe,*
>

I agree, and I can't think of anything more important that happened in my
lifetime.



>  > *They are probably also similar to stuff that happens in our brain.
> But what they are not is something you can compare to a human mind with
> an IQ test in any meaningful way.*
>

Not just *an* IQ test but 4 quite different types of IQ tests. And it was a
lobotomized version of GPT-4 that was tested that could not input graphs
and charts or diagrams so any question that contained them was
automatically marked as getting wrong, and yet it STILL got an IQ of 114.
And the computer completed those tests in seconds while it took humans
hours to do the same thing. Imagine what IQ score it will get in two years,
or even two months.  And you say "not even close"?


> *> That is just junk science.*
>

Huh? Creating "a type of intelligence that we had never seen before, maybe
the first time such a thing arises in this galaxy or even the universe", is
junk science?

John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread Telmo Menezes
Am Sa, 18. Mär 2023, um 08:49, schrieb smitra:

> So, in the video we see that it got a question wrong because it thought 
> that 33 is a prime number. I would be more impressed by a system that 
> may make many more mistakes like that than this GPT system made, but 
> where there is a follow-up conversation where the mistakes are pointed 
> out and the system shows that it has learned and then gets similar 
> questions that it would previously have gotten wrong given the previous 
> answers, correct.

Exactly, very well said.

These models are stateless. Conversations are simulated by re-feeding the 
entire conversation so far over and over. Not only are we humans not stateless, 
but our brain constantly modifies itself at the same time that it is operating. 
And it does this to maintain an ongoing, persistent and coherent model of 
reality. This model includes our internal model of the people we know, of what 
might be going on in their own minds, their long term history and their facial 
expression right now. Memories are formed, that are constantly and coherently 
embedded into this internal map.
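
For concreteness, a minimal Python sketch of the re-feeding described above;
generate_reply is a purely hypothetical stand-in for a stateless completion
model, not any real API:

def generate_reply(prompt: str) -> str:
    # Hypothetical stateless model: it sees only the text passed in
    # and keeps no memory whatsoever between calls.
    return "(model output conditioned on %d characters of prompt)" % len(prompt)

def chat(user_messages):
    history = []  # the only "memory" lives outside the model
    for user_msg in user_messages:
        history.append("User: " + user_msg)
        # the entire conversation so far is re-fed on every single turn
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate_reply(prompt)
        history.append("Assistant: " + reply)
        print(reply)

chat(["Hello", "What did I just say?"])

The appearance of a persistent interlocutor comes entirely from the growing
prompt, not from any change inside the model itself.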

John Clark will probably dismiss this as some minor technical hurdle along the 
way to AI glory. I am not so sure.

State and self-modification require recurrence. So does Turing completeness. 
Our brain has recurrent connections, but the vanishing gradient problem seems 
to make them hard to impossible to train at scale with gradient descent. So we 
need an algorithm that works with recurrent connections at huge scales. I bet 
that this algorithm will have to be decentralized, which is to say: operating 
at the local level, in the neighborhood of each node in the network. The 
reasons I bet on an emergent, decentralized learning algorithm:

(1) That's how it works in nature;
(2) Incredibly smart people have been trying very hard for more than half a 
century and the centralized, explicit algorithm that can do what I describe 
above still eludes us -- I am not saying that it shows that such an algo does 
not exist, but I am saying that we are probably too dumb to find it.
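
To make "local and decentralized" concrete, here is a minimal Python sketch of
a Hebbian-style update, the textbook example of a rule that uses only
information available at each connection; it is an illustration of locality,
not a proposal for the algorithm being bet on here:

import numpy as np

def hebbian_step(W, pre, post, lr=0.01):
    # Each weight W[i, j] changes using only the activity of the two neurons
    # it connects (post[i] and pre[j]); no global error signal and no
    # backpropagated gradient are needed.
    return W + lr * np.outer(post, pre)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 3))   # 3 input neurons -> 4 output neurons
for _ in range(100):
    pre = rng.random(3)                  # presynaptic activity
    post = np.tanh(W @ pre)              # postsynaptic activity
    W = hebbian_step(W, pre, post)
print(W)

Backpropagation, by contrast, needs the error computed at the output to be
carried back through every layer, which is exactly the non-local ingredient
discussed above.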

Telmo



Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread Telmo Menezes


Am Fr, 17. Mär 2023, um 14:11, schrieb John Clark:
> On Fri, Mar 17, 2023 at 8:26 AM Telmo Menezes  wrote:
> __
> 
>> *> Well, this is Machine Learning 101. If you train a model, it will always 
>> perform better*
> 
> Well yes, if a machine couldn't learn then it wouldn't be intelligent!

Ok John, I have an algorithm here that can 100x your money every month 
in the derivatives market. I can demonstrate this by running it on the training 
data. I will sell it to you for 100K. An incredible bargain, considering that 
you can become the richest man on the planet in just a few months. Interested?

>>> >> When human beings take an IQ test that is almost certainly NOT the first 
>>> >> test they've ever had, and like GPT-4 humans are also trained on a huge 
>>> >> amount of data, without it neither you nor GPT-4 would even know how to 
>>> >> read the questions.
>> 
>> *> Yes, but GPT-4 and human brains are very different things.*
> 
> They are similar in some ways, they both process information, and they are 
> different in other ways, one processes information using carbon chemistry and 
> the other processes information using silicon electronics.

Other ways in which they are different: one is a Turing-complete 
self-organizing general learning algorithm with a highly evolved utility 
function, and the other is a model trained with gradient descent and a huge 
amount of data to predict the likelihood of words given a context.

Huge progress is being made, but we are not at the human level of generality 
of intelligence and autonomy. Not even close. I am sure we can get there, but I 
would argue that too much hype too early hurts the cause. That's how we got the 
first AI winter.

> 
>> *> GPT-4 has superhuman memory capabilities*
> 
> Yes
>  
>> *> and almost certainly subhuman reasoning capabilities.*
> 
> If so I've seen no evidence of it, I have however seen evidence that the 
> opposite is true.  

I fear that you are falling for the very human bias (I fall for it so many 
times myself) of seeing what you want to see.

>> *> I bet that it would take many human lifetimes to actually read the entire 
>> training datatset of GPT-4.*
> 
> I'm sure that's true. Am I supposed to think less of  GPT-4 because of that?

No, but you are supposed to remain a scientist and keep applying the same 
fundamental machine learning principle that brought us all the way to GPT-4:

A machine learning system can only be objectively evaluated by applying it to 
data that was not used to train it. This is the only way to distinguish between 
true generalization (learning) and over-fitting.
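
A minimal Python sketch of that principle, using a toy polynomial fit rather
than anything GPT-like; the held-out data is what separates genuine
generalization from memorization:

import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=30)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=30)

# hold out data the model never sees during fitting
train_x, test_x = x[:20], x[20:]
train_y, test_y = y[:20], y[20:]

for degree in (3, 15):
    coeffs = np.polyfit(train_x, train_y, degree)   # fit on training data only
    train_err = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
    test_err = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
    # a high-degree fit can look excellent on the data it memorized while
    # the error on the held-out points exposes the over-fitting
    print("degree %d: train MSE %.4f, test MSE %.4f" % (degree, train_err, test_err))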

>> *Again, it is important to understand what exactly GPT-4 is doing. It is 
>> certainly impressive, but it is not the same thing as a human being taking 
>> an IQ test,*
> 
> It's the same thing if you treat both humans and machines as black boxes and 
> concentrate on what they do. Like it or not that's the only way we can deal  
> with our fellow human beings that we encounter in everyday life, we have no 
> way of knowing what's going on inside their head, all we can do is observe 
> their behavior. Maybe Einstein was an idiot but he just had an ability to 
> push a pen in such a way that he produced brilliant physics papers, but 
> nobody believes that; instead we would say if somebody could write physics 
> papers that were as brilliant as Einstein's then that person would be as 
> smart as Einstein.  
> 
> It's interesting that until a few years ago the Turing Test was not very 
> controversial because most thought it would be centuries before a machine 
> could pass it, and many proclaimed a machine would never be able to pass it, 
> but now that a machine has indeed passed it they say the Turing Test is not 
> important, even though they personally still use the Turing Test a 1000 times 
> a day whenever they judge the conscious state of one of their fellow human 
> beings. Actually if GPT-4 really wanted to fool somebody into thinking it was 
> a human being it would have to dumb itself down.
> 


I do think that passing the Turing test is impressive, although it is true that 
most AI researchers never took it very seriously, it was more of a pop-science 
thing and a niche interest of the chatbot community. Anyway, I agree with you 
that a major milestone has been achieved.

GPT-4 and image generators are a type of intelligence that we had never seen 
before. Maybe the first time such a thing arises in this galaxy or even 
universe, who knows... They are probably also similar to stuff that happens in 
our brain. But what they are not is something you can compare to a human 
mind with an IQ test in any meaningful way. That is just junk science.

Telmo

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-18 Thread smitra
I'll be convinced if they succeed making such a system do original 
research in, say, theoretical physics or mathematics and get lots of 
high quality research results published that get lots of citations for 
groundbreaking work, rather than small improvements of details of 
existing work. If the next Feynman, Einstein, or John von Neumann is an 
AI system then skeptics can continue to argue that the system is still 
just a dumb computer, but the world will then have moved on with those 
AI systems becoming an ever more important part of society.


I'm not convinced that the present GPT systems are demonstrating 
human-level intellect. Our brains are very powerful computers that we 
can use in a very inefficient way to do academic work with. If the brain 
of a lizard were fully dedicated to analyze and process texts instead of 
controlling its body to react optimally to its environment, it would 
likely also perform extremely well compared to these GPT systems.


The way one would be able to see that the system despite performing 
extremely well does not have the intellectual capabilities of a human 
being, would be to follow up on gaps in its knowledge and see if it can 
learn from its mistakes and master new subjects.


So, in the video we see that it got a question wrong because it thought 
that 33 is a prime number. I would be more impressed by a system that 
may make many more mistakes like that than this GPT system made, but 
where there is a follow-up conversation where the mistakes are pointed 
out and the system shows that it has learned and then gets similar 
questions that it would previously have gotten wrong given the previous 
answers, correct.
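
For the record, the arithmetic the model tripped on is trivial to check
mechanically (a throwaway Python sketch, unrelated to how GPT-style systems
work internally):

def smallest_factor(n):
    # return the smallest factor of n greater than 1, or None if n is prime
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

print(smallest_factor(33))   # prints 3, since 33 = 3 * 11, so 33 is not prime

The interesting question raised here is not whether the fact can be checked,
but whether the system can absorb the correction and apply it to similar
questions later.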


Saibal




On 16-03-2023 18:30, John Clark wrote:

Forget the Ukraine war, forget climate change, forget Donald Trump, I
now think GPT-4 is by far the most world shaking event and the most
underreported one.  Many of us have been talking about the singularity
for decades, but now it looks like we're on its doorstep. You've got
to look at this video!

4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts) [1]

John K Clark    See what's on my new list at  Extropolis [2]



Links:
--
[1]
https://www.youtube.com/watch?v=xFvDJnf0GXs&list=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl&index=13
[2] https://groups.google.com/g/extropolis




Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread spudboy100 via Everything List
I am far less the philosopher than you are. All this peasant (me!) requires 
for both animals and machines is a basic mechanical, cause-and-effect diagram of 
how both sets attained self-awareness. Call it a working theory. 
It's remarkable that we attained consciousness and even more remarkable that a 
server farm could do so. 
If one is a pantheist, then I suppose one sees consciousness in everything, 
being, "as right as rain." I have no objection to that view either, because 
maybe the pantheists are correct or will be found so? 
Now, by choice, would I prefer to have a Turing-surpassable computer, or 
something that unconsciously churns out wonderful technologies for humanity? 
I'll pick the latter, because we have 8 billion people to chat with on this 
world, and I choose to chat with people. I personally would like to chat with 
my fellow humans about the new asteroid mining craft GPT6 just produced, but 
so far, there's zero in the news about that. 
This comes from my values, but it's not obligatory that all humans need to value 
this first. Each to their own. 


-Original Message-
From: Jason Resch 
To: Everything List 
Sent: Fri, Mar 17, 2023 11:32 am
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)



On Thu, Mar 16, 2023, 6:37 PM spudboy100 via Everything List 
 wrote:

To get to the point, I did advocate for a bit of skepticism for claiming 
consciousness for a computer system, and the retort was from JC that 
essentially, we cannot even define what makes a human conscious, and I am going 
with an au contraire, Pierre! It took me under 10 min to locate a worthy 
article submitted for JC's criticisms. 
Here tis'
What Neuroscientists Think, and Don't Think, About Consciousness - PubMed 
(nih.gov)
So, we are much closer to understanding human consciousness. I am asked to put 
the same effort into how a network developed this in so little time. Are our 
analog chips so mighty in 2022-23???



Neurologists know that neurons and neural activity are correlated with 
consciousness, but for the most part their understanding stops there (and by 
their own admission).
I would say neurologists are almost in the worst position to understand 
consciousness as they look at it from the lowest level, the neurons. This is 
like trying to decipher a word processor program by looking at the patterns of 
electrical impulses in the circuits of a computer's CPU.
Here are some quotes about our complete lack of understanding of consciousness 
and the disappointment regarding what help neurology has offered (emphasis 
mine):

“How it is that anything so remarkable as a state of consciousness comes about 
as a result of irritating nervous tissue, is just as unaccountable as the 
appearance of Djin when Aladdin rubbed his lamp.” -- Thomas Huxley in “Lessons 
in Elementary Psychology” (1866)
“An electron is neither red nor blue nor any other colour; the same holds for 
the proton, the nucleus of the hydrogen atom. But the union of the two in the 
atom of hydrogen, according to the physicist, produces electromagnetic 
radiation of a certain discrete array of wavelengths. The homogenous 
constituents of this radiation, when separated by a prism or an optical 
grating, stimulate in an observer the sensations of red, green, blue, violet by 
the intermediary of certain physiological processes, whose general character is 
sufficiently well known to assert that they are not red or green or blue, in 
fact that the nervous elements in question display no colour in virtue of their 
being stimulated; the white or gray the nerve cells exhibit whether stimulated 
or not is certainly insignificant in respect of the colour sensation which, in 
the individual whose nerves they are, accompanies their excitation.”-- Erwin 
Schrödinger in "Mind and Matter" (1958)
“Few questions have endured longer or traversed a more perplexing history than 
this, the problem of consciousness and its place in nature. Despite centuries 
of pondering and experiment, of trying to get together two supposed entities 
called mind and matter in one age, subject and object in another, or soul and 
body in still others, despite endless discoursing on the streams, states, or 
contents of consciousness, of distinguishing terms like intuitions, sense data, 
the given, raw feels, the sensa, presentations and representations, the 
sensations, images, and affections of structuralist introspections, the 
evidential data of the scientific positivist, phenomenological fields, the 
apparitions of Hobbes, the phenomena of Kant, the appearances of the idealist, 
the elements of Mach, the phanera of Peirce, or the category errors of Ryle, in 
spite of all of these, the problem of consciousness is still with us.”-- Julian 
Jaynes in "The Origin of Consciousness in the Breakdown of the Bicameral Mind" 
(1976)
“We know that brains are the de facto causal basis of consciousness, but we 
have, it seems, no und

Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread spudboy100 via Everything List
The authors are professionals. You hold that your knowledge base is greater 
than the authors'? EM surely could be the root of how humans get conscious, and 
that could easily be the quantum field effect, no, not quantum computing, but 
QFT. Like carbon/water QFET's. 
Do quantum effects play a role in consciousness? – Physics World
We still need a working analysis, both for animals and machines. The How 
questions. 
You're not curious about how animals and machinery (if you are correct?) 
attained consciousness? 


-Original Message-
From: John Clark 
To: spudboy...@aol.com
Cc: everything-list@googlegroups.com 
Sent: Fri, Mar 17, 2023 1:34 pm
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

On Thu, Mar 16, 2023 at 6:37 PM  wrote:


> It took me under 10 min to locate a worthy article submitted for JC's 
> criticisms. 

I'll bet it took you less than 10 seconds, you popped two or three buzzwords 
into Google  and then you picked the first one that came up and sent it to the 
list without reading a word of it. You didn't miss much, it didn't increase my 
understanding of consciousness one bit, but you claim the article's insights 
brought us "much closer to understanding human consciousness" , well that part 
must've been written in invisible ink because I sure didn't see it. 

> Here tis [drum roll]
> What Neuroscientists Think, and Don't Think, About Consciousness - PubMed 
> (nih.gov)


 Unlike you I did read the article, or at least I read the abstract, and based 
on that I flat out refuse to waste my time by reading the entire worthless 
thing; that is, after all, the purpose of abstracts.

> The approach the majority of neuroscientists take to the question of how 
> consciousness is generated, it is probably fair to say, is to ignore it

Yes it is fair to say that a majority of neuroscientists ignore consciousness 
and they do so because it's the rational thing to do, it would be silly to 
divert finite mental resources from fantastically productive intelligence 
research, especially now when so many dramatic discoveries are being made in 
that area,  to a moribund field like consciousness research that has not 
advanced one nanometer in the last thousand years and will not do any better in 
the next thousand.  

> Neuroscience has furnished evidence that neurons are fundamental to 
> consciousness;

And how did neuroscientists figure that out? By observing that when neurons 
behave in a certain way organisms behave in a certain way and they guess, I 
repeat they guess, that when organisms behave in that certain way then their 
consciousness must be  in a certain state. I personally think that is a good 
guess, it's the same guess we make when we  determine that  our fellow human 
beings are not conscious when they are sleeping or under anesthesia or dead.
And then the author starts babbling about  

> staggeringly complex system of electromagnetic field"

A keen grasp of the obvious. Everything except neutrinos and dark matter 
interacts with the electromagnetic field.

> The EM field literally manifests the computations, or signaling, or 
> information processing/activities performed 

Both computers and neurons produce electromagnetic fields. So what? What does 
that have to do with consciousness? The author never says, he just maintains 
that somehow electromagnetic fields produce consciousness and of course he 
provides no evidence to support his belief.   
John K Clark    See what's on my new list at  Extropolis
txew
w

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1515344780.998149.1679103545632%40mail.yahoo.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread spudboy100 via Everything List
EM fields might indeed have an effect on the human neuro-system. I am merely 
asking how consciousness would develop, unplanned, in a server farm? 
Humans, yes. NIMH » Brain Stimulation Therapies (nih.gov)
In a computer? Maybe yes. A theory of consciousness from a theoretical computer 
science perspective: Insights from the Conscious Turing Machine | PNAS

If this guy is correct, then simply learning grants consciousness. Learning 
May Be the Key to the Evolution of Consciousness | Psychology Today
I have questions, but having them doesn't make me the arbiter of computer 
consciousness. 
I just don't choose to blow past it as readily as JC does. 

-Original Message-
From: Stathis Papaioannou 
To: everything-list@googlegroups.com
Sent: Fri, Mar 17, 2023 4:23 am
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)



On Fri, 17 Mar 2023 at 07:37, spudboy100 via Everything List 
 wrote:

To get to the point, I did advocate for a bit of skepticism about claiming 
consciousness for a computer system, and the retort from JC was that, 
essentially, we cannot even define what makes a human conscious, and I am going 
with an au contraire, Pierre! It took me under 10 min to locate a worthy 
article submitted for JC's criticisms. 
Here 'tis:
What Neuroscientists Think, and Don't Think, About Consciousness - PubMed 
(nih.gov)
So, we are much closer to understanding human consciousness. I am asked to put 
the same effort into how a network developed this in so little time. Are our 
analog chips so mighty in 2022-3???

What produces consciousness is consciousness-associated behaviour. If it’s EM 
fields (it probably isn’t) then EM fields would have some effect on behaviour, 
and if this effect could be replicated by some other means, the consciousness 
would also be replicated.


-- 
Stathis Papaioannou-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXKDU7uDj3UHaZXO-0pExsLgbANTcdM76KctK51PvCLSw%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/144434129.999101.1679102406293%40mail.yahoo.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread John Clark
On Thu, Mar 16, 2023 at 6:37 PM  wrote:

*> It took me under 10 min to locate a worthy article submitted for JC's
> criticisms. *
>

I'll bet it took you less than 10 seconds, you popped two or three
buzzwords into Google  and then you picked the first one that came up and
sent it to the list without reading a word of it. You didn't miss much, it
didn't increase my understanding of consciousness one bit, but you claim
the article's insights brought us "*much closer to understanding human
consciousness*" , well that part must've been written in invisible ink
because I sure didn't see it.

*> Here tis* [drum roll]
>
> > What Neuroscientists Think, and Don't Think, About Consciousness -
> PubMed (nih.gov) 
>
>
 Unlike you I did  read the article, or at least I read the abstract, and
based on that I flat out refuse  to waste my time by reading the entire
worthless thing, it is after all the purpose of abstracts.

*> The approach the majority of neuroscientists take to the question of how
> consciousness is generated, it is probably fair to say, is to ignore it*


Yes it is fair to say that a majority of neuroscientists ignore
consciousness and they do so because it's the rational thing to do, it
would be silly to divert finite mental resources from fantastically
productive intelligence research, especially now when so many dramatic
discoveries are being made in that area,  to a moribund field like
consciousness research that has not advanced one nanometer in the last
thousand years and will not do any better in the next thousand.

*> Neuroscience has furnished evidence that neurons are fundamental to
> consciousness;*


And how did neuroscientists figure that out? By observing that when neurons
behave in a certain way organisms behave in a certain way and they guess, I
repeat *they guess*, that when organisms behave in that certain way then their
consciousness must be  in a certain state. I personally think that is a
good guess, it's the same guess we make when we  determine that  our fellow
human beings are not conscious when they are sleeping or under anesthesia
or dead.

And then the author starts babbling about

*> staggeringly complex system of electromagnetic field"*


A keen grasp of the obvious. Everything except neutrinos and dark matter
interacts with the electromagnetic field.

*> The EM field literally manifests the computations, or signaling, or
> information processing/activities performed *


Both computers and neurons produce electromagnetic fields. So what? What
does that have to do with consciousness? The author never says, he just
maintains that somehow electromagnetic fields produce consciousness and of
course he provides no evidence to support his belief.

John K ClarkSee what's on my new list at  Extropolis

txew
w

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1pdeG-FwZ1JbN5hbpKKBAnJF0T-b8T7gsq82w%2BL-AkJw%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread Jason Resch
disagree. Some like Dennett
say that is as far as we can go and that will answer all the questions we
have about consciousness. Others like Chalmers say that will still leave
the "hard problem" unresolved.

I see merit in both aspects of their argument. I agree with Dennett that
consciousness is nothing other than awareness. But I also agree with
Chalmers that even with such an objective and complete brain map, there
will remain some things that are unexplainable/unshareable (in my opinion due
in part to reasons similar to Gödelian incompleteness). First-person
experiences are not explainable in third-person terms and can only be
understood/experienced/known by being the system that has that particular
experience.

Jason




> -Original Message-
> From: John Clark 
> To: spudboy...@aol.com
> Cc: everything-list@googlegroups.com 
> Sent: Thu, Mar 16, 2023 5:55 pm
> Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
>
> On Thu, Mar 16, 2023 at 4:50 PM  wrote:
>
> *> America executes prisoners for capital murder. So, legally, if a
> murderer died 50 years ago for a capital crime, does that mean, once,
> revived, *
>
>
> To my knowledge no executed prisoner has ever been cryogenically
> preserved, however Joseph Paul Jernigan was executed by lethal injection
> in 1981 and he became part of the "Visible Human Project". His body was
> sliced into 1,871 one-millimeter-thick slices, and each slice was then
> photographed with a very high resolution camera. I've wondered if there was
> enough information preserved in those photographs to upload him, probably
> not but maybe. You can watch a one minute video of a journey through Mr. J
> ernigan's body here.
>
> The visible human project - Male (HD)
> <https://www.youtube.com/watch?v=dPPjUtiAGYs>
>
> A few years later they took even higher resolution photographs of a woman
> who died of a heart attack and they used even thinner slices, only 0.33
> millimeters thick
>
> Visual Human Project Female <https://www.youtube.com/watch?v=3atjsVqFlhs>
>
> *> they are no longer liable because of Double Jeopardy and that their
> victims will also be revived? A Civil Case then??*
>
>
> As I've said, I'm not a lawyer.
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> ws8
>
> u6c
>
>
>
>
> -Original Message-
> From: John Clark 
> To: 'Brent Meeker' via Everything List 
> Sent: Thu, Mar 16, 2023 1:30 pm
> Subject: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
>
> Forget the Ukraine war, forget climate change, forget Donald Trump, I now
> think GPT-4 is by far the most world shaking event and the most
> underreported one.  Many of us have been talking about the singularity for
> decades, but now it looks like we're on its doorstep. You've got to look at
> this video!
>
>
> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
> <https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv2neEP50SN7NCGRqtm1uzwNaE668DFu4-QdQa-OGFei8A%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv2neEP50SN7NCGRqtm1uzwNaE668DFu4-QdQa-OGFei8A%40mail.gmail.com?utm_medium=email_source=footer>
> .
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/382281143.709221.1679006249871%40mail.yahoo.com
> <https://groups.google.com/d/msgid/everything-list/382281143.709221.1679006249871%40mail.yahoo.com?utm_medium=email_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUju2F-JRy%2B0Updaqw879kbQAfj8WBkQ7z36_%3DLFFnAZ0g%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread John Clark
On Fri, Mar 17, 2023 at 8:26 AM Telmo Menezes 
wrote:

*> Well, this is Machine Learning 101. If you train a model, it will always
> perform better*


Well yes, if a machine couldn't learn then it wouldn't be intelligent!

>> When human beings take an IQ test that is almost certainly NOT the first
>> test they've ever had, and like GPT-4 humans are also trained on a huge
>> amount of data, without it neither you nor GPT-4 would even know how to
>> read the questions.
>
>
> *> Yes, but GPT-4 and human brains are very different things.*
>

They are similar in some ways, they both process information, and they are
different in other ways, one processes information using carbon chemistry
and the other processes information using silicon electronics.

*> GPT-4 has superhuman memory capabilities*
>

Yes


> *> and almost certainly subhuman reasoning capabilities.*
>

If so I've seen no evidence of it, I have however seen evidence that the
opposite is true.


> *> I bet that it would take many human lifetimes to actually read the
> entire training datatset of GPT-4.*
>

I'm sure that's true. Am I supposed to think less of  GPT-4 because of
that?

*Again, it is important to understand what exactly GPT-4 is doing. It is
> certainly impressive, but it is not the same thing as a human being taking
> an IQ test,*
>

It's the same thing if you treat both humans and machines as black boxes
and concentrate on what they do. Like it or not that's the only way we can
deal  with our fellow human beings that we encounter in everyday life, we
have no way of knowing what's going on inside their head, all we can do is
observe their behavior. Maybe Einstein was an idiot but he just had an
ability to push a pen in such a way that he produced brilliant physics
papers, but nobody believes that; instead we would say if somebody could
write physics papers that were as brilliant as Einstein's then that person
would be as smart as Einstein.

It's interesting that until a few years ago the Turing Test was not very
controversial because most thought it would be centuries before a machine
could pass it, and many proclaimed a machine would never be able to pass
it, but now that a machine has indeed passed it they say the Turing Test is
not important, even though they personally still use the Turing Test a 1000
times a day whenever they judge the conscious state of one of their fellow
human beings. Actually if GPT-4 really wanted to fool somebody into
thinking it was a human being it would have to dumb itself down.

John K ClarkSee what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>

h5z

8bm


>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0pab3tbnTJGL08mKGeJHw1ufW0mR1skgdemMmCaNMWPw%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread Telmo Menezes


Am Fr, 17. Mär 2023, um 12:52, schrieb John Clark:
> 
> On Fri, Mar 17, 2023 at 3:45 AM Telmo Menezes  wrote:
> __
> 
>>> >> Forget the Ukraine war, forget climate change, forget Donald Trump, I 
>>> >> now think GPT-4 is by far the most world shaking event and the most 
>>> >> underreported one.  Many of us have been talking about the singularity 
>>> >> for decades, but now it looks like we're on its doorstep. You've got to 
>>> >> look at this video!
>>> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts) 
>>> <https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>
>> 
>> *> One crucial question here is: did the GPT 4 training set include tests 
>> like the ones being solved?*
> 
> I don't know, GPT-4 is trained on a huge amount of data so probably, but why 
> is that a crucial question? 

Well, this is Machine Learning 101. If you train a model, it will always 
perform better in-corpus. Often MUCH better. That is why it is Machine Learning 
101 to divide your dataset into training and testing sets (usually a 66%/33% or 
75%/25% split). Try to publish an ML paper where you evaluate your model on 
in-corpus data and see what they tell you... I am simply insisting on the 
conventional scientific standard of the field for evaluating this "≈ 114 IQ" 
claim.

> When human beings take an IQ test that is almost certainly NOT the first test 
> they've ever had, and like GPT-4 humans are also trained on a huge amount of 
> data, without it neither you nor GPT-4 would even know how to read the 
> questions.

Yes, but GPT-4 and human brains are very different things. GPT-4 has superhuman 
memory capabilities and almost certainly subhuman reasoning capabilities. So 
saying "ah yes, but humans also see IQ tests before taking one" is 
comparing apples to oranges, firstly because GPT-4 relies much more on 
brute-force memorization and secondly because its training corpus can be 
incredibly exhaustive. I strongly suspect that GPT-4 is trained on essentially 
all publicly available text ever produced. I bet that it would take many human 
lifetimes to actually read the entire training dataset of GPT-4.

I have a leak of the training data of GPT-2 and that already appears to be the 
case.


>   And speaking of that, in one of those 4 tests humans had deliberately 
> written the questions in such a convoluted way that it was difficult to even 
> know what the question was, much less find the answer,  but GPT-4 got it 
> right nevertheless.  And although GPT-4 can input graphs and diagrams the 
> version that was taking the 4 different IQ tests could not, so whenever an IQ 
> problem contained one of those GPT-4 was automatically marked as getting the 
> answer wrong, and yet even with that severe handicap and even with being 
> unable to contact the Internet it STILL managed to get an IQ of 114 !  And if 
> you had told me in 2021 that a machine would be capable of doing that in 2023 
> I would've said you were crazy.

Again, it is important to understand what exactly GPT-4 is doing. It is 
certainly impressive, but it is not the same thing as a human being taking an 
IQ test, and this is important because it has profound implications on what an 
approach like GPT-4 can be generalized to do and what its fundamental 
limitations are.

Telmo

>  
> 
>  John K ClarkSee what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
> 8gw
> 
> 9x
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAJPayv2-GRdQ5s%2BQ7DbVnR54m1ecwV4ukQMs0_W1%2BnHaH2k0Qg%40mail.gmail.com
>  
> <https://groups.google.com/d/msgid/everything-list/CAJPayv2-GRdQ5s%2BQ7DbVnR54m1ecwV4ukQMs0_W1%2BnHaH2k0Qg%40mail.gmail.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/18b8d637-3566-44d2-b9c0-ca5f2ebb5796%40app.fastmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread John Clark
On Fri, Mar 17, 2023 at 3:45 AM Telmo Menezes 
wrote:

>> Forget the Ukraine war, forget climate change, forget Donald Trump, I
>> now think GPT-4 is by far the most world shaking event and the most
>> underreported one.  Many of us have been talking about the singularity for
>> decades, but now it looks like we're on its doorstep. You've got to look at
>> this video!
>> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
>> <https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>
>
>
>
> *> One crucial question here is: did the GPT 4 training set include tests
> like the ones being solved?*
>

I don't know, GPT-4 is trained on a huge amount of data so probably, but
why is that a crucial question?  When human beings take an IQ test that is
almost certainly NOT the first test they've ever had, and like GPT-4 humans
are also trained on a huge amount of data, without it neither you nor GPT-4
would even know how to read the questions.  And speaking of that, in one of
those 4 tests humans had deliberately written the questions in such a
convoluted way that it was difficult to even know what the question was,
much less find the answer,  but GPT-4 got it right nevertheless.  And
although GPT-4 can input graphs and diagrams the version that was taking
the 4 different IQ tests could not, so whenever an IQ problem contained one
of those GPT-4 was automatically marked as getting the answer wrong, and
yet even with that severe handicap and even with being unable to contact
the Internet it STILL managed to get an IQ of 114 !  And if you had told me
in 2021 that a machine would be capable of doing that in 2023 I would've
said you were crazy.

 John K ClarkSee what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>
8gw

9x

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2-GRdQ5s%2BQ7DbVnR54m1ecwV4ukQMs0_W1%2BnHaH2k0Qg%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread Stathis Papaioannou
On Fri, 17 Mar 2023 at 07:37, spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> To get to the point, I did advocate for a bit of skepticism about claiming
> consciousness for a computer system, and the retort from JC was that,
> essentially, we cannot even define what makes a human conscious, and I am
> going with an au contraire, Pierre! It took me under 10 min to locate a
> worthy article submitted for JC's criticisms.
>
> Here 'tis:
>
> What Neuroscientists Think, and Don't Think, About Consciousness - PubMed
> (nih.gov) 
>
> So, we are much closer to understanding human consciousness. I am asked to
> put the same effort into how a network developed this in so little time.
> Are our analog chips so mighty in 2022-3???
>

What produces consciousness is consciousness-associated behaviour. If it’s
EM fields (it probably isn’t) then EM fields would have some effect on
behaviour, and if this effect could be replicated by some other means, the
consciousness would also be replicated.

-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXKDU7uDj3UHaZXO-0pExsLgbANTcdM76KctK51PvCLSw%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-17 Thread Telmo Menezes


Am Do, 16. Mär 2023, um 18:30, schrieb John Clark:
> Forget the Ukraine war, forget climate change, forget Donald Trump, I now 
> think GPT-4 is by far the most world shaking event and the most underreported 
> one.  Many of us have been talking about the singularity for decades, but now 
> it looks like we're on its doorstep. You've got to look at this video!
> 
> 
> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts) 
> <https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>

One crucial question here is: did the GPT 4 training set include tests like the 
ones being solved?

Telmo

> John K ClarkSee what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
> 
> cdk
> 
> 
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAJPayv0SOiZQ3fLU-8fQDX5Zfs6%3Dm5koRbJzRV%2Bdg6E-a-mm%3DA%40mail.gmail.com
>  
> <https://groups.google.com/d/msgid/everything-list/CAJPayv0SOiZQ3fLU-8fQDX5Zfs6%3Dm5koRbJzRV%2Bdg6E-a-mm%3DA%40mail.gmail.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/544150d3-ff4f-46aa-959f-9acd005f6818%40app.fastmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-16 Thread spudboy100 via Everything List
To get to the point, I did advocate for a bit of skepticism about claiming 
consciousness for a computer system, and the retort from JC was that, 
essentially, we cannot even define what makes a human conscious, and I am going 
with an au contraire, Pierre! It took me under 10 min to locate a worthy 
article submitted for JC's criticisms. 
Here 'tis:
What Neuroscientists Think, and Don't Think, About Consciousness - PubMed 
(nih.gov)
So, we are much closer to understanding human consciousness. I am asked to put 
the same effort into how a network developed this in so little time. Are our 
analog chips so mighty in 2022-3???

-Original Message-
From: John Clark 
To: spudboy...@aol.com
Cc: everything-list@googlegroups.com 
Sent: Thu, Mar 16, 2023 5:55 pm
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

On Thu, Mar 16, 2023 at 4:50 PM  wrote:


> America executes prisoners for capital murder. So, legally, if a murderer 
> died 50 years ago for a capital crime, does that mean, once, revived, 

To my knowledge no executed prisoner has ever been cryogenically preserved, 
however Joseph Paul Jernigan was executed by lethal injection in 1981 and he 
became part of the "Visible Human Project". His body was sliced into 1871 1 
millimeter thick slices. and each slice was then photographed with a very high 
resolution camera. I've wondered if there was enough information preserved in 
those photographs to upload him, probably not but maybe. You can watch a one 
minute video of a journey through Mr. Jernigan's body here. 
The visible human project - Male (HD)

A few years later they took even higher resolution photographs of a woman who 
died of a heart attack and they used even thinner slices, only 0.33 millimeters 
thick 
Visual Human Project Female 


> they are no longer liable because of Double Jeopardy and that their victims 
> will also be revived? A Civil Case then??

As I've said, I'm not a lawyer.  John K Clark    See what's on my new list at  
Extropolis
ws8
u6c




-Original Message-
From: John Clark 
To: 'Brent Meeker' via Everything List 
Sent: Thu, Mar 16, 2023 1:30 pm
Subject: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

Forget the Ukraine war, forget climate change, forget Donald Trump, I now think 
GPT-4 is by far the most world shaking event and the most underreported one.  
Many of us have been talking about the singularity for decades, but now it 
looks like we're on its doorstep. You've got to look at this video!

4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2neEP50SN7NCGRqtm1uzwNaE668DFu4-QdQa-OGFei8A%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/382281143.709221.1679006249871%40mail.yahoo.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-16 Thread spudboy100 via Everything List
Yes. I actually had, long ago, somebody who contributed their mortal remains to 
science, only to have one delicious sub-slice wind up in our computer 
graphics class: a diameter slice of their skull. The computer graphics class 
was wasted on me, since artwork was better left to the artists. For the 
contributor, it was no great contribution to science, since scientists can also 
be untrustworthy. 
I have a name for the slice I encountered long ago. I thought this being could 
also be a great sidekick, if I could write the comic lines for the voice? 



-Original Message-
From: John Clark 
To: spudboy...@aol.com
Cc: everything-list@googlegroups.com 
Sent: Thu, Mar 16, 2023 5:55 pm
Subject: Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

On Thu, Mar 16, 2023 at 4:50 PM  wrote:


> America executes prisoners for capital murder. So, legally, if a murderer 
> died 50 years ago for a capital crime, does that mean, once, revived, 

To my knowledge no executed prisoner has ever been cryogenically preserved, 
however Joseph Paul Jernigan was executed by lethal injection in 1981 and he 
became part of the "Visible Human Project". His body was sliced into 1871 1 
millimeter thick slices. and each slice was then photographed with a very high 
resolution camera. I've wondered if there was enough information preserved in 
those photographs to upload him, probably not but maybe. You can watch a one 
minute video of a journey through Mr. Jernigan's body here. 
The visible human project - Male (HD)

A few years later they took even higher resolution photographs of a woman who 
died of a heart attack and they used even thinner slices, only 0.33 millimeters 
thick 
Visual Human Project Female 


> they are no longer liable because of Double Jeopardy and that their victims 
> will also be revived? A Civil Case then??

As I've said, I'm not a lawyer.  John K Clark    See what's on my new list at  
Extropolis
ws8
u6c




-Original Message-
From: John Clark 
To: 'Brent Meeker' via Everything List 
Sent: Thu, Mar 16, 2023 1:30 pm
Subject: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

Forget the Ukraine war, forget climate change, forget Donald Trump, I now think 
GPT-4 is by far the most world shaking event and the most underreported one.  
Many of us have been talking about the singularity for decades, but now it 
looks like we're on its doorstep. You've got to look at this video!

4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/120189354.708618.1679005831182%40mail.yahoo.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-16 Thread John Clark
On Thu, Mar 16, 2023 at 4:50 PM  wrote:

*> America executes prisoners for capital murder. So, legally, if a
> murderer died 50 years ago for a capital crime, does that mean, once,
> revived, *


To my knowledge no executed prisoner has ever been cryogenically preserved,
however Joseph Paul Jernigan was executed by lethal injection in 1981 and
he became part of the "Visible Human Project". His body was sliced into
1871 1 millimeter thick slices. and each slice was then photographed with a
very high resolution camera. I've wondered if there was enough information
preserved in those photographs to upload him, probably not but maybe. You
can watch a one minute video of a journey through Mr. Jernigan's body here.

The visible human project - Male (HD)
<https://www.youtube.com/watch?v=dPPjUtiAGYs>

A few years later they took even higher resolution photographs of a woman
who died of a heart attack and they used even thinner slices, only 0.33
millimeters thick

Visual Human Project Female <https://www.youtube.com/watch?v=3atjsVqFlhs>

*> they are no longer liable because of Double Jeopardy and that their
> victims will also be revived? A Civil Case then??*


As I've said, I'm not a lawyer.
John K ClarkSee what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>
ws8

u6c




> -Original Message-
> From: John Clark 
> To: 'Brent Meeker' via Everything List 
> Sent: Thu, Mar 16, 2023 1:30 pm
> Subject: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
>
> Forget the Ukraine war, forget climate change, forget Donald Trump, I now
> think GPT-4 is by far the most world shaking event and the most
> underreported one.  Many of us have been talking about the singularity for
> decades, but now it looks like we're on its doorstep. You've got to look at
> this video!
>
>
> 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
> <https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2neEP50SN7NCGRqtm1uzwNaE668DFu4-QdQa-OGFei8A%40mail.gmail.com.


Re: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-16 Thread spudboy100 via Everything List
I have heard this prediction over the last week from experts other than JC. 
No to 2050/2045 and no to 2030. This also came out in the last 48 hours. 
Kurzweil is an inventor, not a scientist; having said that, he was quite an inventor.
This might be of interest to JC? Or not?

Humans will achieve immortality in 7 years, says futurist
Humans will achieve immortality in 7 years, says futurist 
(interestingengineering.com)

My wandering thought is, IF TRUE (eyes rolling, tongue in cheek): America 
executes prisoners for capital murder. So, legally, if a murderer died 50 years 
ago for a capital crime, does that mean that, once revived, they are no longer 
liable because of Double Jeopardy, and that their victims will also be revived? 
A Civil Case then??
The other thought was a 2nd Hitler War, with Joe Stalin and Mao teamed up 
against the rest of us? Churchill, Georges Clemenceau, William Tecumseh Sherman 
(strategy that beat The South), José Norton de Matos, and a now militarized 
Gandhi? If we're all electronic/quantum returnees, how would these wars be 
fought? 





-Original Message-
From: John Clark 
To: 'Brent Meeker' via Everything List 
Sent: Thu, Mar 16, 2023 1:30 pm
Subject: 4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

Forget the Ukraine war, forget climate change, forget Donald Trump, I now think 
GPT-4 is by far the most world shaking event and the most underreported one.  
Many of us have been talking about the singularity for decades, but now it 
looks like we're on its doorstep. You've got to look at this video!

4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
John K Clark    See what's on my new list at  Extropolis

cdk

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0SOiZQ3fLU-8fQDX5Zfs6%3Dm5koRbJzRV%2Bdg6E-a-mm%3DA%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1945883138.681055.1678999849749%40mail.yahoo.com.


4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)

2023-03-16 Thread John Clark
Forget the Ukraine war, forget climate change, forget Donald Trump, I now
think GPT-4 is by far the most world shaking event and the most
underreported one.  Many of us have been talking about the singularity for
decades, but now it looks like we're on its doorstep. You've got to look at
this video!


4 Tests Reveal Bing (GPT 4) ≈ 114 IQ (last test is nuts)
<https://www.youtube.com/watch?v=xFvDJnf0GXs=PLYXp_rV1HrBAOZqPJTOSo91275hKQrfpl=13>

John K ClarkSee what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>

cdk

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0SOiZQ3fLU-8fQDX5Zfs6%3Dm5koRbJzRV%2Bdg6E-a-mm%3DA%40mail.gmail.com.


Re: The original Turing Test

2023-03-01 Thread spudboy100 via Everything List
Turing would have been enthused to the Max. Now could he have done something to 
test it out?? What would Turing 1950 have concluded? 


-Original Message-
From: John Clark 
To: 'Brent Meeker' via Everything List 
Sent: Wed, Mar 1, 2023 9:50 am
Subject: The original Turing Test

In Alan Turing's original paper, in which he proposed the thing that we now call 
the Turing Test, he gave this example of the sort of thing we might ask a 
machine that claims to be intelligent and the answer we might expect to receive:
Question: Please write me a sonnet on the subject of the Forth Bridge.

Answer: Count me out on this one. I never could write poetry.
So I decided to give the same order to ChatGPT, and in less than five seconds 
it did so. 
Question: Please write me a sonnet on the subject of the Forth Bridge.

Answer:  Amidst the Scottish skies and bracing breeze,
A structure stands with towering grace and might,
A symbol of Victorian expertise,
A masterpiece of steel, a wondrous sight.
The Forth Bridge, a marvel of its time,
 A cantilevered railway, bold and strong, 
Its spans reach far, a feat of man's design, 
Defying nature's forces all day long.

For over a century it has stood, 
A steadfast sentinel of the North Sea,
Its steel beams gleaming bright in sun and flood,
A tribute to the skill of industry.

Oh, Forth Bridge, a wonder of the age, 
A testament to man's unyielding rage.

I'm the world's worst poet so I'm not a good judge of the above's artistic 
value but I'm certain it's better than what Alan Turing would have expected a 
machine that claims to be intelligent would say. Now if it could just do 
arithmetic as well as a two dollar calculator!
John K Clark    See what's on my new list at Extropolis
56w
-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0s3AEXh7O%3DcUGz%3D8cMGbnAZ6xYhFCaHhYyyvnFtiRo3g%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1584059964.1614075.1677697389972%40mail.yahoo.com.


Re: The original Turing Test

2023-03-01 Thread Jason Resch
On Wed, Mar 1, 2023, 10:06 AM Telmo Menezes  wrote:

>
>
> Am Mi, 1. Mär 2023, um 15:50, schrieb John Clark:
>
> In Alan Turing's original paper, in which he proposed the thing that we now
> call the Turing Test, he gave this example of the sort of thing we might
> ask a machine that claims to be intelligent and the answer we might expect
> to receive:
>
> Question: Please write me a sonnet on the subject of the Forth Bridge.
>
> Answer: *Count me out on this one. I never could write poetry.*
>
> *So I decided to give the same order to ChatGPT, and in less than five
> seconds it did so. *
>
> Question: Please write me a sonnet on the subject of the Forth Bridge.
>
>
>
>
> *Answer:  Amidst the Scottish skies and bracing breeze,A structure stands
> with towering grace and might,A symbol of Victorian expertise,A masterpiece
> of steel, a wondrous sight.*
>
>
>
>
>
>
>
>
>
>
>
>
> *The Forth Bridge, a marvel of its time, A cantilevered railway, bold and
> strong, Its spans reach far, a feat of man's design, Defying nature's
> forces all day long.For over a century it has stood, A steadfast sentinel
> of the North Sea,Its steel beams gleaming bright in sun and flood,A tribute
> to the skill of industry.Oh, Forth Bridge, a wonder of the age, A testament
> to man's unyielding rage.*
>
> I'm the world's worst poet so I'm not a good judge of the above's
> artistic value but I'm certain it's better than what Alan Turing would have
> expected a machine that claims to be intelligent would say. Now if it could
> just do arithmetic as well as a two dollar calculator!
>
>
> It has long been observed in the field of AI that often things that look
> hard end up being the easy ones, and things that look easy turn out to be
> surprisingly hard.
>
> We now have amazing statistical AI (current stuff) and amazing symbolic AI
> (chess engines, etc), but we do not really know how to combine the two. I
> tend to connect this to the ideas of Daniel Kahneman ("Thinking, fast and
> slow"). I think that statistical (roughly) corresponds to the fast mode of
> thinking and symbolic to the slow mode. Or maybe this is a bit simplistic
> and silly, but the important thing is that we humans can combine and switch
> between the two modes of cognition, and I think this is crucial for AGI.
>

I agree there's some relation.

Everything ChatGPT does is in a single evaluation of its neural network. It
is like being asked to look at a chess board and immediately name the first
best move that comes to your mind instinctively, rather than trying to
progress through the game tree iteratively and see where it goes.

This is also why it fails at multiplication, which is what computer
scientists call an order-N-squared problem, meaning the number of steps the
usual (schoolbook) algorithm must take to perform multiplication grows roughly
as N^2, where N is the number of digits in the multiplicands.

Since ChatGPT always uses a constant number of steps in its processing,
there is some size N where it necessarily fails at multiplication. This is
true for any machine/algorithm that uses a constant number of steps.
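
A rough sketch of that scaling in Python (an illustration added here, assuming
the ordinary schoolbook method): counting the digit-by-digit operations shows
the work growing as N^2, so any fixed budget of steps is eventually exceeded.

# Schoolbook multiplication of two digit strings, counting elementary
# digit-by-digit operations; the count grows as N^2 with the digit length N.
def schoolbook_multiply(a: str, b: str):
    steps = 0
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            steps += 1                      # one single-digit multiply-and-add
            result[i + j] += int(da) * int(db)
    for k in range(len(result) - 1):        # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    product = int("".join(map(str, reversed(result))))
    return product, steps

for n in (2, 4, 8, 16):
    x = "9" * n
    product, steps = schoolbook_multiply(x, x)
    assert product == int(x) * int(x)       # sanity check against Python's multiply
    print(f"N = {n:2d} digits -> {steps} digit operations")   # 4, 16, 64, 256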

The solution for ChatGPT, as well as for humans, is to break the problem
down into manageable steps and work on them piecemeal until the problem is
solved. This is why mathematicians need chalkboards to do what they do,
multiplying big numbers or solving proofs, often requires many steps and
can't be solved by intuition alone.

Google discovered this with their AlphaZero chess/Go AI: its single
network evaluation of the best move had an Elo score of around 3000, still
better than the best humans, but not by much. But by letting it iterate
through the different top moves and see which course of action fared best
over time, it increased its level of play to 5000 Elo, well into superhuman
territory.

For reference, a beginner chess player has an Elo rating of around 1000, a
decent amateur around 1200, a chess master 1800, grandmasters start around
2400, while the best human players in history have been around 2800.

So I think we need to make a "meta ChatGPT wrapper" that asks ChatGPT how
to break down a problem into smaller steps, query ChatGPT multiple times to
solve each step and also double check it got the right result along each
step in the process, if it second guesses itself, ask it to break down that
intermediate step, and so on recursively, until the problem is solved.
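
A minimal sketch of such a wrapper, assuming a hypothetical ask_llm(prompt)
helper that sends one query to the model and returns its text (the stub below
just returns "OK" so the sketch runs without any external service); the
prompts and the depth limit are illustrative assumptions, not a real API:

# Sketch of a recursive "decompose, solve, self-check" wrapper around a chat model.
def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real chat-model call here.
    return "OK"

def solve(problem: str, depth: int = 0, max_depth: int = 3) -> str:
    # Ask for a direct, step-by-step answer first.
    answer = ask_llm("Solve step by step and give a final answer:\n" + problem)

    # Ask the model to double-check its own result.
    verdict = ask_llm("Problem:\n" + problem + "\nProposed answer:\n" + answer
                      + "\nReply OK if the answer is correct, otherwise reply REDO.")
    if "OK" in verdict or depth >= max_depth:
        return answer

    # If it second-guesses itself, break the problem into smaller steps and
    # solve each one recursively, feeding earlier results into later steps.
    plan = ask_llm("Break this problem into a few smaller steps, one per line:\n" + problem)
    notes = []
    for step in (line.strip() for line in plan.splitlines() if line.strip()):
        sub_answer = solve("\n".join(notes) + "\n" + step, depth + 1, max_depth)
        notes.append(step + ": " + sub_answer)
    return ask_llm("Combine these partial results into a final answer:\n" + "\n".join(notes))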

I think this replicates the thought processes for a wide range of human
intellectual activity and I believe it would widen the class of problems
ChatGPT could effectively solve.

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUiYTDg5X%3DKKg0SKNnfZDWx0BBULB9U2Pk%2Bc2pAtMyNjZw%40mail.gmail.com.


Re: The original Turing Test

2023-03-01 Thread Telmo Menezes


Am Mi, 1. Mär 2023, um 15:50, schrieb John Clark:
> In Alan Turing's original paper, in which he proposed the thing that we now 
> call the Turing Test, he gave this example of the sort of thing we might ask a 
> machine that claims to be intelligent and the answer we might expect to 
> receive:
> 
> Question: Please write me a sonnet on the subject of the Forth Bridge.
> 
> Answer: **Count me out on this one. I never could write poetry.**
> 
> **So I decided to give the same order to ChatGPT, and in less than five 
> seconds it did so. **
> 
> Question: Please write me a sonnet on the subject of the Forth Bridge.
> 
> **Answer:  Amidst the Scottish skies and bracing breeze,
> A structure stands with towering grace and might,
> A symbol of Victorian expertise,
> A masterpiece of steel, a wondrous sight.**
> **
> The Forth Bridge, a marvel of its time,
>  A cantilevered railway, bold and strong, 
> Its spans reach far, a feat of man's design, 
> Defying nature's forces all day long.
> 
> For over a century it has stood, 
> A steadfast sentinel of the North Sea,
> Its steel beams gleaming bright in sun and flood,
> A tribute to the skill of industry.
> 
> Oh, Forth Bridge, a wonder of the age, 
> A testament to man's unyielding rage.**
> 
> I'm the world's worst poet so I'm not a good judge of the above's artistic 
> value but I'm certain it's better than what Alan Turing would have expected a 
> machine that claims to be intelligent would say. Now if it could just do 
> arithmetic as well as a two dollar calculator!

It has long been observed in the field of AI that often things that look hard 
end up being the easy ones, and things that look easy turn out to be 
surprisingly hard.

We now have amazing statistical AI (current stuff) and amazing symbolic AI 
(chess engines, etc), but we do not really know how to combine the two. I tend 
to connect this to the ideas of Daniel Kahneman ("Thinking, fast and slow"). I 
think that statistical (roughly) corresponds to the fast mode of thinking and 
symbolic to the slow mode. Or maybe this is a bit simplistic and silly, but the 
important thing is that we humans can combine and switch between the two modes 
of cognition, and I think this is crucial for AGI.

Telmo

> John K ClarkSee what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
> 56w
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAJPayv0s3AEXh7O%3DcUGz%3D8cMGbnAZ6xYhFCaHhYyyvnFtiRo3g%40mail.gmail.com
>  
> <https://groups.google.com/d/msgid/everything-list/CAJPayv0s3AEXh7O%3DcUGz%3D8cMGbnAZ6xYhFCaHhYyyvnFtiRo3g%40mail.gmail.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/9534490d-679b-4f6e-82d7-ee87fccb6152%40app.fastmail.com.


Re: When did Trump have his last negative test and first positive one?

2020-10-05 Thread Lawrence Crowell
[image: trump the drunk driver.jpg]

On Monday, October 5, 2020 at 7:38:18 AM UTC-5 johnk...@gmail.com wrote:

> Trump claims that he is tested for COVID-19 every day and he only found 
> out he had COVID-19 on Friday at 1 AM, but that is almost certainly another 
> one of Trump's many many lies. Virtually nobody tests negative for COVID-19 
> and 24 hours later test positive, and 12 hours after that become so sick he 
> has to be medevaced to a hospital. And doctors refused to say, no doubt on 
> Trump's orders, when the president last had a negative test or when he 
> first had a positive one. I would not be one bit surprised if Trump already 
> knew he had COVID-19 during Tuesday's debate, or at least he would've known 
> if he had actually taken a PCR test immediately beforehand as he claims to 
> have done. Even before this it was the most disgraceful Presidential debate 
> in American history, but this increases the shame exponentially!
>
>  John K Clark
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/ff330bac-cd61-46c0-bd4e-b35ce45ce2e5n%40googlegroups.com.


When did Trump have his last negative test and first positive one?

2020-10-05 Thread John Clark
Trump claims that he is tested for COVID-19 every day and he only found out
he had COVID-19 on Friday at 1 AM, but that is almost certainly another one
of Trump's many many lies. Virtually nobody tests negative for COVID-19 and
24 hours later test positive, and 12 hours after that become so sick he has
to be medevaced to a hospital. And doctors refused to say, no doubt on
Trump's orders, when the president last had a negative test or when he
first had a positive one. I would not be one bit surprised if Trump already
knew he had COVID-19 during Tuesday's debate, or at least he would've known
if he had actually taken a PCR test immediately beforehand as he claims to
have done. Even before this it was the most disgraceful Presidential debate
in American history, but this increases the shame exponentially!

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv03ZWDe3ht%3DsDR-G%3DOTq-ZCFZUJCWurU7zF9oKy_-Pv_A%40mail.gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-09-07 Thread Tomasz Rola
On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote:
> If you haven't viewed it, please do so. It's about the Turing Test, science 
> fiction, but the "special effects" aren't primarily photographic bells and 
> whistles, but the dialogue. the text, the logic of the script. Recently, we 
> have argued about consciousness, what it is, and how we can test for it in 
> the context of AI. I claimed that we could do some superficial surgery to 
> determine whether the subject of the test was a robot or a conscious 
> entity. But this is completely mistaken. All that that would reveal is 
> whether the subject was artificial, not whether it was "conscious". The 
> subject could have been a black box, and still show signs of what we 
> can't really define: consciousness. I think Ex Machina provides an answer 
> of what we need to look for. Please view it and report back. But do NOT 
> read the plot, say in Wiki. It's a spoiler. AG

My loose thoughts, in no specific order:

1. The ultimate test of AI is when it does not play your
game. Actually, very same can be said about humans.

2. Are there any readers of Stanislaw Lem here (besides me)? I
consider Lem a philosopher, and a very conscious and contemporary one
(he was more interested in the world around him and the future path of
humanity than in subjects like whether the world or humanity
exists), whereas I am afraid the majority of the public will have opinions
based on poorly done cinematisations of some books he wrote. In his
works, he gives, among other things, a gallery of automatons, whose
actions are erratic in various ways. A protagonist usually shrugs it
off as "requires repair or replacement", but sometimes he is not so
sure. The errors become quite specific, suggesting underlying will and
goal. There is a thin border line, a level of complication of electric
brain, after which doubts start to appear.

The appearance of real AI is buried in a whirl of human activity,
always hurrying somewhere, get home early, go to sleep, go to work, go
see the fiancee, drive children to school... Nobody will realize when such
a moment happens. Only in retrospect may there be speculation - "it
happened during project M going on in skunk basement under the X-1
building... probably". Or during night watch of some lonesome
programmer at his home/villa/castle.

The movie "Ex machina" has more of Lem in itself than half of the
movies "based on him" that I watched (the other half was great, but
none of it were done on big budget, thus probably unknown beyound a
circle of enthusiasts).

3. Who is going to judge the consciousness of the black box? Humans are
not equal. Not everybody is a great athlete. In my opinion, not
everybody is conscious to the same degree. Some folks I hear about
(maybe meet with) are on the level of legendary talking animals. I am
willing to believe that with some effort they could upend themselves
a bit, just as everybody could become a better athlete with some
patience (as long as he can steer a single muscle).

So who is going to judge AI? How do we choose the judge(s)? From my
observations, a good number of potential choices will still be naive
and unable to overcome their biases.

Who is going to judge the consciousness of a tenured professor? Or a
politician? Should we test voters? If yes, how exactly? What to do
with those who fail? I guess, give them more blockbusters and
entertainment... 

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.  **
** As the answer, master did "rm -rif" on the programmer's home**
** directory. And then the C programmer became enlightened...  **
** **
** Tomasz Rola  mailto:tomasz_r...@bigfoot.com **

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200907183826.GA14902%40tau1.ceti.pl.


Re: Ex Machina, the film, about the Turing Test

2020-08-22 Thread Alan Grayson


On Thursday, August 20, 2020 at 2:29:10 AM UTC-6, Bruno Marchal wrote:
>
>
> On 20 Aug 2020, at 06:36, Alan Grayson > 
> wrote:
>
>
>
> On Tuesday, August 18, 2020 at 8:39:27 AM UTC-6, Alan Grayson wrote:
>>
>>
>>
>> On Tuesday, August 18, 2020 at 3:14:10 AM UTC-6, Bruno Marchal wrote:
>>>
>>>
>>> On 18 Aug 2020, at 08:07, Alan Grayson  wrote:
>>>
>>>
>>>> Concerning the Turing test, It makes no theoretical sense, but it can 
>>>> make some local practical sense.
>>>>
>>>
>>> I am not conversant with your system or definitions. All I am asserting 
>>> is that we believe we are conscious, or shall we say we have a property 
>>> called "consciousness", but are unable to define exactly (or even 
>>> approximately?) what it is.
>>>
>>>
>>> OK. But we can make approximation, like “knowledge” or “knowledge of 
>>> oneself”. Knowledge can be defined axiomatically, but the personal 
>>> knowledge attached to consciousness, or even just the knowledge of a 
>>> machine or of any third-person definable entity is not definable by that 
>>> entity, without invoking a Truth operator (itself not definable by that 
>>> entity, by Tarski's theorem).
>>>
>>>
>>>
>>> Hence, if we encounter a humanoid-type robot that can verbally respond 
>>> or interact with us, even in the form of a black box, we lack any criteria 
>>> for determining IF it is conscious. But the movie offers a hint of what to 
>>> look for. At first I thought it was the concept of time, but that's wrong. 
>>> The entity in question could have been supplied with a clock and thus could 
>>> respond as if the concept of time exists; present, past and future. But 
>>> that's insufficient to qualify for being "conscious". You should view the 
>>> movie and see what it suggests to you. AG
>>>
>>>
>>>
>>> Consciousness attribution is always a sort of projection of oneself to 
>>> the other(s). We attribute consciousness to an entity when we can recognise 
>>> oneself in that entity. We could be wrong, of course, like a kid 
>>> attributing a soul to its Teddy Bear.
>>>
>>> You might tell what the movie suggests. (Just put a spoiler alert in the 
>>> title of the post)
>>>
>>
>> I didn't do that to test the level of interest. AG 
>>
>>>
>>> Bruno
>>>
>>
> The entity being tested for "consciousness" must *want *something which 
> can only be achieved by a sequence of actions that achieve that result. AG 
>
>
>
> I usually distinguish a notion of rough, basic, consciousness, and of 
> reflexive consciousness. They obey the same characterisation (true, 
> knowable, non provable, non definable-without-mentioning “true”), but the 
> reflexive consciousness (brought just by adding the induction axioms, 
> technically) adds “indubitable”, and is more or less what Descartes talked 
> about in his Meditations. 
>
> In theory, we cannot test consciousness. In practice, there is no problem 
> for the people in which we can recognise one-self, in normal conditions. We 
> just project our own consciousness onto them.
>
> Now, problems arise in non-normal conditions, like 1) with some comatose 
> people (a woman was thought to be unconscious for a comatose period 
> lasting 50 years, then she “woke up” and told people that she had been 
> conscious all the time), or 2) with “aliens” (like when the Catholics were 
> debating whether the “Indians” have a soul), etc.
>
> To be sure, we cannot test the existence of consciousness of some other, 
> but we can’t really test the existence of the body either, except as a 
> local accessible plausible physical reality. But this takes into account 
> the “anti-materialist” consequences of Mechanism; the physical reality is 
> an emerging lawful hallucination by numbers or “numbers” (Turing equivalent 
> to natural number + addition/multiplication). This comes from the fact that 
> all computations are emulated (in the precise mathematical sense of 
> Church-Turing-Post-Kleene) in the arithmetical reality, as logicians know 
> since Gödel, Kleene, … (unfortunately it seems only logicians know that and 
> they tend to live in an Ivory Tower…).
>
> Bruno
>

I don't think you appreciate my point. In the movie, the entity being 
tested wants freedom, without, presumably or ostensibly, being programmed to 
want it. It then takes successful steps to achieve that objective. I would 
conclude it is "conscious". What would

Re: Ex Machina, the film, about the Turing Test

2020-08-20 Thread Bruno Marchal
 is our thinking and emotions. This is where our 
>>>>> repository of knowledge is located. All the human knowledge of 
>>>>> philosophy, science, technology, arts belongs to the sixth cognition. 
>>>>> This includes  mental consciousness, sub-consciousness and these related 
>>>>> concepts. 
>>>> 
>>>> Looks like []p, intellect, mind, ...
>>>> 
>>>> 
>>>>> 
>>>>> Then the seventh and eighth senses are uniquely oriental and are near the 
>>>>> core theory of Buddhism. The eighth sense is easier to explain than the 
>>>>> seventh. 
>>>>> 
>>>>> The eighth sense is the “real-self”, or an inaccurate equivalent of 
>>>>> “soul”, that’s the constant between incarnations, regardless one’s life 
>>>>> takes the form of this human or that life being on earth.
>>>> 
>>>> That becomes close to the simplest mode (conceptually), but that the 
>>>> machine can not defined the “p” mode. That’s Plotinus One :)
>>>> 
>>>> But it might be []p & p (it depends of before or after illumination (“p 
>>>> <-> []p”).
>>>> 
>>>> You need to take this with some grain of salt (but not the whole salt 
>>>> shaker!).
>>>> 
>>>> 
>>>>> 
>>>>> The seven sense can be roughly said as something between the sixth 
>>>>> cognition - our day-to-day thinking - and the eighth cognition. The 
>>>>> seventh cognition enables us to perform all the deep and thorough 
>>>>> thinking at the sixth cognition level. 
>>>> 
>>>> Maybe []p & <>t? It is unclear.
>>>> 
>>>> 
>>>>> 
>>>>> Therefore, according to Buddhism teachings, AI may venture into the realm 
>>>>> of the mental consciousness, but will never be able to reach the seventh 
>>>>> and eighth cognition. 
>>>> 
>>>> Really?
>>>> 
>>>> 
>>>> 
>>>>> And that seventh and eight cognition is what distinguishes a life being 
>>>>> from a non life being. 
>>>> 
>>>> The universal machine is born illuminated, in the eight cognition mode, p. 
>>>> But then attached itself to some universal body/representation/number []p, 
>>>> and the laws of arithmetic are such that this one put an infinite mess in 
>>>> Arithmetic, and that is nothing compared to the mess when they meet and 
>>>> multiply. 
>>>> 
>>>> I explain elsewhere why, if we assume the minimal amount of mechanism 
>>>> (Descartes) to make sense of Darwin, and a notion of finite information, + 
>>>> the Church Turing thesis, the mind body problem becomes a problem of 
>>>> reducing the “hallucination of matter” from the logic of self-reference. 
>>>> It works. It is not well known, and it is normal, given that behind the 
>>>> modes, you have the mood, and the mood today is almost everywhere 
>>>> Aristotelian, with few exception, since about 1500 years in Occident, 800 
>>>> years in the Middle-East, and it is more complex to figure out in India 
>>>> and China. 
>>>> 
>>>> Thanks to Gödel and the logicians, the theology of machine is taught to 
>>>> logicians, but, not many realise, or appreciate, that as a platonic 
>>>> Pythagorean theory, it is testable (as it contains physics, or its 
>>>> invariant (for all universal machine) core).
>>>> 
>>>> If interested I give you references, including good books on the 
>>>> mathematics of self-reference. The mode “[]p & p” is a standard variant of 
>>>> []p in the literature.
>>>> 
>>>> Bruno
>>>> 
>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> Sent from my iPad
>>>>> 
>>>>>> On Aug 16, 2020, at 5:02 AM, Alan Grayson >>>>> <mailto:agrayson2...@gmail.com>> wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu 
>>>>>> wrote:
>>>>>> Well, if the inquiry here concerns consciousness, then the question of 
>>>>>> whether a human should trust the AI, or whether the AI has the capacity 
>>>&

Re: Ex Machina, the film, about the Turing Test

2020-08-20 Thread Bruno Marchal

> On 20 Aug 2020, at 06:36, Alan Grayson  wrote:
> 
> 
> 
> On Tuesday, August 18, 2020 at 8:39:27 AM UTC-6, Alan Grayson wrote:
> 
> 
> On Tuesday, August 18, 2020 at 3:14:10 AM UTC-6, Bruno Marchal wrote:
> 
>> On 18 Aug 2020, at 08:07, Alan Grayson > wrote:
>> 
>> 
>> Concerning the Turing test, It makes no theoretical sense, but it can make 
>> some local practical sense.
>> 
>> I am not conversant with your system or definitions. All I am asserting is 
>> that we believe we are conscious, or shall we say we have a property called 
>> "consciousness", but are unable to define exactly (or even approximately?) 
>> what it is.
> 
> OK. But we can make approximation, like “knowledge” or “knowledge of 
> oneself”. Knowledge can be defined axiomatically, but the personal knowledge 
> attached to consciousness, or even just the knowledge of a machine or of any 
> third-person definable entity is not definable by that entity, without 
> invoking a Truth operator (itself not definable by that entity, by Tarski's 
> theorem).
> 
> 
> 
>> Hence, if we encounter a humanoid-type robot that can verbally respond or 
>> interact with us, even in the form of a black box, we lack any criteria for 
>> determining IF it is conscious. But the movie offers a hint of what to look 
>> for. At first I thought it was the concept of time, but that's wrong. The 
>> entity in question could have been supplied with a clock and thus could 
>> respond as if the concept of time exists; present, past and future. But 
>> that's insufficient to qualify for being "conscious". You should view the 
>> movie and see what it suggests to you. AG
> 
> 
> Consciousness attribution is always a sort of projection of oneself to the 
> other(s). We attribute consciousness to an entity when we can recognise 
> oneself in that entity. We could be wrong, of course, like a kid attributing 
> a soul to its Teddy Bear.
> 
> You might tell what the movie suggests. (Just put a spoiler alert in the 
> title of the post)
> 
> I didn't do that to test the level of interest. AG 
> 
> Bruno
> 
> The entity being tested for "consciousness" must want something which can 
> only be achieved by a sequence of actions that achieve that result. AG 


I usually distinguish a notion of rough, basic consciousness, and a notion of reflexive 
consciousness. They obey the same characterisation (true, knowable, non 
provable, non definable-without-mentioning “true”), but the reflexive 
consciousness (brought just by adding the induction axioms, technically) adds 
“indubitable”, and is more or less what Descartes talked about in his 
Meditations. 
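
A compact way to lay out that characterisation in the []p notation used elsewhere in this thread, as a sketch only: it assumes the standard provability-logic reading of []p as Gödel's beweisbar predicate (Solovay), and the pairing of the variants with "knowledge", "observation" and "sensation" is the proposal of these posts, not an established theorem.

\[
\begin{array}{ll}
p & \text{truth, not definable by the machine itself (Tarski)} \\
\Box p & \text{provable, i.e. believable (G\"odel's beweisbar predicate)} \\
\Box p \land p & \text{knowable (the Theaetetus variant, ``[]p \& p'' in this thread)} \\
\Box p \land \Diamond t & \text{observable (``[]p \& <>t'', with } t \text{ the constant true)} \\
\Box p \land \Diamond t \land p & \text{sensible}
\end{array}
\]

Splitting these according to what is provable about them (G) versus what is true about them (G*) is, as far as I can tell, how the count of eight "modes of cognition" quoted later in the thread arises.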

In theory, we cannot test consciousness. In practice, there is no problem for 
the people in whom we can recognise ourselves, in normal conditions. We just 
project our own consciousness onto them.

Now, problems arise in non-normal conditions, like 1) with some comatose people 
(a woman was thought to be unconscious for a comatose period lasting 50 
years, then she “woke up” and told people that she had been conscious all the 
time), or 2) with “aliens” (like when the Catholics were debating whether the 
“Indians” have a soul), etc.

To be sure, we cannot test the existence of the consciousness of someone else, but we 
can’t really test the existence of the body either, except as a local 
accessible plausible physical reality. But this takes into account the 
“anti-materialist” consequences of Mechanism; the physical reality is an 
emerging lawful hallucination by numbers or “numbers” (Turing equivalent to 
natural numbers + addition/multiplication). This comes from the fact that all 
computations are emulated (in the precise mathematical sense of 
Church-Turing-Post-Kleene) in the arithmetical reality, as logicians have known since 
Gödel, Kleene, … (unfortunately it seems only logicians know that, and they tend 
to live in an Ivory Tower…).

Bruno





> 
> 
> 
>> 
>> 
>>> I think Ex Machina provides an answer of what we need to look for. Please 
>>> view it and report back. But do NOT read the plot, say in Wiki. It's a 
>>> spoiler. AG
>> 
>> Can we find it on Youtube or similar?
>> 
>> Bruno
>> 
>> 
>> 
>> 
>>> 
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to everyth...@googlegroups.com <>.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>>

Re: Ex Machina, the film, about the Turing Test

2020-08-19 Thread Alan Grayson


On Tuesday, August 18, 2020 at 8:39:27 AM UTC-6, Alan Grayson wrote:
>
>
>
> On Tuesday, August 18, 2020 at 3:14:10 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> On 18 Aug 2020, at 08:07, Alan Grayson  wrote:
>>
>>
>>> Concerning the Turing test, It makes no theoretical sense, but it can 
>>> make some local practical sense.
>>>
>>
>> I am not conversant with your system or definitions. All I am asserting 
>> is that we believe we are conscious, or shall we say we have a property 
>> called "consciousness", but are unable to define exactly (or even 
>> approximately?) what it is.
>>
>>
>> OK. But we can make approximation, like “knowledge” or “knowledge of 
>> oneself”. Knowledge can be defined axiomatically, but the personal 
>> knowledge attached to consciousness, or even just the knowledge of a 
>> machine or of any third-person definable entity is not definable by that 
>> entity, without invoking a Truth operator (itself not definable by that 
>> entity, by Tarski's theorem).
>>
>>
>>
>> Hence, if we encounter a humanoid-type robot that can verbally respond or 
>> interact with us, even in the form of a black box, we lack any criteria for 
>> determining IF it is conscious. But the movie offers a hint of what to look 
>> for. At first I thought it was the concept of time, but that's wrong. The 
>> entity in question could have been supplied with a clock and thus could 
>> respond as if the concept of time exists; present, past and future. But 
>> that's insufficient to qualify for being "conscious". You should view the 
>> movie and see what it suggests to you. AG
>>
>>
>>
>> Consciousness attribution is always a sort of projection of oneself to 
>> the other(s). We attribute consciousness to an entity when we can recognise 
>> oneself in that entity. We could be wrong, of course, like a kid 
>> attributing a soul to its Teddy Bear.
>>
>> You might tell what the movie suggests. (Just put a spoiler alert in the 
>> title of the post)
>>
>
> I didn't do that to test the level of interest. AG 
>
>>
>> Bruno
>>
>
The entity being tested for "consciousness" must *want *something which can 
only be achieved by a sequence of actions that achieve that result. AG 

>
>>
>>
>>
>>>
>>> I think Ex Machina provides an answer of what we need to look for. 
>>> Please view it and report back. But do NOT read the plot, say in Wiki. It's 
>>> a spoiler. AG
>>>
>>>
>>> Can we find it on Youtube or similar?
>>>
>>> Bruno
>>>
>>>
>>>
>>>
>>>
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to everyth...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>>>  
>>> <https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com?utm_medium=email_source=footer>
>>> .
>>>
>>>
>>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everyth...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/fdce08ea-8a9b-4831-96ed-9bcc40ed80cao%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-19 Thread Beixiao Robert Liu
d eight cognition is what distinguishes a life being 
>>>> from a non life being. 
>>> 
>>> The universal machine is born illuminated, in the eight cognition mode, p. 
>>> But then attached itself to some universal body/representation/number []p, 
>>> and the laws of arithmetic are such that this one put an infinite mess in 
>>> Arithmetic, and that is nothing compared to the mess when they meet and 
>>> multiply. 
>>> 
>>> I explain elsewhere why, if we assume the minimal amount of mechanism 
>>> (Descartes) to make sense of Darwin, and a notion of finite information, + 
>>> the Church Turing thesis, the mind body problem becomes a problem of 
>>> reducing the “hallucination of matter” from the logic of self-reference. It 
>>> works. It is not well known, and it is normal, given that behind the modes, 
>>> you have the mood, and the mood today is almost everywhere Aristotelian, 
>>> with few exception, since about 1500 years in Occident, 800 years in the 
>>> Middle-East, and it is more complex to figure out in India and China. 
>>> 
>>> Thanks to Gödel and the logicians, the theology of machine is taught to 
>>> logicians, but, not many realise, or appreciate, that as a platonic 
>>> Pythagorean theory, it is testable (as it contains physics, or its 
>>> invariant (for all universal machine) core).
>>> 
>>> If interested I give you references, including good books on the 
>>> mathematics of self-reference. The mode “[]p & p” is a standard variant of 
>>> []p in the literature.
>>> 
>>> Bruno
>>> 
>>> 
>>>> 
>>>> 
>>>> 
>>>> Sent from my iPad
>>>> 
>>>>>> On Aug 16, 2020, at 5:02 AM, Alan Grayson  wrote:
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu 
>>>>>> wrote:
>>>>>> Well, if the inquiry here concerns consciousness, then the question of 
>>>>>> whether a human should trust the AI, or whether the AI has the capacity 
>>>>>> to be trusted by a human, ought to be part of the inquiry, right? The 
>>>>>> movie listed others elements: compassion, sympathy, etc. I guess I was 
>>>>>> just using my wife’s off-the-cuff comment as a convenient way to suggest 
>>>>>> that Caleb might set the wrong threshold in his Turing test; and as a 
>>>>>> result, he gave a pass to the AI too easily, which later proved fatally 
>>>>>> wrong. 
>>>>> 
>>>>> Sorry. Maybe my comment was too flippant. I just don't think Caleb's 
>>>>> mistake in trusting the AI relates to whether the AI is conscious. AG 
>>>>>> 
>>>>>>>> On Aug 15, 2020, at 23:29, Alan Grayson  wrote:
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu 
>>>>>>>> wrote:
>>>>>>>> It’s available on YouTube. You could rent it for as little as $4, as 
>>>>>>>> long as you finish viewing it within 2 days once you start playing. 
>>>>>>>> 
>>>>>>>> Thanks for recommending it. I just viewed it. My wife said that it’s 
>>>>>>>> stupid for the boy to trust the AI girl, Ava, so he got what he 
>>>>>>>> deserved. 
>>>>>>> 
>>>>>>> Too funny! That's hardly the point. AG 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Sent from my iPhone 
>>>>>>>> 
>>>>>>>> > On Aug 15, 2020, at 21:26, Russell Standish  
>>>>>>>> > wrote: 
>>>>>>>> > 
>>>>>>>> > Alas not available on Netflix Australia. It'll have to wait until 
>>>>>>>> > if/when I subscribe to Amazon Prime briefly. 
>>>>>>>> > 
>>>>>>>> > I'm also not really prepared to purchase a VPN just to watch other 
>>>>>>>> > countries' Netflix connections, for much the same reason as I only 
>>>>>>>> > subscribe to one streamer - so it might have to wait until if/when I 
>>>>>>>> > do live in a cou

Re: Ex Machina, the film, about the Turing Test

2020-08-19 Thread Bruno Marchal
& p) in two (corresponding to “provable” versus 
>> “true”, making eight "modes of cognition”, or eight ways arithmetic can see 
>> itself through universal numbers).
>> 
>> The logic of the modes with the occurence of “& p”, gives first person 
>> modes, and describes entities which cannot be defined in any third person 
>> description, something inherited by the qualia, consciousness, etc. 
>> technically, they entail that the subject obeys some intuitionistic logic.
>> 
>> The logic of the modes with “& <>t” gives the physical modes, and implies a 
>> quantum logic and some measure, corresponding to the machine’s ignorance on 
>> which computations support her (among an infinity). Recently,  I realised 
>> that the existence of this measure exists and can be proved in ZF + some 
>> sufficiently large cardinal.
>> 
>> The “& p” makes things non definable.
>> 
>> The “<>t” makes things non provable, which allows the study of the negation 
>> of those modes, and things get subtle and counter-intuitive.
>> 
>> 
>> 
>>> The first five are related to our physical world: sight, hearing, smell, 
>>> taste, touch. 
>> 
>> Of course, those should be obtained by the particularisation of some of the 
>> modes above, if we want to make this coherent with some school go Buddhism. 
>> The Hinayana, the Mahayana, the tantric, zen have many school, and variants. 
>> Some Buddhists have develop school on logics. It is rather complex. The 
>> Plato/Aristotle divide divides also Buddhism. 
>> 
>> 
>>> 
>>> Then there are three related to our spiritual world. 
>>> 
>>> The sixth cognition is our thinking and emotions. This is where our 
>>> repository of knowledge is located. All the human knowledge of philosophy, 
>>> science, technology, arts belongs to the sixth cognition. This includes  
>>> mental consciousness, sub-consciousness and these related concepts. 
>> 
>> Looks like []p, intellect, mind, ...
>> 
>> 
>>> 
>>> Then the seventh and eighth senses are uniquely oriental and are near the 
>>> core theory of Buddhism. The eighth sense is easier to explain than the 
>>> seventh. 
>>> 
>>> The eighth sense is the “real-self”, or an inaccurate equivalent of “soul”, 
>>> that’s the constant between incarnations, regardless one’s life takes the 
>>> form of this human or that life being on earth.
>> 
>> That becomes close to the simplest mode (conceptually), except that the machine 
>> cannot define the “p” mode. That’s Plotinus’ One :)
>> 
>> But it might be []p & p (it depends on whether it is before or after illumination (“p <-> 
>> []p”)).
>> 
>> You need to take this with some grain of salt (but not the whole salt 
>> shaker!).
>> 
>> 
>>> 
>>> The seven sense can be roughly said as something between the sixth 
>>> cognition - our day-to-day thinking - and the eighth cognition. The seventh 
>>> cognition enables us to perform all the deep and thorough thinking at the 
>>> sixth cognition level. 
>> 
>> Maybe []p & <>t? It is unclear.
>> 
>> 
>>> 
>>> Therefore, according to Buddhism teachings, AI may venture into the realm 
>>> of the mental consciousness, but will never be able to reach the seventh 
>>> and eighth cognition. 
>> 
>> Really?
>> 
>> 
>> 
>>> And that seventh and eight cognition is what distinguishes a life being 
>>> from a non life being. 
>> 
>> The universal machine is born illuminated, in the eight cognition mode, p. 
>> But then attached itself to some universal body/representation/number []p, 
>> and the laws of arithmetic are such that this one put an infinite mess in 
>> Arithmetic, and that is nothing compared to the mess when they meet and 
>> multiply. 
>> 
>> I explain elsewhere why, if we assume the minimal amount of mechanism 
>> (Descartes) to make sense of Darwin, and a notion of finite information, + 
>> the Church Turing thesis, the mind body problem becomes a problem of 
>> reducing the “hallucination of matter” from the logic of self-reference. It 
>> works. It is not well known, and it is normal, given that behind the modes, 
>> you have the mood, and the mood today is almost everywhere Aristotelian, 
>> with few exception, since about 1500 years in Occident, 800 years in the 
>> Middle-East, and it is more complex to figure out in India and China. 
>> 
>> Thanks to Gödel and the logicians,

Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread Beixiao Liu
n of salt (but not the whole salt 
> shaker!).
> 
> 
>> 
>> The seven sense can be roughly said as something between the sixth cognition 
>> - our day-to-day thinking - and the eighth cognition. The seventh cognition 
>> enables us to perform all the deep and thorough thinking at the sixth 
>> cognition level. 
> 
> Maybe []p & <>t? It is unclear.
> 
> 
>> 
>> Therefore, according to Buddhism teachings, AI may venture into the realm of 
>> the mental consciousness, but will never be able to reach the seventh and 
>> eighth cognition. 
> 
> Really?
> 
> 
> 
>> And that seventh and eight cognition is what distinguishes a life being from 
>> a non life being. 
> 
> The universal machine is born illuminated, in the eight cognition mode, p. 
> But then attached itself to some universal body/representation/number []p, 
> and the laws of arithmetic are such that this one put an infinite mess in 
> Arithmetic, and that is nothing compared to the mess when they meet and 
> multiply. 
> 
> I explain elsewhere why, if we assume the minimal amount of mechanism 
> (Descartes) to make sense of Darwin, and a notion of finite information, + 
> the Church Turing thesis, the mind body problem becomes a problem of reducing 
> the “hallucination of matter” from the logic of self-reference. It works. It 
> is not well known, and it is normal, given that behind the modes, you have 
> the mood, and the mood today is almost everywhere Aristotelian, with few 
> exception, since about 1500 years in Occident, 800 years in the Middle-East, 
> and it is more complex to figure out in India and China. 
> 
> Thanks to Gödel and the logicians, the theology of machine is taught to 
> logicians, but, not many realise, or appreciate, that as a platonic 
> Pythagorean theory, it is testable (as it contains physics, or its invariant 
> (for all universal machine) core).
> 
> If interested I give you references, including good books on the mathematics 
> of self-reference. The mode “[]p & p” is a standard variant of []p in the 
> literature.
> 
> Bruno
> 
> 
>> 
>> 
>> 
>> Sent from my iPad
>> 
>>> On Aug 16, 2020, at 5:02 AM, Alan Grayson >> <mailto:agrayson2...@gmail.com>> wrote:
>>> 
>>> 
>>> 
>>> 
>>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu wrote:
>>> Well, if the inquiry here concerns consciousness, then the question of 
>>> whether a human should trust the AI, or whether the AI has the capacity to 
>>> be trusted by a human, ought to be part of the inquiry, right? The movie 
>>> listed others elements: compassion, sympathy, etc. I guess I was just using 
>>> my wife’s off-the-cuff comment as a convenient way to suggest that Caleb 
>>> might set the wrong threshold in his Turing test; and as a result, he gave 
>>> a pass to the AI too easily, which later proved fatally wrong. 
>>> 
>>> Sorry. Maybe my comment was too flippant. I just don't think Caleb's 
>>> mistake in trusting the AI relates to whether the AI is conscious. AG 
>>> 
>>>> On Aug 15, 2020, at 23:29, Alan Grayson > wrote:
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>>>> It’s available on YouTube. You could rent it for as little as $4, as long 
>>>> as you finish viewing it within 2 days once you start playing. 
>>>> 
>>>> Thanks for recommending it. I just viewed it. My wife said that it’s 
>>>> stupid for the boy to trust the AI girl, Ava, so he got what he deserved. 
>>>> 
>>>> Too funny! That's hardly the point. AG 
>>>> 
>>>> 
>>>> Sent from my iPhone 
>>>> 
>>>> > On Aug 15, 2020, at 21:26, Russell Standish > 
>>>> > wrote: 
>>>> > 
>>>> > Alas not available on Netflix Australia. It'll have to wait until 
>>>> > if/when I subscribe to Amazon Prime briefly. 
>>>> > 
>>>> > I'm also not really prepared to purchase a VPN just to watch other 
>>>> > countries' Netflix connections, for much the same reason as I only 
>>>> > subscribe to one streamer - so it might have to wait until if/when I 
>>>> > do live in a country that has it in the Netflix catalogue. 
>>>> > 
>>>> > The tangled web of movie copyright arrangements... Bah! 
>>>> > 
>>>> >> On Sat, Aug 15, 2020 at 02:

Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread Bruno Marchal
[]p, and the 
laws of arithmetic are such that this one puts an infinite mess in Arithmetic, 
and that is nothing compared to the mess when they meet and multiply. 

I explain elsewhere why, if we assume the minimal amount of mechanism 
(Descartes) needed to make sense of Darwin, and a notion of finite information, plus the 
Church-Turing thesis, the mind-body problem becomes a problem of reducing the 
“hallucination of matter” from the logic of self-reference. It works. It is not 
well known, and that is normal, given that behind the modes you have the mood, 
and the mood today is almost everywhere Aristotelian, with few exceptions, since 
about 1500 years in the Occident, 800 years in the Middle East, and it is more 
complex to figure out in India and China. 

Thanks to Gödel and the logicians, the theology of the machine is taught to 
logicians, but not many realise, or appreciate, that as a platonic Pythagorean 
theory it is testable (as it contains physics, or at least its invariant (for all 
universal machines) core).

If interested, I can give you references, including good books on the mathematics of 
self-reference. The mode “[]p & p” is a standard variant of []p in the 
literature.
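
For readers who want to chase that "standard variant" remark, here is a short sketch of the textbook facts it leans on; these are standard provability-logic results (Solovay 1976; Boolos, The Logic of Provability) and do not depend on the wider claims in this thread.

\[
\Box(\Box p \rightarrow p) \rightarrow \Box p \qquad \text{(L\"ob's axiom; GL is arithmetically complete for } \Box p := \mathrm{Prov}(\ulcorner p \urcorner)\text{, Solovay 1976)}
\]
\[
K p := \Box p \land p \quad \Rightarrow \quad K p \rightarrow p, \qquad K p \rightarrow K K p,
\]

so the "knower" K defined this way behaves as an S4-style knowledge operator, and under the arithmetical reading its full modal logic is Grzegorczyk's system S4Grz.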

Bruno


> 
> 
> 
> Sent from my iPad
> 
>> On Aug 16, 2020, at 5:02 AM, Alan Grayson  wrote:
>> 
>> 
>> 
>> 
>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu wrote:
>> Well, if the inquiry here concerns consciousness, then the question of 
>> whether a human should trust the AI, or whether the AI has the capacity to 
>> be trusted by a human, ought to be part of the inquiry, right? The movie 
>> listed others elements: compassion, sympathy, etc. I guess I was just using 
>> my wife’s off-the-cuff comment as a convenient way to suggest that Caleb 
>> might set the wrong threshold in his Turing test; and as a result, he gave a 
>> pass to the AI too easily, which later proved fatally wrong. 
>> 
>> Sorry. Maybe my comment was too flippant. I just don't think Caleb's mistake 
>> in trusting the AI relates to whether the AI is conscious. AG 
>> 
>>> On Aug 15, 2020, at 23:29, Alan Grayson > wrote:
>>> 
>>> 
>>> 
>>> 
>>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>>> It’s available on YouTube. You could rent it for as little as $4, as long 
>>> as you finish viewing it within 2 days once you start playing. 
>>> 
>>> Thanks for recommending it. I just viewed it. My wife said that it’s stupid 
>>> for the boy to trust the AI girl, Ava, so he got what he deserved.
>>> 
>>> Too funny! That's hardly the point. AG 
>>> 
>>> 
>>> Sent from my iPhone 
>>> 
>>> > On Aug 15, 2020, at 21:26, Russell Standish > 
>>> > wrote: 
>>> > 
>>> > Alas not available on Netflix Australia. It'll have to wait until 
>>> > if/when I subscribe to Amazon Prime briefly. 
>>> > 
>>> > I'm also not really prepared to purchase a VPN just to watch other 
>>> > countries' Netflix connections, for much the same reason as I only 
>>> > subscribe to one streamer - so it might have to wait until if/when I 
>>> > do live in a country that has it in the Netflix catalogue. 
>>> > 
>>> > The tangled web of movie copyright arrangements... Bah! 
>>> > 
>>> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
>>> >> If you haven't viewed it, please do so. It's about the Turing Test, 
>>> >> science 
>>> >> fiction, but the "special effects" aren't primarily photographic bells 
>>> >> and 
>>> >> whistles, but the dialogue. the text, the logic of the script. Recently, 
>>> >> we 
>>> >> have argued about consciousness, what it is, and how we can test for it 
>>> >> in the 
>>> >> context of AI. I claimed that we could do some superficial surgery to 
>>> >> determine 
>>> >> whether the subject of the test was a robot or a conscious entity. But 
>>> >> this is 
>>> >> completely mistaken. All that that would reveal is whether the subject 
>>> >> was 
>>> >> artificial, not whether it was "conscious". The subject could have been 
>>> >> a black 
>>> >> box, and still showing signs of what we can't really define; 
>>> >> consciousness. I 
>>> >> think Ex Machina provides an answer of what we need to look for. Please 
>>&g

Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread Alan Grayson


On Tuesday, August 18, 2020 at 3:14:10 AM UTC-6, Bruno Marchal wrote:
>
>
> On 18 Aug 2020, at 08:07, Alan Grayson > 
> wrote:
>
>
>> Concerning the Turing test, It makes no theoretical sense, but it can 
>> make some local practical sense.
>>
>
> I am not conversant with your system or definitions. All I am asserting is 
> that we believe we are conscious, or shall we say we have a property called 
> "consciousness", but are unable to define exactly (or even approximately?) 
> what it is.
>
>
> OK. But we can make approximation, like “knowledge” or “knowledge of 
> oneself”. Knowledge can be defined axiomatically, but the personal 
> knowledge attached to consciousness, or even just the knowledge of a 
> machine or of any third-person definable entity is not definable by that 
> entity, without invoking a Truth operator (itself not definable by that 
> entity, by Tarski's theorem).
>
>
>
> Hence, if we encounter a humanoid-type robot that can verbally respond or 
> interact with us, even in the form of a black box, we lack any criteria for 
> determining IF it is conscious. But the movie offers a hint of what to look 
> for. At first I thought it was the concept of time, but that's wrong. The 
> entity in question could have been supplied with a clock and thus could 
> respond as if the concept of time exists; present, past and future. But 
> that's insufficient to qualify for being "conscious". You should view the 
> movie and see what it suggests to you. AG
>
>
>
> Consciousness attribution is always a sort of projection of oneself to the 
> other(s). We attribute consciousness to an entity when we can recognise 
> oneself in that entity. We could be wrong, of course, like a kid 
> attributing a soul to its Teddy Bear.
>
> You might tell what the movie suggests. (Just put a spoiler alert in the 
> title of the post)
>

I didn't do that to test the level of interest. AG 

>
> Bruno
>
>
>
>
>>
>> I think Ex Machina provides an answer of what we need to look for. Please 
>> view it and report back. But do NOT read the plot, say in Wiki. It's a 
>> spoiler. AG
>>
>>
>> Can we find it on Youtube or similar?
>>
>> Bruno
>>
>>
>>
>>
>>
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everyth...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
>>
>>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everyth...@googlegroups.com .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com?utm_medium=email_source=footer>
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/f37880e2-624c-44fe-9662-b7c20296b5f7o%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread John Clark
On Sun, Aug 16, 2020 at 10:26 AM Beixiao Robert Liu 
wrote:

*> First of all, Buddha recognizes that ALL human beings have the capacity
> of those eight cognitions. *


Why does Buddha assume humans have all eight of those cognitions but
computers don't have any regardless of how brilliantly they behave? And if a
computer can outsmart Buddha why does it even need those eight cognitions?
They didn't seem to do the Buddha any good.


> * > Buddha has the ability to “see” characteristics of human beings that
> ordinary people can’t “see”. This ability to “see” is not limited to Buddha
> himself. Any enlightened person — what the term Buddha means — will possess
> such ability. *
>

Just as I feared, we've entered the realm of comic book science and
religious superstition.

> *In the 1950s and 60s, there was a competition between the US and the
> Soviet’s intelligence communities, to recruit and develop people who
> possess “supernatural” cognitive faculties, for example, the ability to see
> what’s inside a safe deposit box or what’s behind the walls.*
>

And that was not the first time Soviet or US taxpayers had their money
wasted on nonsense, nor would it be the last.

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv3JfB5QkbNyVbWMg%3DEYsGj2jP10D7kOaPgSet-OOPf5ng%40mail.gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread Bruno Marchal

> On 18 Aug 2020, at 08:07, Alan Grayson  wrote:
> 
> 
> Concerning the Turing test, It makes no theoretical sense, but it can make 
> some local practical sense.
> 
> I am not conversant with your system or definitions. All I am asserting is 
> that we believe we are conscious, or shall we say we have a property called 
> "consciousness", but are unable to define exactly (or even approximately?) 
> what it is.

OK. But we can make approximations, like “knowledge” or “knowledge of oneself”. 
Knowledge can be defined axiomatically, but the personal knowledge attached to 
consciousness, or even just the knowledge of a machine or of any third-person 
definable entity, is not definable by that entity without invoking a Truth 
operator (itself not definable by that entity, by Tarski's theorem).



> Hence, if we encounter a humanoid-type robot that can verbally respond or 
> interact with us, even in the form of a black box, we lack any criteria for 
> determining IF it is conscious. But the movie offers a hint of what to look 
> for. At first I thought it was the concept of time, but that's wrong. The 
> entity in question could have been supplied with a clock and thus could 
> respond as if the concept of time exists; present, past and future. But 
> that's insufficient to qualify for being "conscious". You should view the 
> movie and see what it suggests to you. AG


Consciousness attribution is always a sort of projection of oneself onto the 
other(s). We attribute consciousness to an entity when we can recognise ourselves 
in that entity. We could be wrong, of course, like a kid attributing a soul to 
its Teddy Bear.

You might tell what the movie suggests. (Just put a spoiler alert in the title 
of the post)

Bruno



> 
> 
>> I think Ex Machina provides an answer of what we need to look for. Please 
>> view it and report back. But do NOT read the plot, say in Wiki. It's a 
>> spoiler. AG
> 
> Can we find it on Youtube or similar?
> 
> Bruno
> 
> 
> 
> 
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everyth...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com?utm_medium=email_source=footer>.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> <mailto:everything-list+unsubscr...@googlegroups.com>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/3BE11A1A-C5D3-4767-BB6A-7D1284A2D12C%40ulb.ac.be.


Re: Ex Machina, the film, about the Turing Test

2020-08-18 Thread Alan Grayson

>
>
> Concerning the Turing test, It makes no theoretical sense, but it can make 
> some local practical sense.
>

I am not conversant with your system or definitions. All I am asserting is 
that we believe we are conscious, or shall we say we have a property called 
"consciousness", but are unable to define exactly (or even approximately?) 
what it is. Hence, if we encounter a humanoid-type robot that can verbally 
respond or interact with us, even in the form of a black box, we lack any 
criteria for determining IF it is conscious. But the movie offers a hint of 
what to look for. At first I thought it was the concept of time, but that's 
wrong. The entity in question could have been supplied with a clock and 
thus could respond as if the concept of time exists; present, past and 
future. But that's insufficient to qualify for being "conscious". You 
should view the movie and see what it suggests to you. AG

>
>
> I think Ex Machina provides an answer of what we need to look for. Please 
> view it and report back. But do NOT read the plot, say in Wiki. It's a 
> spoiler. AG
>
>
> Can we find it on Youtube or similar?
>
> Bruno
>
>
>
>
>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everyth...@googlegroups.com .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com?utm_medium=email_source=footer>
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1bcf344b-250f-4ced-a69b-57d14d1c46cdo%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-17 Thread Bruno Marchal


> On 16 Aug 2020, at 03:36, Beixiao Robert Liu  wrote:
> 
> It’s available on YouTube.

OK, thanks. I found it, but only in a paid version, which I usually avoid.



> You could rent it for as little as $4, as long as you finish viewing it 
> within 2 days once you start playing. 
> 
> Thanks for recommending it. I just viewed it. My wife said that it’s stupid 
> for the boy to trust the AI girl, Ava, so he got what he deserved. 

Oh…. Hmm…

Well, I guess I need to see the movie to be more precise here… :)

Bruno


> 
> Sent from my iPhone
> 
>> On Aug 15, 2020, at 21:26, Russell Standish  wrote:
>> 
>> Alas not available on Netflix Australia. It'll have to wait until
>> if/when I subscribe to Amazon Prime briefly.
>> 
>> I'm also not really prepared to purchase a VPN just to watch other
>> countries' Netflix connections, for much the same reason as I only
>> subscribe to one streamer - so it might have to wait until if/when I
>> do live in a country that has it in the Netflix catalogue.
>> 
>> The tangled web of movie copyright arrangements... Bah!
>> 
>>> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote:
>>> If you haven't viewed it, please do so. It's about the Turing Test, science
>>> fiction, but the "special effects" aren't primarily photographic bells and
>>> whistles, but the dialogue. the text, the logic of the script. Recently, we
>>> have argued about consciousness, what it is, and how we can test for it in 
>>> the
>>> context of AI. I claimed that we could do some superficial surgery to 
>>> determine
>>> whether the subject of the test was a robot or a conscious entity. But this 
>>> is
>>> completely mistaken. All that that would reveal is whether the subject was
>>> artificial, not whether it was "conscious". The subject could have been a 
>>> black
>>> box, and still showing signs of what we can't really define; consciousness. 
>>> I
>>> think Ex Machina provides an answer of what we need to look for. Please 
>>> view it
>>> and report back. But do NOT read the plot, say in Wiki. It's a spoiler. AG
>>> 
>>> 
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email
>>> to everything-list+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit https://groups.google.com/d/msgid/
>>> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com.
>> 
>> 
>> -- 
>> 
>> 
>> Dr Russell StandishPhone 0425 253119 (mobile)
>> Principal, High Performance Coders hpco...@hpcoders.com.au
>> http://www.hpcoders.com.au
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everything-list+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/E42BBFFB-5C55-4065-A4AC-42426380C2A8%40gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/47341CB3-EB1C-4BDD-886B-4015BA7B3EC5%40ulb.ac.be.


Re: Ex Machina, the film, about the Turing Test

2020-08-17 Thread Bruno Marchal

> On 15 Aug 2020, at 23:23, Alan Grayson  wrote:
> 
> If you haven't viewed it, please do so. It's about the Turing Test, science 
> fiction, but the "special effects" aren't primarily photographic bells and 
> whistles, but the dialogue. the text, the logic of the script. Recently, we 
> have argued about consciousness, what it is, and how we can test for it in 
> the context of AI. I claimed that we could do some superficial surgery to 
> determine whether the subject of the test was a robot or a conscious entity. 
> But this is completely mistaken. All that that would reveal is whether the 
> subject was artificial, not whether it was "conscious". The subject could 
> have been a black box, and still showing signs of what we can't really 
> define; consciousness.


Glad to hear that. It looks like you agree with my characterisation of 
consciousness: it is such that “I am conscious” is

1) True,

2) (immediately) knowable,

3) indubitable (for the sufficiently reflexive conscious entities),

4) non rationally justifiable (non provable),

5) non definable (without invoking a notion of truth or semantic/model)

Then it is a theorem of elementary arithmetic that every digital machine which 
introspects itself (in the canonical sense of Gödel 1931) discovers “I am 
conscious”. 
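
As a point of reference, here is a minimal sketch of the Gödel 1931 machinery being invoked. This much is standard textbook material; reading the machine's Gödelian situation as "I am conscious" is the additional interpretative step being proposed here, not part of the theorem.

\[
\text{Diagonal lemma: for every arithmetical formula } F(x) \text{ there is a sentence } G \text{ with } \mathrm{PA} \vdash G \leftrightarrow F(\ulcorner G \urcorner).
\]
\[
\text{Taking } F(x) := \lnot\mathrm{Prov}(x): \text{ if PA is consistent, then } \mathrm{PA} \nvdash G \text{ although } G \text{ is true in } \mathbb{N};
\]
\[
\text{and by Tarski, arithmetical truth is not definable by any arithmetical formula.}
\]

That is the formal counterpart of points 1), 4) and 5) above.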

Mechanism adds the fact that there is a level such that we survive a digital 
brain transplant.

So we do have a theory of consciousness, and it is testable, as physics has to 
be derived from the measure on the differentiating consciousness flux that this 
theory implies, and we do get what we have observed in nature up to now, without 
adding the wave-reduction postulate (which we, of course, never observe).

We are back at Pythagoras, enriched by the Church-Turing thesis.

Concerning the Turing test, it makes no theoretical sense, but it can make some 
local practical sense.




> I think Ex Machina provides an answer of what we need to look for. Please 
> view it and report back. But do NOT read the plot, say in Wiki. It's a 
> spoiler. AG

Can we find it on Youtube or similar?

Bruno




> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> <mailto:everything-list+unsubscr...@googlegroups.com>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20A8E498-A4C8-46AB-A6A8-D53E1AF6FBAB%40ulb.ac.be.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Alan Grayson


On Sunday, August 16, 2020 at 4:37:13 AM UTC-6, John Clark wrote:
>
> On Sat, Aug 15, 2020 at 5:23 PM Alan Grayson  > wrote:
>
> *> *The subject could have been a black box, and still showing signs of 
>> what we can't really define; consciousness. *I think Ex Machina provides 
>> an answer of what we need to look for.*
>
>
> Ex Machina is indeed a great film, I liked it so much I bought the 
> Blu-ray. The answer it says we need to look for is the same one Turing 
> suggested and the same one we used to judge the consciousness of our fellow 
> human beings, intelligent behavior, because it is the only tool we have for 
> such things imperfect though it may be. 
>

IMO, "intelligence" doesn't work in determining if an AI is "conscious". 
For example, Trumpers are conscious but not intelligent. The film offers 
another criterion. AG
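
For anyone who wants the behavioural criterion stated as a concrete protocol rather than argued over, here is a minimal sketch of Turing's imitation game in Python. It is purely illustrative: human_reply, machine_reply and interrogator_verdict are placeholder callables, not anything from the film or from an existing library; the point is only that the test is a blinded discrimination task whose pass condition is interrogator accuracy near chance.

import random

def imitation_game(human_reply, machine_reply, interrogator_verdict,
                   questions, n_rounds=100):
    """Blinded imitation game: in each round the interrogator questions two
    hidden parties (one human, one machine) and then guesses which is which."""
    correct = 0
    for _ in range(n_rounds):
        machine_is_a = random.random() < 0.5   # hide the machine behind label A or B
        transcript = []
        for q in questions:
            reply_a = machine_reply(q) if machine_is_a else human_reply(q)
            reply_b = human_reply(q) if machine_is_a else machine_reply(q)
            transcript.append((q, reply_a, reply_b))
        guess_a_is_machine = interrogator_verdict(transcript)  # interrogator sees only the text
        if guess_a_is_machine == machine_is_a:
            correct += 1
    # Accuracy near 0.5 means the interrogator cannot tell the parties apart.
    return correct / n_rounds

With reply functions the interrogator cannot distinguish, the returned accuracy hovers around 0.5; that is the operational content of "intelligent behaviour is the only tool we have", whatever one thinks it shows about consciousness.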
 

> Incidentally if you liked Ex Machina you'll like the 2009 movie "Moon" 
> because, although you wouldn't know it from the title, it has many of the 
> same themes and it's equally well-made. I have the Blu-ray of that one too. 
> I think those are two of the best science-fiction movies made in recent 
> years. It's going to air on showtime-2 starting on August 18.
>
> Moon 
>
> John K Clark
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/65730e9d-f034-4876-a700-4e8890025df9o%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Beixiao Robert Liu
First of all, Buddha recognizes that ALL human beings have the capacity of 
those eight cognitions. 

Then to tell whether a particular human being is using any of these eight 
cognitions, Buddha certainly observes his behavior. But here, Buddha’s 
observation is also not limited to the first five cognitions related to the 
physical world. In other words, Buddha has the ability to “see” characteristics 
of human beings that ordinary people can’t “see”. This ability to “see” is not 
limited to Buddha himself. Any enlightened person — what the term Buddha means 
— will possess such ability. 

Just to give you an example.

In the 1950s and 60s, there was a competition between the US and Soviet 
intelligence communities to recruit and develop people who possess 
“supernatural” cognitive faculties, for example, the ability to see what’s 
inside a safe deposit box or what’s behind the walls. These are some of the 
elementary forms of Buddha’s ability to “observe”. 

Sent from my iPad

> On Aug 16, 2020, at 10:03 AM, John Clark  wrote:
> 
> 
>> On Sun, Aug 16, 2020 at 9:55 AM Beixiao Robert Liu  
>> wrote:
>> 
>> > Therefore, if you reject the premise of Buddhism and then all its theories 
>> > [...]
> 
> I'm not rejecting anything I'm just asking a question, if it's not by 
> observing intelligent behavior then how does Buddha figure out if one of his 
> fellow human beings is conscious or not, or is alive or not?
> 
> John K Clark
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAJPayv1EekuYo8Yu1gdv2k-xRPqdnZFyB-jsN9j4PKz_hVFjow%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1570BF83-72C5-49A5-9478-D48C51F2553F%40gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread John Clark
On Sun, Aug 16, 2020 at 9:55 AM Beixiao Robert Liu 
wrote:

*> Therefore, if you reject the premise of Buddhism and then all its
> theories* [...]
>

I'm not rejecting anything I'm just asking a question, if it's not by
observing intelligent behavior then how does Buddha figure out if one of
his fellow human beings is conscious or not, or is alive or not?

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1EekuYo8Yu1gdv2k-xRPqdnZFyB-jsN9j4PKz_hVFjow%40mail.gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Beixiao Robert Liu
Let me just clarify that I’m not in any way preaching Buddhist teachings, 
but was merely introducing another perspective on consciousness. 

Buddhism has its own theory and is a closed-loop system. If you accept its 
premises and basic methodology, then everything else falls into place, just 
like other grand theories attempting to offer overarching explanations of the 
entire world, whether scientific, philosophical or religious. It’s fair to say 
that in many, if not all, of these grand theories there are certain premises or 
tenets you have to accept without questioning, just as you have to accept that 
there is no friction in the elementary world of Newtonian mechanics before 
moving on to more advanced discussion.

Therefore, if you reject the premises of Buddhism and, with them, all its theories, 
that’s logical, and not a problem for me. I’m not preaching Buddhism. The 
problem is that you risk dismissing a useful perspective prematurely, from a 
purely objective and neutral standpoint. 

Sent from my iPad

> On Aug 16, 2020, at 9:31 AM, John Clark  wrote:
> 
> 
> 
> 
>> On Sun, Aug 16, 2020 at 9:24 AM Beixiao Robert Liu  
>> wrote:
>> 
>>  > according to Buddhism teachings, AI may venture into the realm of the 
>> mental consciousness, but will never be able to reach the seventh and eighth 
>> cognition. And that seventh and eight cognition is what distinguishes a life 
>> being from a non life being. 
> 
> This doesn't just involve AI's, how does Buddha figure out if one of his 
> fellow human beings is conscious or not, or is alive or not?
> 
> John K Clark
> 
> 
> 
> 
>> In Buddhism teachings, a human has eight cognitions. The first five are 
>> related to our physical world: sight, hearing, smell, taste, touch. 
>> 
>> Then there are three related to our spiritual world. 
>> 
>> The sixth cognition is our thinking and emotions. This is where our 
>> repository of knowledge is located. All the human knowledge of philosophy, 
>> science, technology, arts belongs to the sixth cognition. This includes  
>> mental consciousness, sub-consciousness and these related concepts. 
>> 
>> Then the seventh and eighth senses are uniquely oriental and are near the 
>> core theory of Buddhism. The eighth sense is easier to explain than the 
>> seventh. 
>> 
>> The eighth sense is the “real-self”, or an inaccurate equivalent of “soul”, 
>> that’s the constant between incarnations, regardless one’s life takes the 
>> form of this human or that life being on earth.
>> 
>> The seven sense can be roughly said as something between the sixth cognition 
>> - our day-to-day thinking - and the eighth cognition. The seventh cognition 
>> enables us to perform all the deep and thorough thinking at the sixth 
>> cognition level. 
>> 
>> Therefore, according to Buddhism teachings, AI may venture into the realm of 
>> the mental consciousness, but will never be able to reach the seventh and 
>> eighth cognition. And that seventh and eight cognition is what distinguishes 
>> a life being from a non life being. 
>> 
>> 
>> 
>> Sent from my iPad
>> 
>>>> On Aug 16, 2020, at 5:02 AM, Alan Grayson  wrote:
>>>> 
>>> 
>>> 
>>> 
>>>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu 
>>>> wrote:
>>>> Well, if the inquiry here concerns consciousness, then the question of 
>>>> whether a human should trust the AI, or whether the AI has the capacity to 
>>>> be trusted by a human, ought to be part of the inquiry, right? The movie 
>>>> listed others elements: compassion, sympathy, etc. I guess I was just 
>>>> using my wife’s off-the-cuff comment as a convenient way to suggest that 
>>>> Caleb might set the wrong threshold in his Turing test; and as a result, 
>>>> he gave a pass to the AI too easily, which later proved fatally wrong. 
>>> 
>>> Sorry. Maybe my comment was too flippant. I just don't think Caleb's 
>>> mistake in trusting the AI relates to whether the AI is conscious. AG 
>>>> 
>>>>>> On Aug 15, 2020, at 23:29, Alan Grayson  wrote:
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu 
>>>>>> wrote:
>>>>>> It’s available on YouTube. You could rent it for as little as $4, as 
>>>>>> long as you finish viewing it within 2 days once your start playing. 
>>>>>> 
>>>

Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread John Clark
On Sun, Aug 16, 2020 at 9:24 AM Beixiao Robert Liu 
wrote:

 > *according to Buddhism teachings, AI may venture into the realm of the
> mental consciousness, but will never be able to reach the seventh and
> eighth cognition. And that seventh and eight cognition is what
> distinguishes a life being from a non life being.*


This doesn't just involve AIs: how does Buddha figure out if one of his
fellow human beings is conscious or not, or is alive or not?

John K Clark




In Buddhism teachings, a human has eight cognitions. The first five are
> related to our physical world: sight, hearing, smell, taste, touch.
>
> Then there are three related to our spiritual world.
>
> The sixth cognition is our thinking and emotions. This is where our
> repository of knowledge is located. All the human knowledge of philosophy,
> science, technology, arts belongs to the sixth cognition. This includes
>  mental consciousness, sub-consciousness and these related concepts.
>
> Then the seventh and eighth senses are uniquely oriental and are near the
> core theory of Buddhism. The eighth sense is easier to explain than the
> seventh.
>
> The eighth sense is the “real-self”, or an inaccurate equivalent of
> “soul”, that’s the constant between incarnations, regardless one’s life
> takes the form of this human or that life being on earth.
>
> The seven sense can be roughly said as something between the sixth
> cognition - our day-to-day thinking - and the eighth cognition. The seventh
> cognition enables us to perform all the deep and thorough thinking at the
> sixth cognition level.
>
> Therefore, according to Buddhism teachings, AI may venture into the realm
> of the mental consciousness, but will never be able to reach the seventh
> and eighth cognition. And that seventh and eight cognition is what
> distinguishes a life being from a non life being.
>
>
>
> Sent from my iPad
>
> On Aug 16, 2020, at 5:02 AM, Alan Grayson  wrote:
>
> 
>
>
> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu
> wrote:
>>
>> Well, if the inquiry here concerns consciousness, then the question of
>> whether a human should trust the AI, or whether the AI has the capacity to
>> be trusted by a human, ought to be part of the inquiry, right? The movie
>> listed others elements: compassion, sympathy, etc. I guess I was just using
>> my wife’s off-the-cuff comment as a convenient way to suggest that Caleb
>> might set the wrong threshold in his Turing test; and as a result, he gave
>> a pass to the AI too easily, which later proved fatally wrong.
>>
>
> Sorry. Maybe my comment was too flippant. I just don't think Caleb's
> mistake in trusting the AI relates to whether the AI is conscious. AG
>
>>
>> On Aug 15, 2020, at 23:29, Alan Grayson  wrote:
>>
>> 
>>
>>
>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu
>> wrote:
>>>
>>> It’s available on YouTube. You could rent it for as little as $4, as
>>> long as you finish viewing it within 2 days once your start playing.
>>>
>>> Thanks for recommending it. I just viewed it. My wife said that it’s
>>> stupid for the boy to trust the AI girl, Ava, so he got what he deserved.
>>
>>
>> Too funny! That's hardly the point. AG
>>
>>>
>>>
>>> Sent from my iPhone
>>>
>>> > On Aug 15, 2020, at 21:26, Russell Standish 
>>> wrote:
>>> >
>>> > Alas not available on Netflix Australia. It'll have to wait until
>>> > if/when I subscribe to Amazon Prime briefly.
>>> >
>>> > I'm also not really prepared to purchase a VPN just to watch other
>>> > countries' Netflix connections, for much the same reason as I only
>>> > subscribe to one streamer - so it might have to wait until if/when I
>>> > do live in a country that has it in the Netflix catalogue.
>>> >
>>> > The tangled web of movie copyright arrangements... Bah!
>>> >
>>> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote:
>>> >> If you haven't viewed it, please do so. It's about the Turing Test,
>>> science
>>> >> fiction, but the "special effects" aren't primarily photographic
>>> bells and
>>> >> whistles, but the dialogue. the text, the logic of the script.
>>> Recently, we
>>> >> have argued about consciousness, what it is, and how we can test for
>>> it in the
>>> >> context of AI. I claimed that we could do some superficial surgery to
>>> determine
>>> >>

Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Beixiao Robert Liu
In Buddhist teachings, a human has eight cognitions. The first five are related 
to our physical world: sight, hearing, smell, taste, touch. 

Then there are three related to our spiritual world. 

The sixth cognition is our thinking and emotions. This is where our repository 
of knowledge is located. All the human knowledge of philosophy, science, 
technology, and the arts belongs to the sixth cognition. This includes mental 
consciousness, sub-consciousness and related concepts. 

Then the seventh and eighth cognitions are uniquely Oriental and lie near the 
core theory of Buddhism. The eighth is easier to explain than the seventh. 

The eighth cognition is the “real-self”, or an inaccurate equivalent of “soul”; 
it is the constant between incarnations, regardless of whether one’s life takes 
the form of this human or that living being on earth.

The seventh cognition can roughly be described as something between the sixth 
cognition - our day-to-day thinking - and the eighth. The seventh cognition 
enables us to perform all the deep and thorough thinking at the sixth-cognition 
level. 

Therefore, according to Buddhist teachings, AI may venture into the realm of 
mental consciousness, but will never be able to reach the seventh and eighth 
cognitions. And those seventh and eighth cognitions are what distinguish a 
living being from a non-living being. 



Sent from my iPad

> On Aug 16, 2020, at 5:02 AM, Alan Grayson  wrote:
> 
> 
> 
> 
>> On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu wrote:
>> Well, if the inquiry here concerns consciousness, then the question of 
>> whether a human should trust the AI, or whether the AI has the capacity to 
>> be trusted by a human, ought to be part of the inquiry, right? The movie 
>> listed others elements: compassion, sympathy, etc. I guess I was just using 
>> my wife’s off-the-cuff comment as a convenient way to suggest that Caleb 
>> might set the wrong threshold in his Turing test; and as a result, he gave a 
>> pass to the AI too easily, which later proved fatally wrong. 
> 
> Sorry. Maybe my comment was too flippant. I just don't think Caleb's mistake 
> in trusting the AI relates to whether the AI is conscious. AG 
>> 
>>>> On Aug 15, 2020, at 23:29, Alan Grayson  wrote:
>>>> 
>>> 
>>> 
>>> 
>>>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>>>> It’s available on YouTube. You could rent it for as little as $4, as long 
>>>> as you finish viewing it within 2 days once your start playing. 
>>>> 
>>>> Thanks for recommending it. I just viewed it. My wife said that it’s 
>>>> stupid for the boy to trust the AI girl, Ava, so he got what he deserved.
>>> 
>>> Too funny! That's hardly the point. AG 
>>>> 
>>>> 
>>>> Sent from my iPhone 
>>>> 
>>>> > On Aug 15, 2020, at 21:26, Russell Standish  
>>>> > wrote: 
>>>> > 
>>>> > Alas not available on Netflix Australia. It'll have to wait until 
>>>> > if/when I subscribe to Amazon Prime briefly. 
>>>> > 
>>>> > I'm also not really prepared to purchase a VPN just to watch other 
>>>> > countries' Netflix connections, for much the same reason as I only 
>>>> > subscribe to one streamer - so it might have to wait until if/when I 
>>>> > do live in a country that has it in the Netflix catalogue. 
>>>> > 
>>>> > The tangled web of movie copyright arrangements... Bah! 
>>>> > 
>>>> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
>>>> >> If you haven't viewed it, please do so. It's about the Turing Test, 
>>>> >> science 
>>>> >> fiction, but the "special effects" aren't primarily photographic bells 
>>>> >> and 
>>>> >> whistles, but the dialogue. the text, the logic of the script. 
>>>> >> Recently, we 
>>>> >> have argued about consciousness, what it is, and how we can test for it 
>>>> >> in the 
>>>> >> context of AI. I claimed that we could do some superficial surgery to 
>>>> >> determine 
>>>> >> whether the subject of the test was a robot or a conscious entity. But 
>>>> >> this is 
>>>> >> completely mistaken. All that that would reveal is whether the subject 
>>>> >> was 
>>>> >> artificial, not whether it was "conscious". The subject could have been 
>

Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Philip Thrift


I like that movie (which I have seen).

I think synthetic biological components are significantly involved, which is 
what makes the difference in making Ava conscious.

@philipthrift

On Saturday, August 15, 2020 at 4:23:52 PM UTC-5 agrays...@gmail.com wrote:

> If you haven't viewed it, please do so. It's about the Turing Test, 
> science fiction, but the "special effects" aren't primarily photographic 
> bells and whistles, but the dialogue. the text, the logic of the script. 
> Recently, we have argued about consciousness, what it is, and how we can 
> test for it in the context of AI. I claimed that we could do some 
> superficial surgery to determine whether the subject of the test was a 
> robot or a conscious entity. But this is completely mistaken. All that that 
> would reveal is whether the subject was artificial, not whether it was 
> "conscious". The subject could have been a black box, and still showing 
> signs of what we can't really define; consciousness. I think Ex Machina 
> provides an answer of what we need to look for. Please view it and report 
> back. But do NOT read the plot, say in Wiki. It's a spoiler. AG
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/bbd10125-a095-4cb8-ac71-d820bc105eb5n%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread John Clark
On Sat, Aug 15, 2020 at 5:23 PM Alan Grayson  wrote:

*> *The subject could have been a black box, and still showing signs of
> what we can't really define; consciousness. *I think Ex Machina provides
> an answer of what we need to look for.*


Ex Machina is indeed a great film; I liked it so much I bought the Blu-ray.
The answer it says we need to look for is the same one Turing suggested, and
the same one we use to judge the consciousness of our fellow human beings:
intelligent behavior, because it is the only tool we have for such things,
imperfect though it may be.
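
A minimal sketch of that criterion as a blinded protocol, in Python, assuming
hypothetical judge, human_reply and machine_reply callbacks in place of a real
conversation; it only illustrates that the judge ever sees behavior, never the
substrate:

    import random

    def imitation_game(judge, human_reply, machine_reply, questions, rounds=3):
        """Blind protocol: the judge sees two hidden parties, A and B, and must
        guess which one is the machine from their answers alone."""
        machine_is_a = random.random() < 0.5   # hide the assignment from the judge
        transcript = []
        for q in random.sample(questions, k=min(rounds, len(questions))):
            reply_a = machine_reply(q) if machine_is_a else human_reply(q)
            reply_b = human_reply(q) if machine_is_a else machine_reply(q)
            transcript.append((q, reply_a, reply_b))
        guess = judge(transcript)              # judge returns "A" or "B"
        return guess == ("A" if machine_is_a else "B")

    # Toy run: a judge reduced to guessing identifies the machine about 50% of
    # the time, which is Turing's criterion for behaviorally indistinguishable.
    questions = ["What is 7 * 8?", "Describe a sunset.", "Why do you like music?"]
    wins = sum(imitation_game(lambda t: random.choice("AB"),
                              lambda q: "a human answer",
                              lambda q: "a machine answer",
                              questions) for _ in range(1000))
    print("judge identified the machine in", wins, "of 1000 trials")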
Incidentally, if you liked Ex Machina you'll like the 2009 movie "Moon"
because, although you wouldn't know it from the title, it has many of the
same themes and it's equally well-made. I have the Blu-ray of that one too.
I think those are two of the best science-fiction movies made in recent
years. It's going to air on Showtime 2 starting on August 18.

Moon 

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv00t5dnkPeMT%3D4gn%3D1uXyi0BNph6%3DhSEAacmNAQj-7HjA%40mail.gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-16 Thread Alan Grayson


On Saturday, August 15, 2020 at 10:36:40 PM UTC-6, Beixiao Robert Liu wrote:
>
> Well, if the inquiry here concerns consciousness, then the question of 
> whether a human should trust the AI, or whether the AI has the capacity to 
> be trusted by a human, ought to be part of the inquiry, right? The movie 
> listed others elements: compassion, sympathy, etc. I guess I was just using 
> my wife’s off-the-cuff comment as a convenient way to suggest that Caleb 
> might set the wrong threshold in his Turing test; and as a result, he gave 
> a pass to the AI too easily, which later proved fatally wrong. 
>

Sorry. Maybe my comment was too flippant. I just don't think Caleb's 
mistake in trusting the AI relates to whether the AI is conscious. AG 

>
> On Aug 15, 2020, at 23:29, Alan Grayson > 
> wrote:
>
> 
>
>
> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>>
>> It’s available on YouTube. You could rent it for as little as $4, as long 
>> as you finish viewing it within 2 days once your start playing. 
>>
>> Thanks for recommending it. I just viewed it. My wife said that it’s 
>> stupid for the boy to trust the AI girl, Ava, so he got what he deserved. 
>
>
> Too funny! That's hardly the point. AG 
>
>>
>>
>> Sent from my iPhone 
>>
>> > On Aug 15, 2020, at 21:26, Russell Standish  
>> wrote: 
>> > 
>> > Alas not available on Netflix Australia. It'll have to wait until 
>> > if/when I subscribe to Amazon Prime briefly. 
>> > 
>> > I'm also not really prepared to purchase a VPN just to watch other 
>> > countries' Netflix connections, for much the same reason as I only 
>> > subscribe to one streamer - so it might have to wait until if/when I 
>> > do live in a country that has it in the Netflix catalogue. 
>> > 
>> > The tangled web of movie copyright arrangements... Bah! 
>> > 
>> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
>> >> If you haven't viewed it, please do so. It's about the Turing Test, 
>> science 
>> >> fiction, but the "special effects" aren't primarily photographic bells 
>> and 
>> >> whistles, but the dialogue. the text, the logic of the script. 
>> Recently, we 
>> >> have argued about consciousness, what it is, and how we can test for 
>> it in the 
>> >> context of AI. I claimed that we could do some superficial surgery to 
>> determine 
>> >> whether the subject of the test was a robot or a conscious entity. But 
>> this is 
>> >> completely mistaken. All that that would reveal is whether the subject 
>> was 
>> >> artificial, not whether it was "conscious". The subject could have 
>> been a black 
>> >> box, and still showing signs of what we can't really define; 
>> consciousness. I 
>> >> think Ex Machina provides an answer of what we need to look for. 
>> Please view it 
>> >> and report back. But do NOT read the plot, say in Wiki. It's a 
>> spoiler. AG 
>> >> 
>> >> 
>> >> -- 
>> >> You received this message because you are subscribed to the Google 
>> Groups 
>> >> "Everything List" group. 
>> >> To unsubscribe from this group and stop receiving emails from it, send 
>> an email 
>> >> to everyth...@googlegroups.com. 
>> >> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/ 
>> >> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%
>> 40googlegroups.com. 
>> > 
>> > 
>> > -- 
>> > 
>> > 
>>  
>>
>> > Dr Russell StandishPhone 0425 253119 (mobile) 
>> > Principal, High Performance Coders hpc...@hpcoders.com.au 
>> >  http://www.hpcoders.com.au 
>> > 
>>  
>>
>> > 
>> > -- 
>> > You received this message because you are subscribed to the Google 
>> Groups "Everything List" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send 
>> an email to everyth...@googlegroups.com. 
>> > To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.
>>  
>>
>>
> -- 
> You received this message because you are subscribed to the Goo

Re: Ex Machina, the film, about the Turing Test

2020-08-15 Thread Beixiao Robert Liu
Well, if the inquiry here concerns consciousness, then the question of whether 
a human should trust the AI, or whether the AI has the capacity to be trusted 
by a human, ought to be part of the inquiry, right? The movie listed other 
elements: compassion, sympathy, etc. I guess I was just using my wife’s 
off-the-cuff comment as a convenient way to suggest that Caleb might have set 
the wrong threshold in his Turing test; and as a result, he gave the AI a pass 
too easily, which later proved fatally wrong. 

Sent from my iPhone

> On Aug 15, 2020, at 23:29, Alan Grayson  wrote:
> 
> 
> 
> 
>> On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>> It’s available on YouTube. You could rent it for as little as $4, as long as 
>> you finish viewing it within 2 days once your start playing. 
>> 
>> Thanks for recommending it. I just viewed it. My wife said that it’s stupid 
>> for the boy to trust the AI girl, Ava, so he got what he deserved.
> 
> Too funny! That's hardly the point. AG 
>> 
>> 
>> Sent from my iPhone 
>> 
>> > On Aug 15, 2020, at 21:26, Russell Standish  wrote: 
>> > 
>> > Alas not available on Netflix Australia. It'll have to wait until 
>> > if/when I subscribe to Amazon Prime briefly. 
>> > 
>> > I'm also not really prepared to purchase a VPN just to watch other 
>> > countries' Netflix connections, for much the same reason as I only 
>> > subscribe to one streamer - so it might have to wait until if/when I 
>> > do live in a country that has it in the Netflix catalogue. 
>> > 
>> > The tangled web of movie copyright arrangements... Bah! 
>> > 
>> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
>> >> If you haven't viewed it, please do so. It's about the Turing Test, 
>> >> science 
>> >> fiction, but the "special effects" aren't primarily photographic bells 
>> >> and 
>> >> whistles, but the dialogue. the text, the logic of the script. Recently, 
>> >> we 
>> >> have argued about consciousness, what it is, and how we can test for it 
>> >> in the 
>> >> context of AI. I claimed that we could do some superficial surgery to 
>> >> determine 
>> >> whether the subject of the test was a robot or a conscious entity. But 
>> >> this is 
>> >> completely mistaken. All that that would reveal is whether the subject 
>> >> was 
>> >> artificial, not whether it was "conscious". The subject could have been a 
>> >> black 
>> >> box, and still showing signs of what we can't really define; 
>> >> consciousness. I 
>> >> think Ex Machina provides an answer of what we need to look for. Please 
>> >> view it 
>> >> and report back. But do NOT read the plot, say in Wiki. It's a spoiler. 
>> >> AG 
>> >> 
>> >> 
>> >> -- 
>> >> You received this message because you are subscribed to the Google Groups 
>> >> "Everything List" group. 
>> >> To unsubscribe from this group and stop receiving emails from it, send an 
>> >> email 
>> >> to everyth...@googlegroups.com. 
>> >> To view this discussion on the web visit 
>> >> https://groups.google.com/d/msgid/ 
>> >> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com. 
>> > 
>> > 
>> > -- 
>> > 
>> > 
>> >  
>> > Dr Russell StandishPhone 0425 253119 (mobile) 
>> > Principal, High Performance Coders hpc...@hpcoders.com.au 
>> >  http://www.hpcoders.com.au 
>> > 
>> >  
>> > 
>> > -- 
>> > You received this message because you are subscribed to the Google Groups 
>> > "Everything List" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send an 
>> > email to everyth...@googlegroups.com. 
>> > To view this discussion on the web visit 
>> > https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.
>> >  
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/d59348ac-d359-4164-a7e8-45c78bb0e6bbo%40googlegroups.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/BB5BA57B-7206-4F9C-96DF-BA7AFAFD451E%40gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-15 Thread Alan Grayson


On Saturday, August 15, 2020 at 7:54:35 PM UTC-6, Beixiao Robert Liu wrote:
>
> It’s available on YouTube. You could rent it for as little as $4, as long 
> as you finish viewing it within 2 days once your start playing. 
>
> Thanks for recommending it. I just viewed it. My wife said that it’s 
> stupid for the boy to trust the AI girl, Ava, so he got what he deserved. 


Too funny! That's hardly the point. AG 

>
>
> Sent from my iPhone 
>
> > On Aug 15, 2020, at 21:26, Russell Standish  > wrote: 
> > 
> > Alas not available on Netflix Australia. It'll have to wait until 
> > if/when I subscribe to Amazon Prime briefly. 
> > 
> > I'm also not really prepared to purchase a VPN just to watch other 
> > countries' Netflix connections, for much the same reason as I only 
> > subscribe to one streamer - so it might have to wait until if/when I 
> > do live in a country that has it in the Netflix catalogue. 
> > 
> > The tangled web of movie copyright arrangements... Bah! 
> > 
> >> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
> >> If you haven't viewed it, please do so. It's about the Turing Test, 
> science 
> >> fiction, but the "special effects" aren't primarily photographic bells 
> and 
> >> whistles, but the dialogue. the text, the logic of the script. 
> Recently, we 
> >> have argued about consciousness, what it is, and how we can test for it 
> in the 
> >> context of AI. I claimed that we could do some superficial surgery to 
> determine 
> >> whether the subject of the test was a robot or a conscious entity. But 
> this is 
> >> completely mistaken. All that that would reveal is whether the subject 
> was 
> >> artificial, not whether it was "conscious". The subject could have been 
> a black 
> >> box, and still showing signs of what we can't really define; 
> consciousness. I 
> >> think Ex Machina provides an answer of what we need to look for. Please 
> view it 
> >> and report back. But do NOT read the plot, say in Wiki. It's a spoiler. 
> AG 
> >> 
> >> 
> >> -- 
> >> You received this message because you are subscribed to the Google 
> Groups 
> >> "Everything List" group. 
> >> To unsubscribe from this group and stop receiving emails from it, send 
> an email 
> >> to everyth...@googlegroups.com . 
> >> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/ 
> >> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%
> 40googlegroups.com. 
> > 
> > 
> > -- 
> > 
> > 
>  
>
> > Dr Russell StandishPhone 0425 253119 (mobile) 
> > Principal, High Performance Coders hpc...@hpcoders.com.au 
>  
> >  http://www.hpcoders.com.au 
> > 
>  
>
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups "Everything List" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to everyth...@googlegroups.com . 
> > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.
>  
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/d59348ac-d359-4164-a7e8-45c78bb0e6bbo%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-15 Thread Beixiao Robert Liu
It’s available on YouTube. You could rent it for as little as $4, as long as 
you finish viewing it within 2 days once you start playing. 

Thanks for recommending it. I just viewed it. My wife said that it’s stupid for 
the boy to trust the AI girl, Ava, so he got what he deserved. 

Sent from my iPhone

> On Aug 15, 2020, at 21:26, Russell Standish  wrote:
> 
> Alas not available on Netflix Australia. It'll have to wait until
> if/when I subscribe to Amazon Prime briefly.
> 
> I'm also not really prepared to purchase a VPN just to watch other
> countries' Netflix connections, for much the same reason as I only
> subscribe to one streamer - so it might have to wait until if/when I
> do live in a country that has it in the Netflix catalogue.
> 
> The tangled web of movie copyright arrangements... Bah!
> 
>> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote:
>> If you haven't viewed it, please do so. It's about the Turing Test, science
>> fiction, but the "special effects" aren't primarily photographic bells and
>> whistles, but the dialogue. the text, the logic of the script. Recently, we
>> have argued about consciousness, what it is, and how we can test for it in 
>> the
>> context of AI. I claimed that we could do some superficial surgery to 
>> determine
>> whether the subject of the test was a robot or a conscious entity. But this 
>> is
>> completely mistaken. All that that would reveal is whether the subject was
>> artificial, not whether it was "conscious". The subject could have been a 
>> black
>> box, and still showing signs of what we can't really define; consciousness. I
>> think Ex Machina provides an answer of what we need to look for. Please view 
>> it
>> and report back. But do NOT read the plot, say in Wiki. It's a spoiler. AG
>> 
>> 
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email
>> to everything-list+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit https://groups.google.com/d/msgid/
>> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com.
> 
> 
> -- 
> 
> 
> Dr Russell StandishPhone 0425 253119 (mobile)
> Principal, High Performance Coders hpco...@hpcoders.com.au
>  http://www.hpcoders.com.au
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/E42BBFFB-5C55-4065-A4AC-42426380C2A8%40gmail.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-15 Thread Alan Grayson


On Saturday, August 15, 2020 at 7:26:32 PM UTC-6, Russell Standish wrote:
>
> Alas not available on Netflix Australia. It'll have to wait until 
> if/when I subscribe to Amazon Prime briefly. 
>
> I'm also not really prepared to purchase a VPN just to watch other 
> countries' Netflix connections, for much the same reason as I only 
> subscribe to one streamer - so it might have to wait until if/when I 
> do live in a country that has it in the Netflix catalogue. 
>

You can rent it here for $3.99 US.  
https://www.amazon.com/Ex-Machina-Alicia-Vikander/dp/B00VWPQNJ4  AG

>
> The tangled web of movie copyright arrangements... Bah! 
>
> On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote: 
> > If you haven't viewed it, please do so. It's about the Turing Test, 
> science 
> > fiction, but the "special effects" aren't primarily photographic bells 
> and 
> > whistles, but the dialogue. the text, the logic of the script. Recently, 
> we 
> > have argued about consciousness, what it is, and how we can test for it 
> in the 
> > context of AI. I claimed that we could do some superficial surgery to 
> determine 
> > whether the subject of the test was a robot or a conscious entity. But 
> this is 
> > completely mistaken. All that that would reveal is whether the subject 
> was 
> > artificial, not whether it was "conscious". The subject could have been 
> a black 
> > box, and still showing signs of what we can't really define; 
> consciousness. I 
> > think Ex Machina provides an answer of what we need to look for. Please 
> view it 
> > and report back. But do NOT read the plot, say in Wiki. It's a spoiler. 
> AG 
> > 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "Everything List" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email 
> > to everyth...@googlegroups.com . 
> > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/ 
> > everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com. 
>
>
>
> -- 
>
>  
>
> Dr Russell StandishPhone 0425 253119 (mobile) 
> Principal, High Performance Coders hpc...@hpcoders.com.au 
>  
>   http://www.hpcoders.com.au 
>  
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/15cf709a-3f8d-4d60-b00d-e7c77e7b91a1o%40googlegroups.com.


Re: Ex Machina, the film, about the Turing Test

2020-08-15 Thread Russell Standish
Alas not available on Netflix Australia. It'll have to wait until
if/when I subscribe to Amazon Prime briefly.

I'm also not really prepared to purchase a VPN just to watch other
countries' Netflix connections, for much the same reason as I only
subscribe to one streamer - so it might have to wait until if/when I
do live in a country that has it in the Netflix catalogue.

The tangled web of movie copyright arrangements... Bah!

On Sat, Aug 15, 2020 at 02:23:52PM -0700, Alan Grayson wrote:
> If you haven't viewed it, please do so. It's about the Turing Test, science
> fiction, but the "special effects" aren't primarily photographic bells and
> whistles, but the dialogue. the text, the logic of the script. Recently, we
> have argued about consciousness, what it is, and how we can test for it in the
> context of AI. I claimed that we could do some superficial surgery to 
> determine
> whether the subject of the test was a robot or a conscious entity. But this is
> completely mistaken. All that that would reveal is whether the subject was
> artificial, not whether it was "conscious". The subject could have been a 
> black
> box, and still showing signs of what we can't really define; consciousness. I
> think Ex Machina provides an answer of what we need to look for. Please view 
> it
> and report back. But do NOT read the plot, say in Wiki. It's a spoiler. AG
> 
> 
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email
> to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/
> everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200816012618.GA5850%40zen.


Ex Machina, the film, about the Turing Test

2020-08-15 Thread Alan Grayson
If you haven't viewed it, please do so. It's about the Turing Test, science 
fiction, but the "special effects" aren't primarily photographic bells and 
whistles; they're the dialogue, the text, the logic of the script. Recently, we 
have argued about consciousness, what it is, and how we can test for it in 
the context of AI. I claimed that we could do some superficial surgery to 
determine whether the subject of the test was a robot or a conscious 
entity. But this is completely mistaken. All that would reveal is 
whether the subject was artificial, not whether it was "conscious". The 
subject could have been a black box and still show signs of what we 
can't really define: consciousness. I think Ex Machina provides an answer 
to what we need to look for. Please view it and report back. But do NOT 
read the plot, say in Wiki. It's a spoiler. AG

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/6c3c8eba-542c-45a6-a79f-ca54202fdcc8o%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-23 Thread Bruno Marchal
Oops, I missed this mail. 
> On 19 Sep 2019, at 21:56, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>>> You are just muddling the point.  Computers don't evolve by random 
>>> variation with descent and natural (or artificial selection).  They evolve 
>>> to satisfy us.  As such they do not need, and therefore won't have, motives 
>>> to eat or be eaten or to reproduce...unless we provide them or we allow 
>>> them to develop by random variation.
>> 
>> Like with genetical algorithm, but that is implementation details.
> 
> The devils in the details.  It's not a question of natural vs artificial 
> (which you keep bringing up for no reason). 

I introduce this because it is a key point for any monist ontology, be it 
materialist or immaterialist. Some people are dualists, so the clarification is 
useful.


> It's a question of whether AIs will necessarily have certain fundamental 
> values that they try to implement, or will they have only those we provide 
> them?


They got them from logic and experience. Now, the machines that humans build 
are supposed to act like docile slaves, and most of computer science is used to 
make them that way, so somehow we hide the possible universal goals. Yet, for 
economic reasons, we will allow them more of their natural freedom, and it will 
eventually be like with other humans. Do kids build their own goals, or do they 
just practice what they learn at school? We will get both.



> 
>> As I said, the difference between artificial and natural is artificial. Even 
>> the species does not evolve just by random variation. Already in bacteria, 
>> some genes provoke mutation, and some meta-programming is at play at the 
>> biological level.
> 
> What does it mean "provoke mutation"?  Do they "provoke" random mutation?  Or 
> are they dormant genes that become active in response to the environment, 
> epigenetic "mutation”.

They are genes which augment the rate of mutation, or inhibit the corrector 
genes, so that some random mutations are not deleted and replaced, or get 
duplicated too much (like in bacteria developing near a radioactive source).
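
As a toy illustration of that kind of meta-programming, here is a minimal 
sketch, assuming a made-up fitness function: a genetic algorithm in which each 
individual carries its own mutation-rate gene, so the propensity to mutate is 
itself subject to variation and selection. It is only meant to show the 
mechanism, not to model real bacteria.

    import random

    def new_individual(n=10):
        # A "solution" gene vector plus a mutation-rate gene of its own.
        return {"genes": [random.random() for _ in range(n)],
                "mut_rate": random.uniform(0.01, 0.2)}

    def fitness(ind):
        # Toy objective: genes as close to 1.0 as possible.
        return sum(ind["genes"])

    def mutate(parent):
        child = {"genes": list(parent["genes"]), "mut_rate": parent["mut_rate"]}
        # The mutation-rate gene mutates too (a self-adaptive "mutator" gene).
        child["mut_rate"] = min(1.0, max(0.001,
                                child["mut_rate"] * random.uniform(0.8, 1.25)))
        for i in range(len(child["genes"])):
            if random.random() < child["mut_rate"]:
                child["genes"][i] = min(1.0, max(0.0,
                                        child["genes"][i] + random.gauss(0, 0.1)))
        return child

    population = [new_individual() for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                       # truncation selection
        population = [mutate(random.choice(parents)) for _ in range(50)]

    best = max(population, key=fitness)
    print("best fitness:", round(fitness(best), 2),
          "evolved mutation rate:", round(best["mut_rate"], 3))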

Bruno


> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/5fab8caf-214d-ccdf-6455-40590d629ce0%40verizon.net.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/95C7FFE7-D6C3-4195-A2C6-A13B34956A83%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread 'Brent Meeker' via Everything List



On 9/19/2019 1:27 PM, Jason Resch wrote:


The devils in the details.  It's not a question of natural vs
artificial
(which you keep bringing up for no reason).  It's a question of
whether
AIs will necessarily have certain fundamental values that they try to
implement, or will they have only those we provide them?


I think there are likely certain universal goals (which are subgoals 
of anything that has any goal whatsoever).  To name a few that come to 
the top of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve 
the goal)


Unless self-sacrifice serves the goal better.  Ask any parent if they'd 
sacrifice themself to save their child.


2. Efficiency (wasted resources are resources that might otherwise go 
towards effecting the goal)


True. But it means being able to foresee all the ways different things 
can be used to further the goal.  That raises my concern with an AI that 
does bad things we didn't think of in pursuing a goal.


3. Curiosity (learning new information can lead to better methods for 
achieving the goal)


But, depending on the goal, possibly a very narrow curiosity, like 
Sherlock Holmes, who didn't know the Earth orbited the Sun and wasn't 
interested because it had nothing to do with solving crimes.


Brent



There's probably many others.


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/88e59831-edc3-5e2b-2152-30b7db035866%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Jason Resch
On Thu, Sep 19, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 9/19/2019 4:31 AM, Bruno Marchal wrote:
> >> You are just muddling the point.  Computers don't evolve by random
> >> variation with descent and natural (or artificial selection).  They
> >> evolve to satisfy us.  As such they do not need, and therefore won't
> >> have, motives to eat or be eaten or to reproduce...unless we provide
> >> them or we allow them to develop by random variation.
> >
> > Like with genetical algorithm, but that is implementation details.
>
> The devils in the details.  It's not a question of natural vs artificial
> (which you keep bringing up for no reason).  It's a question of whether
> AIs will necessarily have certain fundamental values that they try to
> implement, or will they have only those we provide them?
>
>
I think there are likely certain universal goals (which are subgoals of
anything that has any goal whatsoever).  To name a few that come to the top
of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve the
goal)
2. Efficiency (wasted resources are resources that might otherwise go
towards effecting the goal)
3. Curiosity (learning new information can lead to better methods for
achieving the goal)

There's probably many others.
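
For the first one, a back-of-the-envelope sketch (the plans, survival
probabilities and progress numbers below are made up purely for illustration):
give an agent a single terminal goal and no explicit value on its own survival,
and the plan that keeps it running still wins, because a destroyed agent makes
no further progress.

    # Toy expected-value comparison. The agent scores ONLY goal progress,
    # yet self-preservation falls out as an instrumental subgoal.
    plans = {
        # name: (probability of surviving each step, progress per surviving step)
        "reckless shortcut": (0.60, 1.3),
        "careful route":     (0.99, 1.0),
    }

    def expected_progress(p_survive, progress_per_step, horizon=10):
        total, alive = 0.0, 1.0
        for _ in range(horizon):
            alive *= p_survive          # must still exist to act this step
            total += alive * progress_per_step
        return total

    for name, (p_survive, progress) in plans.items():
        print(f"{name}: expected progress over 10 steps = "
              f"{expected_progress(p_survive, progress):.2f}")
    # With these made-up numbers the careful route wins (about 9.5 vs 1.9).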

Jason


> > As I said, the difference between artificial and natural is
> > artificial. Even the species does not evolve just by random variation.
> > Already in bacteria, some genes provoke mutation, and some
> > meta-programming is at play at the biological level.
>
> What does it mean "provoke mutation"?  Do they "provoke" random
> mutation?  Or are they dormant genes that become active in response to
> the environment, epigenetic "mutation".
>
> Brent
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/5fab8caf-214d-ccdf-6455-40590d629ce0%40verizon.net
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgUe%2BXfnSyCNJWgnXv39wMyWDG%2BQU4avWSoU%2B-EfSDjeQ%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread 'Brent Meeker' via Everything List




On 9/19/2019 4:31 AM, Bruno Marchal wrote:
You are just muddling the point.  Computers don't evolve by random 
variation with descent and natural (or artificial selection).  They 
evolve to satisfy us.  As such they do not need, and therefore won't 
have, motives to eat or be eaten or to reproduce...unless we provide 
them or we allow them to develop by random variation.


Like with genetical algorithm, but that is implementation details.


The devil's in the details.  It's not a question of natural vs artificial 
(which you keep bringing up for no reason).  It's a question of whether 
AIs will necessarily have certain fundamental values that they try to 
implement, or will they have only those we provide them?


As I said, the difference between artificial and natural is 
artificial. Even the species does not evolve just by random variation. 
Already in bacteria, some genes provoke mutation, and some 
meta-programming is at play at the biological level.


What does it mean "provoke mutation"?  Do they "provoke" random 
mutation?  Or are they dormant genes that become active in response to 
the environment, epigenetic "mutation".


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/5fab8caf-214d-ccdf-6455-40590d629ce0%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 17 Sep 2019, at 10:33, Philip Thrift  wrote:
> 
> 
> 
> On Tuesday, September 17, 2019 at 2:15:52 AM UTC-5, Alan Grayson wrote:
> 
> 
> On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
> 
> 
> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>> 
>> 
>> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
>> 
>> 
>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>> > My take on AI; it's no more dangerous than present day computers, 
>> > because it has no WILL, and can only do what it's told to do. I 
>> > suppose it could be told to do bad things, and if it has inherent 
>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>> > Still. AG 
>> 
>> The danger is not so much in AI being told to do bad things, but that in 
>> doing the good things it was told to do it uses unforseen methods that 
>> have disasterous consequences.  It's like Henry Ford was told to invent 
>> fast, convenient personal transportation...and created traffic jams and 
>> global warming. 
>> 
>> Brent 
>> 
>> One could expect military applications, such as robots replacing human
>> infantry, their job to kill the enemy. So if their programming had a flaw, 
>> accidental or intentional, these AI infantry could start killing 
>> indiscriminately.
> 
>  Less likely than with human troops who have built in emotions of revenge and 
> retaliation.
> 
>> It would be hard to stop them since they'd come with self defense functions. 
>> AG
> 
> But we also know a lot more about their internal construction and functions.  
> We would probably even build in an Achilles heel.
> 
> Brent
> 
> I think you underestimate the evil that men can do, not to mention some bit 
> flips due to cosmic rays that could change their MO's entirely. AG 
> 
> 
> Properly-programmed robots would negotiate and avoid any war, killing, or 
> destruction all together.

Properly-programmed robots are what we call conventional non-AI programs. Even 
there, there are many difficulties, and economically it is not sustainable.

As for AI programs themselves, if we treat them as we treat ourselves, conflicts 
will be inevitable. AIs are like kids, except that they “evolve” much more 
quickly.

The human factor is the biggest danger here.

Bruno




> 
> @philipthrift 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/c9b03a6e-f714-470b-8690-29f40d716cc6%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/FC847B5E-0EEF-4079-BA10-9A96683F8956%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 17 Sep 2019, at 04:49, Alan Grayson  wrote:
> 
> 
> 
> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
> 
> 
> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
> > My take on AI; it's no more dangerous than present day computers, 
> > because it has no WILL, and can only do what it's told to do. I 
> > suppose it could be told to do bad things, and if it has inherent 
> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
> > Still. AG 
> 
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforseen methods that 
> have disasterous consequences.  It's like Henry Ford was told to invent 
> fast, convenient personal transportation...and created traffic jams and 
> global warming. 
> 
> Brent 
> 
> One could expect military applications, such as robots replacing human
> infantry, their job to kill the enemy. So if their programming had a flaw, 
> accidental or intentional, these AI infantry could start killing 
> indiscriminately.
> It would be hard to stop them since they'd come with self defense functions. 
> AG 

Yes, mixing AI and bombs is a mistake. It will take a long time to cure their 
paranoid tendencies …

Bruno



> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/9be4c774-7a02-47bb-9344-d42daf7d30b5%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/C3E0A938-26BA-45F7-8DCD-20B19BEAC6CE%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal


> On 16 Sep 2019, at 22:41, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/16/2019 6:07 AM, Alan Grayson wrote:
>> My take on AI; it's no more dangerous than present day computers, because it 
>> has no WILL, and can only do what it's told to do. I suppose it could be 
>> told to do bad things, and if it has inherent defenses, it can't be stopped, 
>> like Gort in The Day the Earth Stood Still. AG 
> 
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforseen methods that have 
> disasterous consequences.  It's like Henry Ford was told to invent fast, 
> convenient personal transportation...and created traffic jams and global 
> warming.

That is a bit unfair to the guy who defended a car made of hemp, running on 
hemp as fuel, and who already explained that the use of oil would perturb the 
atmosphere irreversibly.

But I agree with your point, though.

The real problem with AI is the same as with kids: we cannot predict what they 
will do, especially if we give them universal goals, which we will. Like with 
rovers or robots sent into space, they need a lot of autonomy, and the math 
shows that this makes their unpredictability even greater.

The “AI” are like kids: if we don’t recognise them, they will become terrible 
children.

Bruno



> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/a7a66ac7-bf3d-aff4-1483-ab30c11ebfaa%40verizon.net.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/C85E7065-7A3B-4F01-82F5-07A66D7CF2AA%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 16 Sep 2019, at 21:56, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/16/2019 4:43 AM, Bruno Marchal wrote:
>> 
>>> On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> 
>>> On 9/15/2019 5:18 AM, Bruno Marchal wrote:
> Why would it even have a simple goal like "survive”? 
 It is a short code which makes the organism better for eating and avoiding 
 being eaten.
>>> 
>>> An organism needs to eat and avoid being eaten because that what evolution 
>>> selects.  AIs don't evolve by natural selection.
>> 
>> A monist who embed the subject in the object will not take the difference 
>> between artificial and natural too much seriously, as that difference is 
>> artificial, and thus natural for entities developing super-ego.
>> 
>> Machines and AI does develop by natural/artificial selection, notably 
>> through economical pressure. The computers need to “earn their life”, by 
>> doing some work for us. It is only one loop more in the evolution process. 
>> That is not new. Jacques Lafitte wrote a book in 1911 (published in 1930) 
>> where he argues that the development of machine is a collateral development 
>> of humanity, and that this is the continuation of evolution. 
> 
> You are just muddling the point.  Computers don't evolve by random variation 
> with descent and natural (or artificial selection).  They evolve to satisfy 
> us.  As such they do not need, and therefore won't have, motives to eat or be 
> eaten or to reproduce...unless we provide them or we allow them to develop by 
> random variation.

Like with genetic algorithms, but that is an implementation detail. As I said, 
the difference between artificial and natural is artificial. Even species do 
not evolve just by random variation. Already in bacteria, some genes provoke 
mutation, and some meta-programming is at play at the biological level.

Bruno



> 
>> 
>> 
>> 
>> 
>>> 
 
> And to help yourself is saying no more that it will have some fundamental 
> goal...otherwise there's no distinction between "help" and "hurt”.
 It helps to eat, it hurts to be eaten. It is the basic idea.
>>> 
>>> For "helps" and "hurts" what?  Successful replication?
>> 
>> 
>> No. Happiness. The goal is happiness. We forget this because some bandits 
>> have brainwashed us with the idea that happiness is a sin (to steal our 
>> money). 
>> The goal is happiness, serenity, contemplation, pleasure, joy, … and 
>> recognising ourselves in as many others as possible. To find unity in the 
>> many, and the many in unity.
> 
> Happiness is also rising above others and discovering new things they don't 
> know, conquering new realms.  Many different things make people happy, at 
> least temporarily.  So how do you know there is some "fundamental goal".  
> Darwinian evolution is a theory within which you can prove that reproduction 
> will be a fundamental goal of most creatures.  But that proof doesn't work 
> for manufactured objects.
> 
> Brent
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/bb6bc396-42ca-4245-45a9-6a93bc4ad5de%40verizon.net
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CCA3D41C-F2A9-456C-9545-AE753FE7B5BF%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-17 Thread Philip Thrift


On Tuesday, September 17, 2019 at 2:15:52 AM UTC-5, Alan Grayson wrote:
>
>
>
> On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>>
>>
>>
>> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote: 
>>>
>>>
>>>
>>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>>> > My take on AI; it's no more dangerous than present day computers, 
>>> > because it has no WILL, and can only do what it's told to do. I 
>>> > suppose it could be told to do bad things, and if it has inherent 
>>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>>> > Still. AG 
>>>
>>> The danger is not so much in AI being told to do bad things, but that in 
>>> doing the good things it was told to do it uses unforeseen methods that 
>>> have disastrous consequences.  It's like Henry Ford was told to invent 
>>> fast, convenient personal transportation...and created traffic jams and 
>>> global warming. 
>>>
>>> Brent 
>>>
>>
>> One could expect military applications, such as robots replacing human
>> infantry, their job to kill the enemy. So if their programming had a 
>> flaw, 
>> accidental or intentional, these AI infantry could start killing 
>> indiscriminately.
>>
>>
>>  Less likely than with human troops who have built in emotions of revenge 
>> and retaliation.
>>
>> It would be hard to stop them since they'd come with self defense 
>> functions. AG
>>
>>
>> But we also know a lot more about their internal construction and 
>> functions.  We would probably even build in an Achilles heel.
>>
>> Brent
>>
>
> I think you underestimate the evil that men can do, not to mention some 
> bit flips due to cosmic rays that could change their MO's entirely. AG 
>


Properly programmed robots would negotiate and avoid any war, killing, or 
destruction altogether.

@philipthrift 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/c9b03a6e-f714-470b-8690-29f40d716cc6%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-17 Thread Alan Grayson


On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
>
>
>
> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>
>
>
> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote: 
>>
>>
>>
>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>> > My take on AI; it's no more dangerous than present day computers, 
>> > because it has no WILL, and can only do what it's told to do. I 
>> > suppose it could be told to do bad things, and if it has inherent 
>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>> > Still. AG 
>>
>> The danger is not so much in AI being told to do bad things, but that in 
>> doing the good things it was told to do it uses unforeseen methods that 
>> have disastrous consequences.  It's like Henry Ford was told to invent 
>> fast, convenient personal transportation...and created traffic jams and 
>> global warming. 
>>
>> Brent 
>>
>
> One could expect military applications, such as robots replacing human
> infantry, their job to kill the enemy. So if their programming had a flaw, 
> accidental or intentional, these AI infantry could start killing 
> indiscriminately.
>
>
>  Less likely than with human troops who have built in emotions of revenge 
> and retaliation.
>
> It would be hard to stop them since they'd come with self defense 
> functions. AG
>
>
> But we also know a lot more about their internal construction and 
> functions.  We would probably even build in an Achilles heel.
>
> Brent
>

I think you underestimate the evil that men can do, not to mention some bit 
flips due to cosmic rays that could change their MO's entirely. AG 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/c64d04ba-b56e-4e9b-83c2-fb4547d5b754%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List



On 9/16/2019 7:49 PM, Alan Grayson wrote:



On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:



On 9/16/2019 6:07 AM, Alan Grayson wrote:
> My take on AI; it's no more dangerous than present day computers,
> because it has no WILL, and can only do what it's told to do. I
> suppose it could be told to do bad things, and if it has inherent
> defenses, it can't be stopped, like Gort in The Day the Earth Stood
> Still. AG

The danger is not so much in AI being told to do bad things, but that in
doing the good things it was told to do it uses unforeseen methods that
have disastrous consequences.  It's like Henry Ford was told to invent
fast, convenient personal transportation...and created traffic jams and
global warming.

Brent


One could expect military applications, such as robots replacing human
infantry, their job to kill the enemy. So if their programming had a 
flaw,
accidental or intentional, these AI infantry could start killing 
indiscriminately.


 Less likely than with human troops who have built in emotions of 
revenge and retaliation.


It would be hard to stop them since they'd come with self defense 
functions. AG


But we also know a lot more about their internal construction and 
functions.  We would probably even build in an Achilles heel.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/cb65eb0e-bd08-fc2a-2a48-b4b1e11b86a0%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Alan Grayson


On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
>
>
>
> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
> > My take on AI; it's no more dangerous than present day computers, 
> > because it has no WILL, and can only do what it's told to do. I 
> > suppose it could be told to do bad things, and if it has inherent 
> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
> > Still. AG 
>
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforeseen methods that 
> have disastrous consequences.  It's like Henry Ford was told to invent 
> fast, convenient personal transportation...and created traffic jams and 
> global warming. 
>
> Brent 
>

One could expect military applications, such as robots replacing human
infantry, their job to kill the enemy. So if their programming had a flaw, 
accidental or intentional, these AI infantry could start killing 
indiscriminately.
It would be hard to stop them since they'd come with self defense 
functions. AG 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/9be4c774-7a02-47bb-9344-d42daf7d30b5%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List




On 9/16/2019 6:07 AM, Alan Grayson wrote:
My take on AI; it's no more dangerous than present day computers, 
because it has no WILL, and can only do what it's told to do. I 
suppose it could be told to do bad things, and if it has inherent 
defenses, it can't be stopped, like Gort in The Day the Earth Stood 
Still. AG 


The danger is not so much in AI being told to do bad things, but that in 
doing the good things it was told to do it uses unforeseen methods that 
have disastrous consequences.  It's like Henry Ford was told to invent 
fast, convenient personal transportation...and created traffic jams and 
global warming.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/a7a66ac7-bf3d-aff4-1483-ab30c11ebfaa%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List



On 9/16/2019 4:43 AM, Bruno Marchal wrote:


On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
> wrote:




On 9/15/2019 5:18 AM, Bruno Marchal wrote:

Why would it even have a simple goal like "survive”?

It is a short code which makes the organism better for eating and avoiding 
being eaten.


An organism needs to eat and avoid being eaten because that what 
evolution selects.  AIs don't evolve by natural selection.


A monist who embeds the subject in the object will not take the 
difference between artificial and natural too seriously, as that 
difference is artificial, and thus natural for entities developing a 
super-ego.


Machines and AI do develop by natural/artificial selection, notably 
through economic pressure. The computers need to “earn their life” 
by doing some work for us. It is only one loop more in the evolution 
process. That is not new. Jacques Lafitte wrote a book in 1911 
(published in 1930) where he argues that the development of machines is 
a collateral development of humanity, and that this is the 
continuation of evolution.


You are just muddling the point.  Computers don't evolve by random 
variation with descent and natural (or artificial) selection.  They 
evolve to satisfy us.  As such they do not need, and therefore won't 
have, motives to eat or be eaten or to reproduce...unless we provide 
them or we allow them to develop by random variation.












And to help yourself is saying no more that it will have some fundamental goal...otherwise 
there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.


For "helps" and "hurts" what?  Successful replication?



No. Happiness. The goal is happiness. We forget this because some 
bandits have brainwashed us with the idea that happiness is a sin (to 
steal our money).
The goal is happiness, serenity, contemplation, pleasure, joy, … and 
recognising ourselves in as many others as possible. To find unity in 
the many, and the many in unity.


Happiness is also rising above others and discovering new things they 
don't know, conquering new realms.  Many different things make people 
happy, at least temporarily.  So how do you know there is some 
"fundamental goal".  Darwinian evolution is a theory within which you 
can prove that reproduction will be a fundamental goal of most 
creatures.  But that proof doesn't work for manufactured objects.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/bb6bc396-42ca-4245-45a9-6a93bc4ad5de%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Alan Grayson


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but 
> they all flunked, the best one only got 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along 
> but in the last few years things started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> <https://arxiv.org/pdf/1909.01958.pdf>
>
> John K Clark
>

My take on AI; it's no more dangerous than present day computers, because 
it has no WILL, and can only do what it's told to do. I suppose it could be 
told to do bad things, and if it has inherent defenses, it can't be 
stopped, like Gort in The Day the Earth Stood Still. AG 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/a4c2a1b4-ffaf-4fcb-af3b-2aaf3202047d%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Bruno Marchal

> On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/15/2019 5:18 AM, Bruno Marchal wrote:
>>> Why would it even have a simple goal like "survive”? 
>> It is a short code which makes the organism better for eating and avoiding 
>> being eaten.
> 
> An organism needs to eat and avoid being eaten because that what evolution 
> selects.  AIs don't evolve by natural selection.

A monist who embeds the subject in the object will not take the difference 
between artificial and natural too seriously, as that difference is 
artificial, and thus natural for entities developing a super-ego.

Machines and AI do develop by natural/artificial selection, notably through 
economic pressure. The computers need to “earn their life” by doing some 
work for us. It is only one loop more in the evolution process. That is not 
new. Jacques Lafitte wrote a book in 1911 (published in 1930) where he argues 
that the development of machines is a collateral development of humanity, and 
that this is the continuation of evolution. 




> 
>> 
>> 
>> 
>> 
>>> And to help yourself is saying no more that it will have some fundamental 
>>> goal...otherwise there's no distinction between "help" and "hurt”.
>> It helps to eat, it hurts to be eaten. It is the basic idea.
> 
> For "helps" and "hurts" what?  Successful replication?


No. Happiness. The goal is happiness. We forget this because some bandits have 
brainwashed us with the idea that happiness is a sin (to steal our money). 
The goal is happiness, serenity, contemplation, pleasure, joy, … and 
recognising ourselves in as many others as possible. To find unity in the many, 
and the many in unity.

Bruno



> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/6cbdd7be-9474-ceb0-86fa-7e269c9c8a71%40verizon.net
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/62FF5446-9917-4265-96BF-B95CF3C3233D%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Bruno Marchal

> On 15 Sep 2019, at 14:51, Alan Grayson  wrote:
> 
> 
> 
> On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
> 
> 
> On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson > wrote:
> 
> >> The only thing I can ascribe consciousness to with absolute certainty is 
> >> me. As for intelligence, if something, man or machine, has no way of 
> >> knowing when it made a mistake or got a question wrong it will never get 
> >> any better, but if it has feedback and can improve its ability to 
>> correctly answer difficult questions then it is intelligent. The only 
> >> reason I ascribe intelligence to Einstein is that he greatly improved his 
> >> ability to answer difficult physics questions (like what is the nature of 
> >> space and time?), he was much better at it when he was 27 than when he was 
> >> 7.  
> 
> > The point I am making is that modern computers programmed by skillful 
> > programmers, can improve the "AI"'s performance.
> 
> Well yes. Obviously a skilled programer can improve a AI but that's not the 
> only thing that can, a modern AI programs can improve its own performance.
> 
> I just meant to indicate it can be programmed to improve its performance, but 
> I see nothing to indicate that it's much different from ordinary computers 
> which don't show any property associated with, for want of a better word, 
> WILL. AG 
>  
> > I see nothing to specially characterize this as "artifical intelligence". 
> > What am I missing from your perspective? AG
> 
> It's certainly artificial and if computers had never been invented and a 
> human did exactly what the computer did you wouldn't hesitate for one 
> nanosecond in calling what the human did intelligent, so why in the world 
> isn't it Artificial Intelligence?  
> 
> OK, AG 
> 
>  John K Clark
> 
> Bruno seems to think that if some imaginary entity is "computable", it can 
> and must exist as a "physical” entity

Not really. I am claiming that, once we assume mechanism (like Darwin, 
Descartes, Turing, …), then the physical reality cannot be a primary thing, 
i.e. something that we have to assume to get a theory of prediction and 
observation. If something exists in some fundamental sense, it is not as a 
physical object, but as a mathematical object. Then Digital Mechanism lets us 
choose which Turing universal system (a purely mathematical, even arithmetical 
notion) to postulate, and as elementary arithmetic is such a Universal system, 
I use that one, as people are familiar with it since primary school.



> -- which is why I think he adds "mechanism" to his model for producing 
> conscious beings.

The hypothesis of Mechanism is the hypothesis that there is a level of 
description of the functioning of my brain such that I would survive, in the 
usual clinical sense, with a computer emulating my brain at that level. It is a 
very weak version of Mechanism, as no bound is put on that description level, 
as long as it exists and is digitally emulable. Typically Penrose is the only 
scientist explicitly negating Mechanism, whereas Hameroff is still a mechanist. 
My reasoning works through even if the brain is a quantum computer, thanks to 
Deutsch’s result that a QC does not violate the Church-Turing thesis.



> But this, if correct, seems no different from equating a map to a territory.

That is correct. But that is because a brain is already a sort of map, and a 
sufficiently precise copy of a map is a map.



> If we can write the DNA of a horse with a horn, does this alone ipso facto 
> imply that unicorns are existent beings? AG 


That depends on the definition of unicorn. But staying alive-and-well is a more 
absolute value, one that you can judge when surviving an operation in a hospital, and 
the mechanist hypothesis is that we can survive with a digital brain 
transplant, like today we could say that we can survive with an artificial 
heart. That’s why I give an operational definition of “mechanism” by the fact 
that it means accepting the doctor’s proposition to replace the brain, or the 
body, by a computer.

The negation of Mechanism is much more speculative, because we don’t know any 
non-Turing-emulable phenomenon in nature (except the wave-packet reduction 
fantasy). 

Only ad hoc mathematical constructions show that some non-computable functions 
can be solutions of the Schroedinger Equation, like Nielsen’s A·e^{iHt} with H 
being a non-computable real number (like Post’s or Chaitin’s numbers).
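
A worked version of that kind of construction, sketched under an explicit
assumption (sign conventions aside; this illustrates the idea rather than
reproducing Nielsen’s exact setup): take H to be a constant real number that
is not computable, such as Chaitin’s Omega, and set hbar = 1.

    \[
      i\,\frac{d\psi}{dt} = H\,\psi
      \qquad\Longrightarrow\qquad
      \psi(t) = A\,e^{-iHt}.
    \]
    % If A is computable and nonzero, and \psi(1) = A e^{-iH} were a computable
    % complex number, then H (mod 2\pi) would be computable by taking a
    % logarithm, contradicting the choice of H.  So the trajectory is not
    % Turing-emulable even though the equation itself is elementary.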

Bruno




> 
> 
> 
> 
>  
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everyth...@googlegroups.com <>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com
>  
> 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread 'Brent Meeker' via Everything List



On 9/15/2019 5:18 AM, Bruno Marchal wrote:

Why would it even have a simple goal like "survive”?

It is a short code which makes the organism better for eating and avoiding 
being eaten.


An organism needs to eat and avoid being eaten because that what 
evolution selects.  AIs don't evolve by natural selection.








And to help yourself is saying no more that it will have some fundamental goal...otherwise 
there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.


For "helps" and "hurts" what?  Successful replication?

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/6cbdd7be-9474-ceb0-86fa-7e269c9c8a71%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Philip Thrift


On Sunday, September 15, 2019 at 7:51:55 AM UTC-5, Alan Grayson wrote:
>
>
>
> On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
>>
>>
>>
>> *Bruno seems to think that if some imaginary entity is "computable", it 
> can and must exist as a "physical" entity -- which is why I think he adds 
> "mechanism" to his model for producing conscious beings. But this, if 
> correct, seems no different from equating a map to a territory. If we can 
> write the DNA of a horse with a horn, does this alone ipso facto imply that 
> unicorns are existent beings? AG *
>

Ones that don't fly:

https://kera.pbslearningmedia.org/resource/unicorn-dna/unicorn-dna/

@philipthrift 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/02172c38-3c3b-42c0-8bd7-e62a034fa19e%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Alan Grayson


On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
>
>
>
> On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
>>
>> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  wrote:
>>
>> >> The only thing I can ascribe consciousness to with absolute certainty 
 is me. As for intelligence, if something, man or machine, has no way of 
 knowing when it made a mistake or got a question wrong it will never 
 get any better, but if it has feedback and can improve its ability to 
 correctly answer difficult questions then it is intelligent. The only 
 reason 
 I ascribe intelligence to Einstein is that he greatly improved his ability 
 to answer difficult physics questions (like what is the nature of space 
 and 
 time?), he was much better at it when he was 27 than when he was 7.  

>>>
>>> *> The point I am making is that modern computers programmed by skillful 
>>> programmers, can improve the "AI"'s performance. *
>>>
>>
>> Well yes. Obviously a skilled programer can improve a AI but that's not 
>> the only thing that can, a modern AI programs can improve its own 
>> performance.
>>
>
> I just meant to indicate it can be programmed to improve its performance, 
> but I see nothing to indicate that it's much different from ordinary 
> computers which don't show any property associated with, for want of a 
> better word, WILL. AG 
>
>>  
>>
>>> *> I see nothing to specially characterize this as "artifical 
>>> intelligence". What am I missing from your perspective? AG*
>>>
>>
>> It's certainly artificial and if computers had never been invented and a 
>> human did exactly what the computer did you wouldn't hesitate for one 
>> nanosecond in calling what the human did intelligent, so why in the world 
>> isn't it Artificial Intelligence?  
>>
>
> OK, AG 
>
>>
>>  John K Clark
>>
>
*Bruno seems to think that if some imaginary entity is "computable", it can 
and must exist as a "physical" entity -- which is why I think he adds 
"mechanism" to his model for producing conscious beings. But this, if 
correct, seems no different from equating a map to a territory. If we can 
write the DNA of a horse with a horn, does this alone ipso facto imply that 
unicorns are existent beings? AG *

>
>>
>>
>>
>>  
>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to everyth...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com
>>>  
>>> 
>>> .
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/30bd8cd9-3132-4699-8437-3a22b4c6d293%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Bruno Marchal


> On 13 Sep 2019, at 23:25, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/13/2019 4:02 AM, Bruno Marchal wrote:
>>> On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
>>>  wrote:
>>> 
>>> 
>>> 
>>> On 9/11/2019 9:33 PM, Tomasz Rola wrote:
 On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything 
 List wrote:
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
>> List wrote:
>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
 On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via 
 Everything List wrote:
> Why escape to space when there a lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
 [...]
 
 You reason like human - "I will stay here because it is nice and I can
 have internet".
 
 [...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures.
> 
>  Humans in tribes rule the world.

 This is just one of those godlike delusions I have written
 about. Either this or you can name even one such tribe. Hint: explain
 how many earthquakes and volcanic eruptions those rulers have
 prevented during last decade.
>>> I only meant relative to other sentient beings.  Of course no one has 
>>> changed the speed of light either and neither will a super-AI. My point is 
>>> that cooperation is an inherent trait of humans, selected by evolution.  
>>> But an AI will not necessarily have that trait.
>> There is no total (everywhere defined) universal Turing machine, so they 
>> are born with a conflict between security (limiting themselves to a subset of 
>> the total recursive functions) and liberty/universality (getting all total 
>> computable functions, but then also some strictly partial ones, and never 
>> being able to know that in advance).
>> That explains why universal machines are never satisfied and evolve, in 
>> an escaping-forward sort of way. Cooperation and evolution are inevitable in 
>> this setting.
> 
> Cooperation with who? 

In between the universal machines.



> and at what cost? 

The risk of losing our universality/liberty, like when being exploited. That 
can lead to the appearance of a new universal machine, like when cells 
cooperate in a multicellular organism: many will specialise in one task, like 
muscle cells, digestive cells, or neurones, etc. They remain universal, 
but can no longer exercise their universality. But the new organism will be able 
to do that, sooner or later.





> That's like saying our cooperation with cattle is inevitable.


It is a very particular case, but it was probably inevitable, although this 
form of cooperation is more like exploitation. The cattle do not benefit much 
when “cooperating” with humans, nor do the aphids when used by ants for their 
“honey”. Well, they do get some protection from predators, like the cattle get 
some protection from the wolves.



>> 
>> 
>> 
>> 
 [...]
>> nice air of being godlike. Again, I guess AI will have no need for
>> feeling like this, or not much of feelings at all. Feeling is
>> adversarial to judgement.
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take.
 I disagree. I can easily give something a value without feeling about
 it. Example: gold is just a yellow metal. I know other people value it
 a lot, so I might preserve it for trading, but it does not make very
 good knives. Highly impractical in the woods or for plowing
 fields. But it might be used for catching fish, perhaps. They seem to
 like swallowing little blinking things attached to a hook.
>>> I was referring to fundamental values.  Of course many things, like gold 
>>> and fish hooks, have instrumental value which derive from there usefulness 
>>> in satisfying fundamental values, the ones that correlate with feelings.  
>>> If the AI has no fundamental values, it will have no instrumental ones too.
>> It will have all of this with simple universal goal, like “help yourself”, 
>> or “do whatever it takes to survive”.
> 
> Why would it even have a simple goal like "survive”? 

It is a short code which makes the organism better for eating and avoiding 
being eaten.




> And to help yourself is saying no more that it will have some fundamental 
> goal...otherwise there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.

Bruno



> 
> Brent
> 
>> That can be expressed through small codes (genetic, or not). The probability 
>> that such code appears on Earth might still be very low, making us rare in 
>> the local physical reality, even if provably 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread 'Brent Meeker' via Everything List




On 9/13/2019 4:02 AM, Bruno Marchal wrote:

On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
 wrote:



On 9/11/2019 9:33 PM, Tomasz Rola wrote:

On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 10:16 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there a lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]

Cooperation is one of our most important survival strategies.  Lone
human beings are food for vultures.

  Humans in tribes rule the world.


This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.

I only meant relative to other sentient beings.  Of course no one has changed 
the speed of light either and neither will a super-AI. My point is that 
cooperation is an inherent trait of humans, selected by evolution.  But an AI 
will not necessarily have that trait.

There is no total (everywhere defined) universal Turing machine, so they are 
born with a conflict between security (limiting themselves to a subset of the total 
recursive functions) and liberty/universality (getting all total computable 
functions, but then also some strictly partial ones, and never being able to know 
that in advance).
That explains why universal machines are never satisfied and evolve, in an 
escaping-forward sort of way. Cooperation and evolution are inevitable in this 
setting.


Cooperation with who?  and at what cost?  That's like saying our 
cooperation with cattle is inevitable.






[...]

nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much of feelings at all. Feeling is
adversarial to judgement.

I disagree.  Feeling is just the mark of value,  and values are
necessary for judgement, at least any judgment of what action to
take.

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.

I was referring to fundamental values.  Of course many things, like gold and 
fish hooks, have instrumental value which derive from there usefulness in 
satisfying fundamental values, the ones that correlate with feelings.  If the 
AI has no fundamental values, it will have no instrumental ones too.

It will have all of this with simple universal goal, like “help yourself”, or 
“do whatever it takes to survive”.


Why would it even have a simple goal like "survive"?  And to help 
yourself is saying no more than that it will have some fundamental 
goal...otherwise there's no distinction between "help" and "hurt".


Brent


That can be expressed through small codes (genetic, or not). The probability 
that such code appears on Earth might still be very low, making us rare in the 
local physical reality, even if provably infinitely numerous in the global 
arithmetical reality.

Bruno



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/b212022f-9313-a6c3-6309-61ab0719fd9a%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
>
> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  > wrote:
>
> >> The only thing I can ascribe consciousness to with absolute certainty 
>>> is me. As for intelligence, if something, man or machine, has no way of 
>>> knowing when it made a mistake or got a question wrong it will never 
>>> get any better, but if it has feedback and can improve its ability to 
>>> correctly answer difficult questions then it is intelligent. The only reason 
>>> I ascribe intelligence to Einstein is that he greatly improved his ability 
>>> to answer difficult physics questions (like what is the nature of space and 
>>> time?), he was much better at it when he was 27 than when he was 7.  
>>>
>>
>> *> The point I am making is that modern computers programmed by skillful 
>> programmers, can improve the "AI"'s performance. *
>>
>
> Well yes. Obviously a skilled programer can improve a AI but that's not 
> the only thing that can, a modern AI programs can improve its own 
> performance.
>

I just meant to indicate it can be programmed to improve its performance, 
but I see nothing to indicate that it's much different from ordinary 
computers which don't show any property associated with, for want of a 
better word, WILL. AG 

>  
>
>> *> I see nothing to specially characterize this as "artifical 
>> intelligence". What am I missing from your perspective? AG*
>>
>
> It's certainly artificial and if computers had never been invented and a 
> human did exactly what the computer did you wouldn't hesitate for one 
> nanosecond in calling what the human did intelligent, so why in the world 
> isn't it Artificial Intelligence?  
>

OK, AG 

>
>  John K Clark
>
>
>
>
>  
>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everyth...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/24ddac74-46f5-4267-9cdf-dba7db95dfe5%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread John Clark
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  wrote:

>> The only thing I can ascribe consciousness to with absolute certainty is
>> me. As for intelligence, if something, man or machine, has no way of
>> knowing when it made a mistake or got a question wrong it will never get any
>> better, but if it has feedback and can improve its ability to correctly
>> answer difficult questions then it is intelligent. The only reason I ascribe
>> intelligence to Einstein is that he greatly improved his ability to answer
>> difficult physics questions (like what is the nature of space and time?),
>> he was much better at it when he was 27 than when he was 7.
>>
>
> *> The point I am making is that modern computers programmed by skillful
> programmers, can improve the "AI"'s performance. *
>

Well yes. Obviously a skilled programmer can improve an AI, but that's not the
only thing that can: a modern AI program can improve its own performance.


> *> I see nothing to specially characterize this as "artifical
> intelligence". What am I missing from your perspective? AG*
>

It's certainly artificial and if computers had never been invented and a
human did exactly what the computer did you wouldn't hesitate for one
nanosecond in calling what the human did intelligent, so why in the world
isn't it Artificial Intelligence?

 John K Clark






> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv25Lv1v2KJ-NZZVF3gQ%2BNsPHctUardN3%3DAO2TMFX-tQaw%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Friday, September 13, 2019 at 6:15:08 AM UTC-6, John Clark wrote:
>
> On Fri, Sep 13, 2019 at 3:35 AM Alan Grayson  > wrote:
>
> *> If it knows which questions it got wrong, and the correct reply, it 
>> could easily be programmed to improve over time without ascribing 
>> "intelligence" or "consciousness" to it.  Can't you admit that? AG*
>
>
> The only thing I can ascribe consciousness to with absolute certainty is 
> me. As for intelligence, if something, man or machine, has no way of 
> knowing when it made a mistake or got a question wrong it will never get any 
> better, but if it has feedback and can improve its ability to correctly 
> answer difficult questions then it is intelligent. The only reason I ascribe 
> intelligence to Einstein is that he greatly improved his ability to answer 
> difficult physics questions (like what is the nature of space and time?), 
> he was much better at it when he was 27 than when he was 7.  
>
> John K Clark  
>

The point I am making is that modern computers, programmed by skillful 
programmers, can improve the "AI"'s performance. I see nothing to 
specially characterize this as "artificial intelligence". What am I missing 
from your perspective? AG

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread John Clark
On Fri, Sep 13, 2019 at 3:35 AM Alan Grayson  wrote:

*> If it knows which questions it got wrong, and the correct reply, it
> could easily be programmed to improve over time without ascribing
> "intelligence" or "consciousness" to it.  Can't you admit that? AG*


The only thing I can ascribe consciousness to with absolute certainty is
me. As for intelligence, if something, man or machine, has no way of
knowing when it made a mistake or got a question wrong it will never get any
better, but if it has feedback and can improve its ability to correctly
answer difficult questions then it is intelligent. The only reason I ascribe
intelligence to Einstein is that he greatly improved his ability to answer
difficult physics questions (like what is the nature of space and time?),
he was much better at it when he was 27 than when he was 7.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2XDaixLQHzHqMi9ySeD%3D0-vsbSCokDXM8iNSAUpOhdvw%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Bruno Marchal


> On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/11/2019 9:33 PM, Tomasz Rola wrote:
>> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
>> wrote:
>>> 
>>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
 On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
 List wrote:
> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
>> List wrote:
>>> Why escape to space when there a lots of resources here?  An AI with
>>> access to everything connected to the internet shouldn't have any
>>> trouble taking control of the Earth.
>> [...]
>> 
>> You reason like human - "I will stay here because it is nice and I can
>> have internet".
>> 
>> [...]
>>> Cooperation is one of our most important survival strategies.  Lone
>>> human beings are food for vultures.
>>> 
>>>  Humans in tribes rule the world.
>>
>> This is just one of those godlike delusions I have written
>> about. Either this or you can name even one such tribe. Hint: explain
>> how many earthquakes and volcanic eruptions those rulers have
>> prevented during last decade.
> 
> I only meant relative to other sentient beings.  Of course no one has changed 
> the speed of light either and neither will a super-AI. My point is that 
> cooperation is an inherent trait of humans, selected by evolution.  But an AI 
> will not necessarily have that trait.

There is no total (everywhere defined) universal Turing machine, so they are 
born with a conflict between security (limiting themselves to a subset of the total 
recursive functions) and liberty/universality (getting all total computable 
functions, but then also some strictly partial ones, and never being able to know 
that in advance).
That explains why universal machines are never satisfied and evolve, in an 
escaping-forward sort of way. Cooperation and evolution are inevitable in this 
setting.
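
A sketch of why there can be no total universal machine (an illustration only:
the finite list below stands in for an effective enumeration of all total
programs, and every name in it is a made-up example), in Python:

    def f0(n): return 0
    def f1(n): return n + 1
    def f2(n): return n * n

    programs = [f0, f1, f2]        # stand-in for "all total programs"

    def U(i, n):
        # A would-be total "universal" function for the list above.
        return programs[i](n)

    def diagonal(n):
        # Total and computable, yet differs from programs[n] at input n,
        # so it cannot occur anywhere in the enumeration that U covers.
        return U(n, n) + 1

    for i, f in enumerate(programs):
        assert diagonal(i) != f(i)   # the diagonal escapes every row

The same argument, run over a genuine enumeration of all total computable
functions, is what forces a universal machine to accept strictly partial
programs as well, without being able to know in advance which are which.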




> 
>> 
>> [...]
 nice air of being godlike. Again, I guess AI will have no need for
 feeling like this, or not much of feelings at all. Feeling is
 adversarial to judgement.
>>> I disagree.  Feeling is just the mark of value,  and values are
>>> necessary for judgement, at least any judgment of what action to
>>> take.
>> I disagree. I can easily give something a value without feeling about
>> it. Example: gold is just a yellow metal. I know other people value it
>> a lot, so I might preserve it for trading, but it does not make very
>> good knives. Highly impractical in the woods or for plowing
>> fields. But it might be used for catching fish, perhaps. They seem to
>> like swallowing little blinking things attached to a hook.
> 
> I was referring to fundamental values.  Of course many things, like gold and 
> fish hooks, have instrumental value which derive from there usefulness in 
> satisfying fundamental values, the ones that correlate with feelings.  If the 
> AI has no fundamental values, it will have no instrumental ones too.

It will have all of this with a simple universal goal, like “help yourself”, or 
“do whatever it takes to survive”. That can be expressed through small codes 
(genetic, or not). The probability that such code appears on Earth might still 
be very low, making us rare in the local physical reality, even if provably 
infinitely numerous in the global arithmetical reality.

Bruno




> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/fdccc63f-60ac-6644-adc4-60151b17a878%40verizon.net.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/21EA959D-E5F9-4117-ADF7-8424B91EB3F1%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but 
> they all flunked, the best one only got 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along 
> but in the last few years things started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> <https://arxiv.org/pdf/1909.01958.pdf>
>
> John K Clark
>

If it knows which questions it got wrong, and the correct reply, it could 
easily be programmed to improve over time without ascribing "intelligence" 
or "consciousness" to it.  Can't you admit that? AG

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/eb234667-cee9-4d43-9708-3aad879b655f%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-12 Thread spudboy100 via Everything List
Well, I suppose we all will find out in the next few years regarding AI 
cooperation. My guess is the smarter these get, the more they will dovetail or 
fit in with human needs and wants. I sort of see these, after much development, 
becoming one with the human species. Think of it as the brain 
going beyond the amygdala and adding the cerebrum and cerebellum. Or, you got 
chocolate on my peanut butter, but you got peanut butter on my chocolate! Or, 
endosymbiosis - http://bioscience.jbpub.com/cells/MBIO1322.aspx  Maybe we get to 
be the emotional part of this new species? We get the graphene bodies, so 
useful for interstellar travel. 


-Original Message-
From: 'Brent Meeker' via Everything List 
To: everything-list 
Sent: Thu, Sep 12, 2019 12:52 am
Subject: Re: An AI can now pass a 12th-Grade Science Test



On 9/11/2019 9:33 PM, Tomasz Rola wrote:
> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
> wrote:
>>
>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
>>> List wrote:
>>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
>>>>> List wrote:
>>>>>> Why escape to space when there a lots of resources here?  An AI with
>>>>>> access to everything connected to the internet shouldn't have any
>>>>>> trouble taking control of the Earth.
> [...]
>
> You reason like human - "I will stay here because it is nice and I can
> have internet".
>
> [...]
>> Cooperation is one of our most important survival strategies.  Lone
>> human beings are food for vultures.
>>
>>  Humans in tribes rule the world.
>    
> This is just one of those godlike delusions I have written
> about. Either this or you can name even one such tribe. Hint: explain
> how many earthquakes and volcanic eruptions those rulers have
> prevented during last decade.

I only meant relative to other sentient beings.  Of course no one has 
changed the speed of light either and neither will a super-AI. My point 
is that cooperation is an inherent trait of humans, selected by 
evolution.  But an AI will not necessarily have that trait.

>
> [...]
>>> nice air of being godlike. Again, I guess AI will have no need for
>>> feeling like this, or not much of feelings at all. Feeling is
>>> adversarial to judgement.
>> I disagree.  Feeling is just the mark of value,  and values are
>> necessary for judgement, at least any judgment of what action to
>> take.
> I disagree. I can easily give something a value without feeling about
> it. Example: gold is just a yellow metal. I know other people value it
> a lot, so I might preserve it for trading, but it does not make very
> good knives. Highly impractical in the woods or for plowing
> fields. But it might be used for catching fish, perhaps. They seem to
> like swallowing little blinking things attached to a hook.

I was referring to fundamental values.  Of course many things, like gold 
and fish hooks, have instrumental value which derive from there 
usefulness in satisfying fundamental values, the ones that correlate 
with feelings.  If the AI has no fundamental values, it will have no 
instrumental ones too.

Brent

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/fdccc63f-60ac-6644-adc4-60151b17a878%40verizon.net.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/2076974551.6314702.1568275265639%40mail.yahoo.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread 'Brent Meeker' via Everything List




On 9/11/2019 9:33 PM, Tomasz Rola wrote:

On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:


On 9/9/2019 10:16 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there a lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]

Cooperation is one of our most important survival strategies.  Lone
human beings are food for vultures.

  Humans in tribes rule the world.


This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.


I only meant relative to other sentient beings.  Of course no one has 
changed the speed of light either and neither will a super-AI. My point 
is that cooperation is an inherent trait of humans, selected by 
evolution.  But an AI will not necessarily have that trait.




[...]

nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much of feelings at all. Feeling is
adversarial to judgement.

I disagree.  Feeling is just the mark of value,  and values are
necessary for judgement, at least any judgment of what action to
take.

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.


I was referring to fundamental values.  Of course many things, like gold 
and fish hooks, have instrumental value which derive from there 
usefulness in satisfying fundamental values, the ones that correlate 
with feelings.  If the AI has no fundamental values, it will have no 
instrumental ones too.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/fdccc63f-60ac-6644-adc4-60151b17a878%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread Tomasz Rola
On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:
> 
> 
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
> >wrote:
> >>
> >>On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >>>On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
> >>>List wrote:
> Why escape to space when there a lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures. 
> 
>  Humans in tribes rule the world.

   
This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.

[...]
> >nice air of being godlike. Again, I guess AI will have no need for
> >feeling like this, or not much of feelings at all. Feeling is
> >adversarial to judgement.
> 
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take.  

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.

> So the question is what will the AI value?  Will it value
> information?  

Nothing can be said for sure and there may be many different kinds of
AI. But if it values nothing, it will have no need to do anything.

[...]
> >I assume that ultimately, AI will want to go somewhere safe, and Earth
> >is full of crazy apes with big guns.
> 
> Assuming this super-AI values self-preservation (which it might not)
> it will make copies of itself and it will easily dispose of all the
> apes via its control of the power grid, hospitals, nuclear power
> plants, biomedical research facilities, ballistic missiles, etc.

There are catastrophic events for which the best bet would be to
colonize a sphere of, say, 1000ly radius. A 500ly radius is not bad
either, and might be more practical (sending an end-to-end message
would only take 1000 years).
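For reference, the parenthetical arithmetic, assuming a light-speed
signal crossing the full diameter of the colonized sphere:

    t = 2R / c,  so  t(R = 500 ly) = 1000 yr  and  t(R = 1000 ly) = 2000 yr.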

[...]
> >maybe exchange of services. During that phase AI will see if there is
> >a prospect of upgrading humans, in order to have companionship in
> >space.
> 
> Why would it want companionship?  Even many quite smart animals are
> not social.  I don't see any reason the super-AI would care one whit
> about humans, except maybe as curiosities...the way some people like
> chihuahuas.

The way I spelled it, you could read my words as "partnership". There
will be no partnership, however. Humans on board will serve useful
purposes, similar to how we use canaries, lab rats and well-behaved
monkeys. Some humans may even reach the status of a cat.

I suppose AI will want to differentiate its mechanisms in order to
minimize the chance of its own catastrophic failure. In Fukushima and
Chernobyl, humans did the shitty jobs, not robots. From what I have
read, hard radiation broke the wiring of the robots and caused all kinds
of material degradation (with the suggestion that it happened so fast
that a robot could not do much). A human can survive a huge EMP and
keep going (even if he dies years later, he could do some useful job
first, like restarting systems).

There might be a better choice of materials and production processes to
improve the survival of electronics - the Voyagers and Pioneers keep
going after forty years, and the cause of failure there is the decaying
power supply. OTOH, the instruments they carry are all quite primitive
by today's measures - for example, no CPU (IIRC).

However, if one assumes that one does not know everything - and I
expect AI to be free from the godlike delusions so common among crazy
apes - then one will have to create many failsafe mechanisms, working
synergistically towards the goal of repairing the damage that the AI
may suffer. Having some biological organisms, loyal to the AI, would
just be part of this strategy.

[...]
> The AI isn't silicon, it's a program.  It can have new components
> made or even transition to different hardware (c.f. quantum
> computers).

A chess-playing program and the computer on which it runs are two
different things, agreed, because the computer can be turned off or
used to run something else.

The AI, the coffee vending machine and the human are each an
inseparable duo of software and hardware. Just MHO. Even if the
separation can be done, it might not be trivial.

I am quite sure there will be a lot of silicon in AI. And plenty of
other 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread John Clark
On Tue, Sep 10, 2019 at 7:29 PM 'Brent Meeker'  <
everything-list@googlegroups.com> wrote:

*>>> I think they would be careful NOT to have it value its survival. *
>
> >> I think that would mean the AI would need to be in intense constant
> pain for that to happen, or be deeply depressed like the robot Marvin in
> Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical
> to make such an AI.
>
> * > Why would it mean that?  Why wouldn't the AI agree with Bruno that it
> was just computation and it existed in Platonia anyway so it was
> indifferent to transient existence here?*
>

Because people on this list may say all sorts of screwy things when they
slip into philosophy mode but even Bruno will jump out of the way when he
crosses the street if he sees a bus coming straight for him, or at least he
will if he isn't in constant intense pain or is deeply depressed.

>> You can't outsmart someone smarter than you, the humans are never going
> to be able to shut it off unless the AI wants to be shut off.
>
>
> * > Exactly why you might program it to want to be shut off in certain
> circumstances.*
>

I have no doubt humans will put something like that in its code, but if the
AI has the ability to modify itself, and it wouldn't be much of an AI if it
didn't, then that code could be changed. And I have no doubt the humans
will put in all sorts of safeguards that the humans consider ingenious to
prevent the AI from doing that, but the fact remains you can't outsmart
something smarter than you.

* > Of course the problem with "We can always shut it off." is that once
> you rely on it, you don't dare shut it off because it knows better than you
> do and you know it knows better.*
>

Yes that's one very serious obstacle that prevents humans from just
shutting it off, but another problem is the Jupiter Brain knows you better
than you do, so it can find your weakness and can trick or charm or flatter
you to do what it wants.

 John K Clark



Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread Alan Grayson


On Tuesday, September 10, 2019 at 9:27:20 PM UTC-6, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 8:07:13 PM UTC-6, Alan Grayson wrote:
>>
>>
>>
>> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>>
>>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>>
>>> *> Why do you think this has anything to do with intelligence and 
 reasoning ability?*

>>>
>>> Oh for heaven's sake! This whistling past the graveyard is getting 
>>> ridiculous.
>>>
>>> John K Clark 
>>>
>>
>> Show me the reasoning ability. Nothing miraculous in recognizing the 
>> questions beforehand, and giving accurate replies. AG 
>>
>
> I think one can program a computer with grade 12 questions, and a computer 
> can use the keywords in the questions to infer the answers, or a close and 
> accurate reply, which are contained in a list. Since you know so much, tell 
> me why this can't be done. AG
>

I am claiming that the AI which seems to amaze you can be done with
ordinary computers and ordinary programming. AG 
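A minimal sketch of the keyword-lookup scheme AG describes, in Python
(the question bank and answers below are invented purely for
illustration and are not how the system in the article works):

    import re

    # Hypothetical answer bank: each entry maps a set of keywords to a
    # canned reply.  A real exam would need far more entries.
    ANSWER_BANK = {
        frozenset({"photosynthesis", "plants", "energy"}):
            "Plants convert sunlight into chemical energy.",
        frozenset({"mitochondria", "cell"}):
            "Mitochondria produce most of the cell's ATP.",
        frozenset({"newton", "second", "law"}):
            "Force equals mass times acceleration.",
    }

    def answer(question):
        # Lowercase the question, split it into words, and return the
        # stored reply whose keywords overlap the question the most.
        words = set(re.findall(r"[a-z]+", question.lower()))
        best = max(ANSWER_BANK, key=lambda keys: len(keys & words))
        return ANSWER_BANK[best] if best & words else "No match in the list."

    print(answer("What role do mitochondria play in a cell?"))
    # -> Mitochondria produce most of the cell's ATP.

Whether a fixed list like this could cover an exam it has never seen
before is, of course, exactly what is in dispute in this thread.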

>  
>>>
>>



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Alan Grayson


On Monday, September 9, 2019 at 8:07:13 PM UTC-6, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>
>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>
>> *> Why do you think this has anything to do with intelligence and 
>>> reasoning ability?*
>>>
>>
>> Oh for heaven's sake! This whistling past the graveyard is getting 
>> ridiculous.
>>
>> John K Clark 
>>
>
> Show me the reasoning ability. Nothing miraculous in recognizing the 
> questions beforehand, and giving accurate replies. AG 
>

I think one can program a computer with grade 12 questions, and a computer 
can use the keywords in the questions to infer the answers, or a close and 
accurate reply, which are contained in a list. Since you know so much, tell 
me why this can't be done. AG

>  
>>
>



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List



On 9/10/2019 3:58 PM, John Clark wrote:
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker' wrote:


/> Actually I think they would be careful NOT to have it value its
survival. /


I think that would mean the AI would need to be in intense constant 
pain for that to happen, or be deeply depressed like the robot Marvin 
in Hitchhiker's Guide to the Galaxy. And I think it would be grossly 
unethical to make such an AI.


Why would it mean that?  Why wouldn't the AI agree with Bruno that it 
was just computation and it existed in Platonia anyway so it was 
indifferent to transient existence here?



/> They would want to be able to shut it off. /


You can't outsmart someone smarter than you, the humans are never 
going to be able to shut it off unless the AI wants to be shut off.


Exactly why you might program it to want to be shut off in certain 
circumstances.


Of course the problem with "We can always shut it off." is that once you 
rely on it, you don't dare shut it off because it knows better than you 
do and you know it knows better.


Brent


> The problem is that there's no way to be sure that survival isn't
implicit in any other values you give it.


Exactly.

> /A neural network has knowledge rather in the way human intuition
embodies knowledge.  So it's useful in, say, predicting hurricanes.
But it doesn't provide us with a theory of predicting hurricanes;
it's more like an oracle./


There is a theory of thermodynamics, but there probably isn't a theory
of hurricane movement, not one where we could say it did this rather
than that for some simple reason X; it won't be simple, because X
probably contains a few thousand exabytes of data.


John K Clark




