Re: AlphaZero

2022-02-08 Thread Terren Suydam
On Tue, Feb 8, 2022 at 6:58 AM John Clark  wrote:

> On Mon, Feb 7, 2022 at 6:34 PM Terren Suydam 
> wrote:
>
> *> The problem with the real world of human enterprise (i.e. the domain in
>> which talk of replacing human programmers is relevant) is that AIs
>> currently cannot even be taught what the rules are*
>>
>
> I don't know what you mean by that, because silicon-based AlphaCode must
> know something about code writing, given that the very first time it was
> released into the wild, despite the difficulties involved in becoming a
> good program writer that you correctly point out, AlphaCode nevertheless
> managed to write better code than half the human programmers who used older
> meat-based technology. And as time goes by AlphaCode will get better, but
> humans will not get smarter.
>
>
I was talking about the rules of human enterprise, not coding.

Terren


> John K Clark    See what's on my new list at Extropolis
> 



Re: AlphaZero

2022-02-08 Thread John Clark
On Mon, Feb 7, 2022 at 6:34 PM Terren Suydam 
wrote:

*> The problem with the real world of human enterprise (i.e. the domain in
> which talk of replacing human programmers is relevant) is that AIs
> currently cannot even be taught what the rules are*
>

I don't know what you mean by that, because silicon-based AlphaCode must
know something about code writing, given that the very first time it was
released into the wild, despite the difficulties involved in becoming a
good program writer that you correctly point out, AlphaCode nevertheless
managed to write better code than half the human programmers who used older
meat-based technology. And as time goes by AlphaCode will get better, but
humans will not get smarter.

John K Clark    See what's on my new list at Extropolis



Re: AlphaZero

2022-02-07 Thread Brent Meeker



On 2/7/2022 3:33 PM, Terren Suydam wrote:



On Mon, Feb 7, 2022 at 5:25 PM John Clark  wrote:

On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam
 wrote:

>> When you learned how to code did you have to reinvent all
the programming languages and techniques and do it all on
your own with no help from teachers or friends or books or
fellow coders? Did you have to rediscover the wheel?


/> Did AlphaZero get any help from humans?/


No, but then writing good code is fundamentally more difficult
than playing good chess. The rules of chess can be learned in five
minutes, learning the rules for writing computer code would take
considerably longer.


That's exactly my point. Chess represents a narrow enough domain, with 
a limited enough set of operations, that an AI can teach itself to play, 
at least since AlphaZero anyway (and this itself was a huge achievement).


The problem with the real world of human enterprise (i.e. the domain 
in which talk of replacing human programmers is relevant) is that AIs 
currently cannot even be taught what the rules are, much less teach 
themselves to improve within the constraints of those rules. One day 
that will change, but we're not there yet. I say we're not even close.


The question is how much of human intelligence is inherent/genetic, having 
evolved over half a million years or longer (they didn't start from zero), 
vs how much a human programmer learns by example/experience/etc.  I don't 
think the latter is so great that an AI can't catch up pretty quickly, if 
not now.  The former, "hard-wired" part hasn't really been tackled by 
neural network AIs.  In animals and humans it evolved over a long time and 
it was shaped by the environment and the sensors available.  From the 
machine learning standpoint it's like making a robot with sensors and 
actuators and neural nets, and then programming in its goal: "Go forth and 
multiply."  Its randomized neural nets are short a few million years of 
training by nature.  But given the electronic vs. wet speed difference, 
plus some starting point better than just randomized, I expect it will get 
there in a decade or less.


Brent



Terren

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>


Re: AlphaZero

2022-02-07 Thread Terren Suydam
On Mon, Feb 7, 2022 at 5:25 PM John Clark  wrote:

> On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam 
> wrote:
>
> >> When you learned how to code did you have to reinvent all the
>>> programming languages and techniques and do it all on your own with no help
>>> from teachers or friends or books or fellow coders? Did you have to
>>> rediscover the wheel?
>>>
>>
>> *> Did AlphaZero get any help from humans?*
>>
>
> No, but then writing good code is fundamentally more difficult than
> playing good chess. The rules of chess can be learned in five minutes,
> learning the rules for writing computer code would take considerably longer.
>

That's exactly my point. Chess represents a narrow enough domain, with a
limited enough set of operations, that an AI can teach itself to play, at
least since AlphaZero anyway (and this itself was a huge achievement).

The problem with the real world of human enterprise (i.e. the domain in
which talk of replacing human programmers is relevant) is that AIs
currently cannot even be taught what the rules are, much less teach
themselves to improve within the constraints of those rules. One day that
will change, but we're not there yet. I say we're not even close.

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>



Re: AlphaZero

2022-02-07 Thread John Clark
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam 
wrote:

>> When you learned how to code did you have to reinvent all the
>> programming languages and techniques and do it all on your own with no help
>> from teachers or friends or books or fellow coders? Did you have to
>> rediscover the wheel?
>>
>
> *> Did AlphaZero get any help from humans?*
>

No, but then writing good code is fundamentally more difficult than playing
good chess. The rules of chess can be learned in five minutes, learning the
rules for writing computer code would take considerably longer.

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>


Re: AlphaZero

2022-02-07 Thread Terren Suydam
On Mon, Feb 7, 2022 at 4:24 PM John Clark  wrote:

> On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam 
> wrote:
>
>>> Terren, we both know that's not the way AlphaCode works, it's a bit
>>> more complicated than that.
>>
>> *That is how it works. All I left out is the part about how it generates
>> the guesses*
>>
>
> Besides that, Mrs. Lincoln, how did you like the play?
>

LOL good one. My point is that it must generate millions of guesses just to
get a handful that actually work. And I'll readily admit that the fact that
it only takes a million (or whatever) tries to get something that works is
actually damned impressive, and makes AlphaCode worthy of the attention
it's getting.

We can argue about how much improvement in the guesswork is possible - you
might argue that in the future it will only need 1000 guesses, or fewer. My
argument is that you won't get there without a fundamentally different
strategy.


>
>> > *I predict the current AlphaCode strategy will never lead to putting
>> engineers out of work, or self-improvement of the type that would lead to
>> the singularity. This is not like the AlphaZero strategy in which it
>> teaches itself the game. If they come out with a new code-writing AI that
>> teaches itself to code, then I will happily change my tune.*
>>
>
> When you learned how to code did you have to reinvent all the programming
> languages and techniques and do it all on your own with no help from
> teachers or friends or books or fellow coders? Did you have to rediscover
> the wheel?
>

Did AlphaZero get any help from humans?

Terren

John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>
>


Re: AlphaZero

2022-02-07 Thread John Clark
On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam 
wrote:

>> Terren, we both know that's not the way AlphaCode works, it's a bit more
>> complicated than that.
>
> *That is how it works. All I left out is the part about how it generates
> the guesses*
>

Besides that, Mrs. Lincoln, how did you like the play?


> > *I predict the current AlphaCode strategy will never lead to putting
> engineers out of work, or self-improvement of the type that would lead to
> the singularity. This is not like the AlphaZero strategy in which it
> teaches itself the game. If they come out with a new code-writing AI that
> teaches itself to code, then I will happily change my tune.*
>

When you learned how to code did you have to reinvent all the programming
languages and techniques and do it all on your own with no help from
teachers or friends or books or fellow coders? Did you have to rediscover
the wheel?

A lot of problems are solvable in principle but not in practice because
>> they take too long, and time is money. The Fugaku Supercomputer in Japan,
>> the fastest in the world, cost one billion dollars, so I figure if you
>> could make its operating system run a little more efficiently so the
>> machine was just 10% faster you could sell that program for about $100
>> million.
>
>
> *> The $1B price tag isn't because of the OS, which will only use a tiny
> fraction of the available computing power. *
>

The Fugaku Supercomputer has 7,630,848 cores, and like all massively
parallel computers, most of the time most of those cores are not doing
anything because they're waiting to receive some bit of information they
need before they can start calculating; there's a big difference between
the peak performance of a supercomputer and the actual performance it
demonstrates when solving most problems. Any OS that can assign resources
more efficiently, so those inactive periods are reduced even slightly,
would have a big effect on the overall performance of the machine in
solving real-world problems, not in running benchmark FLOP-measuring
programs.
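
Back-of-the-envelope, the economics work out as claimed. In the sketch
below the utilization figures are invented purely for illustration; only
the ~442 petaflops HPL number is Fugaku's published benchmark result:

```python
# Toy arithmetic: how a small reduction in idle time translates into
# effective speed. The utilization fractions are assumptions, not data.
peak_flops = 442e15          # Fugaku HPL Rmax, roughly 442 PFLOPS
util_before = 0.50           # assumed fraction of time cores do useful work
util_after  = 0.55           # assumed after a slightly smarter scheduler

speedup = util_after / util_before - 1.0
print(f"effective speedup: {speedup:.0%}")                      # 10%
print(f"value at 10% of a $1B machine: ${1e9 * speedup:,.0f}")  # $100,000,000
```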

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>


Re: AlphaZero

2022-02-07 Thread John Clark
Scott Aaronson is one of the world's top experts on quantum computers, but
he also knows something about conventional computers and AI; yesterday he
gave his opinion on AlphaCode. After listing some of the program's
limitations he concludes with:


*"Forget all that. Judged against where AI was 20-25 years ago, when I was
a student, a dog is now holding meaningful conversations in English. And
people are complaining that the dog isn’t a very eloquent orator, that it
often makes grammatical errors and has to start again, that it took heroic
effort to train it, and that it’s unclear how much the dog really
understands. It’s not obvious how you go from solving programming contest
problems to conquering the human race or whatever, but I feel pretty
confident that we’ve now entered a world where “programming” will look
different."*

AlphaCode as a dog speaking mediocre English 


John K Clark    See what's on my new list at Extropolis



Re: AlphaZero

2022-02-06 Thread John Clark
On Sat, Feb 5, 2022 at 7:15 PM Terren Suydam 
wrote:


> *> Let's take a more accessible analogy. Let's say the problem description
> is: "Here's a locked door. Devise a key to unlock the door." *
>
> *A simplified analog to what AlphaCode does is the following:*
>
>    - *generate millions of different keys of various shapes and sizes*
>    - *for each key, try to unlock the door with it*
>    - *if it doesn't work, toss it*
>
>
A better analogy would be a lockpicker that breaks down a complex task into
a series of much simpler actions. Insert two lockpicks into the keyhole one
to provide torque and the other to manipulate the pins. Find the first pin
and see if it produces any resistance, if it doesn't then ignore it for the
time being and go onto the second pin, if it shows resistance then increase
the pressure until you hear a click or feel a slight rotation of the lock.
Continue this procedure until you get to the last pin (most locks have 6
pins and few have more than 8) then go back to pin one and go down the line
again. After three or four passes down the line you should be able to fully
rotate the lock and open the door.
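
Written as a loop, that procedure looks roughly like this. It's a toy
sketch: the "binding order" model below is a deliberate simplification
of how real pins behave, invented just for illustration:

```python
# Multi-pass single-pin picking as a loop: on each pass down the line,
# only the currently binding pin offers resistance and can be set.
def pick_lock(binding_order, max_passes=4):
    """binding_order: pin indices in the order they bind, e.g. [3, 0, 5, 1, 4, 2]."""
    set_pins = []
    for _ in range(max_passes):                  # three or four passes
        for pin in range(len(binding_order)):    # walk pin 1 .. pin N
            if pin in set_pins:
                continue                         # already clicked into place
            if binding_order.index(pin) == len(set_pins):
                set_pins.append(pin)             # resistance felt: press until click
            # no resistance: ignore this pin for now, go on to the next one
        if len(set_pins) == len(binding_order):
            return True                          # lock rotates fully, door opens
    return False

print(pick_lock([3, 0, 5, 1, 4, 2]))             # a 6-pin lock (most locks have 6)
```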


>
> *> If you think AlphaCode and Shakespeare have anything in common, then I
> don't think your assertions about AI are worth much.*
>

I see no evidence of a secret sauce that only human geniuses have, the
difference between a Turing Machine named William Shakespeare and a Turing
Machine named AlphaCode is a difference in degree and not of kind.


> > *If you think a brute-force "generate a million guesses until one
> works" ...*


Terren, we both know that's not the way AlphaCode works, it's a bit more
complicated than that.


> *> ... strategy has the same understanding as an algorithm that employs a
> detailed model of the domain and uses that model to generate a reasoned
> solution, regardless of the results, then it's you that is employing the
> unconventional meaning of "understand".*
>

You can insult that poor program all you want but it doesn't change the
fact that the very first time it was tried it managed to write better code
in that competition than half the humans who make their living by writing
code. And do you seriously doubt that AlphaCode will get better in the next
few years, a lot better? I humbly suggest you stop worrying about how a
computer can do what it does and start looking at what it is actually able
to do; if I can come up with the correct answer then from your point of
view it shouldn't matter how I was able to do it because it's still the
correct answer.

*> In the real world, you usually don't get to try something a million
> times until something works.*
>

I think that's probably untrue. I have a hunch that when you're trying to
solve a problem your brain proposes many trillions of different neural
patterns and nearly all of them are rejected because they don't work very
well, and most of the time you are not consciously aware of any of them
because you only have a finite memory capacity and there is little
evolutionary value in remembering them because they don't work, although
some of them on the borderline between acceptance and rejection may leave a
vague imprint. But a brain is fallible and sometimes a neural pattern that
would've solved the problem is rejected; perhaps smart people have a more
reliable rejection mechanism than stupid people, or are able to retrieve
that vague imprint and reconsider it. Einstein's notebooks show that about
a year before he finished General Relativity, and after he had been working
on it for nearly a decade, he came up with the correct solution but then for
some reason rejected it; after another year of fruitless work he
remembered it and decided to take a second look.


> >> You're talking about what would be more useful, I was talking about
>> what would be more complex. In general finding the smallest and fastest
>> program that can accomplish a given task is infinitely complex, that is to
>> say in general it's impossible to find the smallest program and prove it's
>> the smallest program.  Code optimization is very far from a trivial
>> problem.
>>
>
> *> I'm surprised you're focusing on the less useful direction to go in.*
>

I focused on that example to demonstrate that what AlphaCode did was not
trivial, far from it.


> > *If anything, your thinking tends to be very pragmatic.*
>

Thank you.


> *>Who cares if you can squeeze a few extra milliseconds out of an
> algorithm,*
>

A lot of problems are solvable in principle but not in practice because
they take too long, and time is money. The Fugaku Supercomputer in Japan,
the fastest in the world, cost one billion dollars, so I figure if you
could make its operating system run a little more efficiently so the
machine was just 10% faster you could sell that program for about $100
million.
John K Clark    See what's on my new list at Extropolis


Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 6:24 PM Brent Meeker  wrote:

>
> AlphaCode is not capable of reading code. It's a clever version of monkeys
> typing on typewriters until they bang out a Shakespeare play. Still counts
> as AI, but cannot be said to understand code.
>
>
> What does it mean "to read code"?  It can execute code from github,
> apparently, so it must read code well enough to execute it.  What more does
> it need to read?  You say it cannot be said to understand code.  Can you
> specify what would show it understands the code?
>
>
The github code is used to train a neural network that maps natural
language to code, and this neural network is used to generate candidate
solutions based on the natural language of the problem description. If you
want to say that represents a form of understanding, ok. But I would still
push back on the claim that it can "read code".


> I think you mean it tells you a story about how this part does that and
> this other part does something else, etc.  But that's just catering to your
> weak human brain that can't just "see" that the code solves the problem.
> The problem is that there is no species-independent meaning of "understand"
> except "make it work".  AlphaCode doesn't understand code like you do,
> because it doesn't think like you do and doesn't have the context you do.
>

There are times when I can read code and understand it, and times when I
can't. When I can understand it, I can reason about what it's doing; I can
find and fix bugs in it; I can potentially optimize it. I can see if this
code is useful for other situations. And yes, I can tell you a story about
what it's doing. AlphaCode is doing none of those things, because it's not
built to.


>
> Brent
>
>



Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 5:18 PM John Clark  wrote:

> On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam 
> wrote:
>
> * > I dug a little deeper into how AlphaCode works. It generates millions
>> of candidate solutions using a model trained on github code. It then
>> filters out 99% of those candidate solutions by running them against test
>> cases provided in the problem description and removing the ones that fail.
>> It then uses a different technique to whittle down the candidate solutions
>> from several thousand to just ten. *[...] AlphaCode is not capable of
>> reading code.
>>
>
> How on earth can it filter out 99% of the code because it is bad code if
> it cannot read code? Closer to home, how could somebody on this list tell
> the difference between a post they like and a post they didn't like if they
> couldn't read English?
>

Let's take a more accessible analogy. Let's say the problem description is:
"Here's a locked door. Devise a key to unlock the door."

A simplified analog to what AlphaCode does is the following:

   - generate millions of different keys of various shapes and sizes.
   - for each key, try to unlock the door with it
   - if it doesn't work, toss it

In order to say that the key-generating AI understands keys and locks,
you'd have to believe that a strategy that involves creating millions of
guesses until one works entails some kind of understanding.
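
In code, that strategy is roughly the following. This is a toy sketch;
every name and number in it is invented for illustration:

```python
# Generate-and-test: make random keys, try each in the lock, toss failures.
import random

def random_key(positions=6, depths=10):
    # a "key" is just a tuple of cut depths, one per pin position
    return tuple(random.randrange(depths) for _ in range(positions))

target = random_key()                    # the hidden shape that opens the door
tries = (random_key() for _ in range(2_000_000))
working = [k for k in tries if k == target]   # "try it; if it fails, toss it"

# with a 10**6 keyspace, ~2 of the 2,000,000 blind guesses should fit
print(f"found {len(working)} working key(s) out of 2,000,000 guesses")
```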

To your point that AlphaCode must have the ability to read code if it knows
how to toss incorrect candidates, that's like saying that the key-generator
must understand locks because it knows how to test if a key unlocks the
door.


>
>
>> * > Nobody, neither the AI nor the humans running AlphaCode, know if the
>> 10 solutions picked are correct.*
>>
>
> As Alan Turing said  "*If a machine is expected to be infallible, it
> cannot also be intelligent*."
>
> > It's a clever version of monkeys typing on typewriters until they bang
>> out a Shakespeare play. Still counts as AI,
>>
>
>   A clever version indeed!! In fact I would say that William Shakespeare
> himself was such a version.
>

If you think AlphaCode and Shakespeare have anything in common, then I
don't think your assertions about AI are worth much.


>
> *> Still counts as AI, but cannot be said to understand code.*
>
>
> I am a bit confused by your use of one word, you seem to be giving it a
> very unconventional meaning.  If you, being a human, "understand" code but
> the code you write is inferior to the code that an AI writes that doesn't
> "understand" code then I fail to see why any human or any machine would
> want to have an "understanding" of anything.
>

If you think a brute-force "generate a million guesses until one works"
strategy has the same understanding as an algorithm that employs a detailed
model of the domain and uses that model to generate a reasoned solution,
regardless of the results, then it's you that is employing the
unconventional meaning of "understand".

In the real world, you usually don't get to try something a million times
until something works.


>
> >> Just adding more input variables would be less complex than figuring
>>> out how to make a program smaller and faster.
>>>
>>
>> *> Think about it this way. There's diminishing returns on the strategy
>> to make the program smaller and faster, but potentially unlimited returns
>> on being able to respond to ever greater complexity in the problem
>> description.*
>>
>
> You're talking about what would be more useful, I was talking about what
> would be more complex. In general finding the smallest and fastest program
> that can accomplish a given task is infinitely complex, that is to say in
> general it's impossible to find the smallest program and prove it's the
> smallest program.  Code optimization is very far from a trivial problem.
>

I'm surprised you're focusing on the less useful direction to go in. If
anything, your thinking tends to be very pragmatic. Who cares if you can
squeeze a few extra milliseconds out of an algorithm, if you could instead
spend that effort doing something far more useful?


>
> John K Clark    See what's on my new list at Extropolis
> 


Re: AlphaZero

2022-02-05 Thread Brent Meeker



On 2/5/2022 12:51 PM, Terren Suydam wrote:



On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:

On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam
 wrote:

>> Look at this code for a subprogram and make something that
does the same thing but is smaller or runs faster or both.
And that's not a toy problem, that's a real problem.


> "does the same thing" is problematic for a couple reasons.
The first is that AlphaCode doesn't know how to read code,


Huh? We already know AlphaCode can write code, how can something
know how to write but not read? It's easier to read a novel than
write a novel.


This is one case where your intuitions fail. I dug a little deeper 
into how AlphaCode works. It generates millions of candidate solutions 
using a model trained on github code. It then filters out 99% of those 
candidate solutions by running them against test cases provided in the 
problem description and removing the ones that fail. It then uses a 
different technique to whittle down the candidate solutions from 
several thousand to just ten. Nobody, neither the AI nor the humans 
running AlphaCode, know if the 10 solutions picked are correct.


Just like we don't know which interpretation of quantum mechanics is 
correct.  But we use it anyway.


AlphaCode is not capable of reading code. It's a clever version of 
monkeys typing on typewriters until they bang out a Shakespeare play. 
Still counts as AI, but cannot be said to understand code.


What does it mean "to read code"?  It can execute code from github, 
apparently, so it must read code well enough to execute it.  What more 
does it need to read?  You say it cannot be said to understand code.  
Can you specify what would show it understands the code?


I think you mean it tells you a story about how this part does that and 
this other part does something else, etc.  But that's just catering to 
your weak human brain that can't just "see" that the code solves the 
problem.  The problem is that there is no species-independent meaning of 
"understand" except "make it work". AlphaCode doesn't understand code 
like you do, because it doesn't think like you do and doesn't have the 
context you do.


Brent



Re: AlphaZero

2022-02-05 Thread John Clark
On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam 
wrote:

* > I dug a little deeper into how AlphaCode works. It generates millions
> of candidate solutions using a model trained on github code. It then
> filters out 99% of those candidate solutions by running them against test
> cases provided in the problem description and removing the ones that fail.
> It then uses a different technique to whittle down the candidate solutions
> from several thousand to just ten. *[...] AlphaCode is not capable of
> reading code.
>

How on earth can it filter out 99% of the code because it is bad code if it
cannot read code? Closer to home, how could somebody on this list tell the
difference between a post they like and a post they didn't like if they
couldn't read English?


> * > Nobody, neither the AI nor the humans running AlphaCode, know if the
> 10 solutions picked are correct.*
>

As Alan Turing said  "*If a machine is expected to be infallible, it cannot
also be intelligent*."

> It's a clever version of monkeys typing on typewriters until they bang
> out a Shakespeare play. Still counts as AI,
>

  A clever version indeed!! In fact I would say that William Shakespeare
himself was such a version.

*> Still counts as AI, but cannot be said to understand code.*


I am a bit confused by your use of one word, you seem to be giving it a
very unconventional meaning.  If you, being a human, "understand" code but
the code you write is inferior to the code that an AI writes that doesn't
"understand" code then I fail to see why any human or any machine would
want to have an "understanding" of anything.

>> Just adding more input variables would be less complex than figuring out
>> how to make a program smaller and faster.
>>
>
> *> Think about it this way. There's diminishing returns on the strategy to
> make the program smaller and faster, but potentially unlimited returns on
> being able to respond to ever greater complexity in the problem
> description.*
>

You're talking about what would be more useful, I was talking about what
would be more complex. In general finding the smallest and fastest program
that can accomplish a given task is infinitely complex, that is to say in
general it's impossible to find the smallest program and prove it's the
smallest program.  Code optimization is very far from a trivial problem.

John K Clark    See what's on my new list at Extropolis



Re: AlphaZero

2022-02-05 Thread Quentin Anciaux
The only thing I hope AI will achieve is to be less condescending... if it
achieves true understanding, I hope it will be humble... and given how much
John Clark dislikes religions and God, the singularity will be God...

Quentin

On Sat, Feb 5, 2022 at 8:51 PM Terren Suydam wrote:

>
>
> On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:
>
>> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
>> wrote:
>>
>>>> Look at this code for a subprogram and make something that does the
>>>> same thing but is smaller or runs faster or both. And that's not a toy
>>>> problem, that's a real problem.

>>>
>>> > "does the same thing" is problematic for a couple reasons. The first
>>> is that AlphaCode doesn't know how to read code,
>>>
>>
>> Huh? We already know AlphaCode can write code, how can something know
>> how to write but not read? It's easier to read a novel than write a novel.
>>
>
> This is one case where your intuitions fail. I dug a little deeper into
> how AlphaCode works. It generates millions of candidate solutions using a
> model trained on github code. It then filters out 99% of those candidate
> solutions by running them against test cases provided in the problem
> description and removing the ones that fail. It then uses a different
> technique to whittle down the candidate solutions from several thousand to
> just ten. Nobody, neither the AI nor the humans running AlphaCode, know if
> the 10 solutions picked are correct.
>
> AlphaCode is not capable of reading code. It's a clever version of monkeys
> typing on typewriters until they bang out a Shakespeare play. Still counts
> as AI, but cannot be said to understand code.
>
>
>>
>>> *> The other problem is that with that problem description, it won't
>>> evolve except in the very narrow sense of improving its efficiency.*
>>>
>>
>> It seems to me the ability to write code that was smaller and faster than
>> anybody else is not "very narrow", a human could make a very good living
>> indeed from that talent.  And if I was the guy that signed his enormous
>> paycheck and somebody offered me a program that would do the same thing he
>> did I'd jump at it.
>>
>
> This actually already exists in the form of optimizing compilers - which
> are the programs that translate human-readable code like Java into assembly
> language that microprocessors use to manipulate data. Optimizing compilers
> can make human code more efficient. But these gains are only available in
> very well-understood and limited ways. To do what you're suggesting
> requires machine intelligence capable of understanding things in a much
> broader context.
>
>
>>
>>
>>> *> The kind of problem description that might actually lead to a
>>> singularity is something like "Look at this code and make something that
>>> can solve ever more complex problem descriptions". But my hunch there is
>>> that that problem description is too complex for it to recursively
>>> self-improve towards.*
>>>
>>
>> Just adding more input variables would be less complex than figuring out
>> how to make a program smaller and faster.
>>
>
> Think about it this way. There's diminishing returns on the strategy to
> make the program smaller and faster, but potentially unlimited returns on
> being able to respond to ever greater complexity in the problem
> description.
>
>
>>
>>>> I think if Steven Spielberg's movie had been called AGI instead of AI
>>>> some people today would no longer like the acronym AGI because too many
>>>> people would know exactly what it means and thus it would lack that certain
>>>> aura of erudition and mystery that they crave. Everybody knows what AI
>>>> means, but only a small select cognoscenti know the meaning of AGI. A
>>>> classic case of jargon creep.

>>>
>>> >Do you really expect a discipline as technical as AI to not use
>>> jargon?
>>>
>>
>> When totally new concepts come up, as they do occasionally in science,
>> jargon is necessary because there is no previously existing word or short
>> phrase that describes it, but that is not the primary generator of
>> jargon and is not in this case, because a very short word that describes
>> the idea already exists and everybody already knows what AI means, but
>> very few know that AGI means the same thing. And some see that as AGI's
>> great virtue: it's mysterious and sounds brainy.
>>
>>
>>> *> You use physics jargon all the time.*
>>>
>>
>> I do try to keep that to a minimum, perhaps I should try harder.
>>
>
> I don't hold it against you, and I certainly don't think you're trying to
> cultivate an aura of erudition and mystery when you do. I'm not sure why
> you seem to have an axe to grind about the use of AGI, but it is a useful
> distinction to make. It's clear we have AI today. And it's equally clear we
> do not have AGI.
>
> Terren
>
>
>>
>> John K Clark    See what's on my new list at Extropolis
>> 

Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:

> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
> wrote:
>
> >> Look at this code for a subprogram and make something that does the
>>> same thing but is smaller or runs faster or both. And that's not a toy
>>> problem, that's a real problem.
>>>
>>
>> > "does the same thing" is problematic for a couple reasons. The first
>> is that AlphaCode doesn't know how to read code,
>>
>
> Huh? We already know AlphaCode can write code, how can something know how
> to write but not read? It's easier to read a novel than write a novel.
>

This is one case where your intuitions fail. I dug a little deeper into how
AlphaCode works. It generates millions of candidate solutions using a model
trained on github code. It then filters out 99% of those candidate
solutions by running them against test cases provided in the problem
description and removing the ones that fail. It then uses a different
technique to whittle down the candidate solutions from several thousand to
just ten. Nobody, neither the AI nor the humans running AlphaCode, know if
the 10 solutions picked are correct.
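
As a self-contained toy version of that pipeline: sample many candidate
programs, filter with the example tests, cluster the survivors by
behavior, submit at most ten. The random-expression "model" below is
purely illustrative; none of this is DeepMind's actual system:

```python
import random
from collections import defaultdict

EXAMPLE_TESTS = [(2, 4), (3, 9), (5, 25)]            # (input, expected output)
GUESSES = ["x*x", "x+x", "x**3", "2*x+1", "x*x+0"]   # toy candidate space

def sample_program():
    return random.choice(GUESSES)                    # stand-in for the model

def passes_tests(src):
    return all(eval(src, {"x": i}) == out for i, out in EXAMPLE_TESTS)

candidates = [sample_program() for _ in range(100_000)]
survivors = [c for c in candidates if passes_tests(c)]      # filtering step

clusters = defaultdict(list)                         # whittling step:
for c in survivors:                                  # group behavioral twins
    clusters[tuple(eval(c, {"x": i}) for i in range(10))].append(c)

submissions = [group[0] for group in clusters.values()][:10]
print(submissions)   # one submission per distinct behavior, e.g. ['x*x']
```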

AlphaCode is not capable of reading code. It's a clever version of monkeys
typing on typewriters until they bang out a Shakespeare play. Still counts
as AI, but cannot be said to understand code.


>
>> *> The other problem is that with that problem description, it won't
>> evolve except in the very narrow sense of improving its efficiency.*
>>
>
> It seems to me the ability to write code that was smaller and faster than
> anybody else is not "very narrow", a human could make a very good living
> indeed from that talent.  And if I was the guy that signed his enormous
> paycheck and somebody offered me a program that would do the same thing he
> did I'd jump at it.
>

This actually already exists in the form of optimizing compilers - which
are the programs that translate human-readable code like Java into assembly
language that microprocessors use to manipulate data. Optimizing compilers
can make human code more efficient. But these gains are only available in
very well-understood and limited ways. To do what you're suggesting
requires machine intelligence capable of understanding things in a much
broader context.
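
For one tiny, well-understood instance of what such a compiler does,
CPython itself folds constant expressions at compile time:

```python
# Constant folding, one of the simplest compiler optimizations: CPython
# evaluates 60 * 60 * 24 once when the function is compiled, so the
# bytecode just loads the precomputed constant instead of multiplying.
import dis

def seconds_per_day():
    return 60 * 60 * 24

dis.dis(seconds_per_day)   # the disassembly shows the folded constant 86400
```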


>
>
>> *> The kind of problem description that might actually lead to a
>> singularity is something like "Look at this code and make something that
>> can solve ever more complex problem descriptions". But my hunch there is
>> that that problem description is too complex for it to recursively
>> self-improve towards.*
>>
>
> Just adding more input variables would be less complex than figuring out
> how to make a program smaller and faster.
>

Think about it this way. There's diminishing returns on the strategy to
make the program smaller and faster, but potentially unlimited returns on
being able to respond to ever greater complexity in the problem
description.


>
>>> I think if Steven Spielberg's movie had been called AGI instead of AI
>>> some people today would no longer like the acronym AGI because too many
>>> people would know exactly what it means and thus it would lack that certain
>>> aura of erudition and mystery that they crave. Everybody knows what AI
>>> means, but only a small select cognoscenti know the meaning of AGI. A
>>> classic case of jargon creep.
>>>
>>
>> >Do you really expect a discipline as technical as AI to not use jargon?
>>
>
> When totally new concepts come up, as they do occasionally in science,
> jargon is necessary because there is no previously existing word or short
> phrase that describes it, but that is not the primary generator of jargon
> and is not in this case, because a very short word that describes the
> idea already exists and everybody already knows what AI means, but very
> few know that AGI means the same thing. And some see that as AGI's great
> virtue: it's mysterious and sounds brainy.
>
>
>> *> You use physics jargon all the time.*
>>
>
> I do try to keep that to a minimum, perhaps I should try harder.
>

I don't hold it against you, and I certainly don't think you're trying to
cultivate an aura of erudition and mystery when you do. I'm not sure why
you seem to have an axe to grind about the use of AGI, but it is a useful
distinction to make. It's clear we have AI today. And it's equally clear we
do not have AGI.

Terren


>
> John K Clark    See what's on my new list at Extropolis
> 

RE: Understanding climate [was Re: AlphaZero]

2022-02-05 Thread spudboy100 via Everything List

Good point here from Tomasz, because models are models and they're only as good 
as their input, and sometimes not even as good as that. Simply for safety's 
sake, though, I would presume that some kind of massive climate inundation is 
possible, some kind of cycle of drought and storm, simply because the Earth 
produces this all on its own, and pumping carbon into the atmosphere ain't 
doing anything any good. Simply for safety's sake I would go with the 
presumption that climate inundation or drought is entirely possible, perhaps 
shockingly so! Having said that, I think we should go solar and wind big time 
as a precaution, along with batteries of course, because damn it, we really 
need those two sources. We have the development of perovskite solar cells and 
improved batteries, so we should put these into application.
On Friday, February 4, 2022 Tomasz Rola  
wrote:
On Fri, Feb 04, 2022 at 10:59:44AM -0800, Brent Meeker wrote:
> Well consider the example of climate. Nobody can grasp all factors
> in climate and their interactions. But we can model all of them in
> a global climate simulation.

Actually, as far as I can tell, we cannot. Or, you mean,
theoretically, sure, but in practice, I would say no, such a model has
not been made yet.

If such a model existed, it would give an answer about how/why the last
glaciation started and how/why it ended. But I understand such
answers have not been given yet. Only speculations.

So, it seems to me, whatever model we have cannot predict past
events. I would not bet any money on anything such a model says about
other things.

> So climatologists+simulations "grasp the domain" even though humans
> can't.

Who are "climatologists"?

> Now suppose we want to extend these predictive climate models to
> include predictions about what humans will do in response. We don't
> know how humans will behave except in some general statistical
> terms.

And yet I can give you a good prediction. We tend to do as little as
possible, for as long as possible. I predict this is not going to
change anytime soon. And not later either.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                **
** Tomasz Rola          mailto:tomasz_r...@bigfoot.com            **



Re: AlphaZero

2022-02-05 Thread spudboy100 via Everything List

We are a new organism: machine intelligence combined with human beings, 
benefiting both subspecies. As a new species, just consider us linked by 
Wi-Fi, brain to brain, so to speak. For practical reasons we just divvy up 
the entire solar output of the solar system: they get the electricity they 
need, we get the electricity we need. It's win-win, like the Doublemint 
twins, two mints in one.

On Friday, February 4, 2022 Lawrence Crowell  
wrote:


All of this is coming like a tidal wave. A couple of years ago an AI was given 
data on the appearance of the sky over several decades. In particular, the 
positions of the planets were given. The system within a few days output not 
only the Copernican model but Kepler's laws. The time is coming, in a couple 
of decades, where if some human or humans do not figure out quantum 
gravitation, some AI system will. Many other things are being turned over to 
AI and robots. It may not be too long before humans are obsolete.


LC


On Thursday, February 3, 2022 at 3:27:18 PM UTC-6 johnk...@gmail.com wrote:

On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam  wrote:




 > the code generated by the AI still needs to be understandable


Once AI starts to get really smart that's never going to happen. Even today 
nobody knows how a neural network like AlphaZero works or understands the 
reasoning behind it making a particular move, but that doesn't matter, because 
understandable or not AlphaZero can still play chess better than anybody 
alive, and if humans don't understand how that can be, then that's just too 
bad for them.


> The hard part is understanding the problem your code is supposed to solve, 
> understanding the tradeoffs between different approaches, and being able to 
> negotiate with stakeholders about what the best approach is.


You seem to be assuming that the "stakeholders", those that intend to use the 
code once it is completed, will always be humans, and I think that is an 
entirely unwarranted assumption. The stakeholders will certainly have brains, 
but they may be hard and dry and not wet and squishy. 


> It'll be a very long time before we're handing that domain off to an AI.


I think you're whistling past the graveyard.  


John K Clark    See what's on my new list at  Extropolis


Re: AlphaZero

2022-02-05 Thread John Clark
On Fri, Feb 4, 2022 at 7:34 PM Tomasz Rola  wrote:

>
> *from the point of view of an Earth-based observer, planets are just more
> points of light in the night sky, only moving strangely. Such an observer has
> no way to say if, for example, Mars is really different, or even a planet,
> or if it is maybe some star which looks for a place to stick and stop
> forever. Only with a telescope can one see that Mars is a disc, and the disc
> changes with time, but periodically, etc etc. But an objective observer has no
> way to say that Mars is a planet based merely on naked-eye observations,*


That is not true; even the ancient Egyptians knew there was something
special about Mercury, Venus, Mars, Jupiter and Saturn. Johannes Kepler
didn't know the planets could be resolved into discs and he didn't need to
know that to derive his 3 laws of planetary motion, and a computer wouldn't
need to know that either. Kepler didn't have a telescope but he did have
Tycho Brahe's excellent naked-eye measurements of the movements of the
planets over many decades; especially detailed were those of Mars. Kepler
knew that the planets moved in a complicated path relative to the fixed
stars and then, after a fixed amount of time that was different for each
planet, the path repeated. Kepler spent years developing a very complicated
Earth-centered model with lots of epicycles that fit Tycho's data pretty
well, and a lesser scientist might've been satisfied with that, but Kepler
was not. Kepler knew Tycho's data was very good and the discrepancy between
theory and observation was just too large to ignore, so reluctantly he
junked years of work and went back to square one.  After more years of work
he found a sun-centered model and his three laws, which fit Tycho's data
almost exactly and were much simpler.
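
Kepler's third law is easy to check against modern values; in years and
AU, the ratio T^2/a^3 comes out essentially 1 for every planet. A quick
illustrative check:

```python
# Kepler's third law, T^2 = a^3 (T in years, a in AU), checked against
# modern values for the naked-eye planets plus Earth.
planets = {
    "Mercury": (0.387, 0.241),   # (semi-major axis a in AU, period T in years)
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}
for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")   # ~1.000 for each planet
```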

John K Clark    See what's on my new list at Extropolis



Re: AlphaZero

2022-02-04 Thread Tomasz Rola
On Fri, Feb 04, 2022 at 01:36:59PM -0800, Lawrence Crowell wrote:
> All of this is coming like a tidal wave. A couple of years ago an AI was 
> given data on the appearance of the sky over several decades. In particular 
> to positions of planets were given. The system within a few days output not 
> only the Copernican model but Kepler's laws.

This is unclear. What exactly did the "ai" do? Had it gone through a
solution space of all models explaining celestial bodies, complete
with turtles standing on elephants and shaking whenever there is an
earthquake in Tokyo?

Do you have a link to some description of what the experiment was
exactly?

Because from the point of view of an Earth-based observer, planets are
just more points of light in the night sky, only moving strangely. Such
an observer has no way to say if, for example, Mars is really
different, or even a planet, or if it is maybe some star which looks
for a place to stick and stop forever. Only with a telescope can one see
that Mars is a disc, and the disc changes with time, but periodically, etc
etc.

But an objective observer has no way to say that Mars is a planet based
merely on naked-eye observations, I am afraid. It took a lot of
astrophotography to get some understanding about our neighbourhood.

Thus I suspect that "ai" had been fed assumptions about what it was
expected to find.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.  **
** As the answer, master did "rm -rif" on the programmer's home**
** directory. And then the C programmer became enlightened...  **
** **
** Tomasz Rola  mailto:tomasz_r...@bigfoot.com **



Understanding climate [was Re: AlphaZero]

2022-02-04 Thread Tomasz Rola
On Fri, Feb 04, 2022 at 10:59:44AM -0800, Brent Meeker wrote:
> Well consider the example of climate. Nobody can grasp all factors
> in climate and their interactions. But we can model all of them in
> a global climate simulation.

Actually, as far as I can tell, we cannot. Or, you mean,
theoretically, sure, but in practice, I would say no, such a model has
not been made yet.

If such a model existed, it would give an answer about how/why the last
glaciation started and how/why it ended. But I understand such
answers have not been given yet. Only speculations.

So, it seems to me, whatever model we have cannot predict past
events. I would not bet any money on anything such a model says about
other things.

> So climatologists+simulations "grasp the domain" even though humans
> can't.

Who are "climatologists"?

> Now suppose we want to extend these predictive climate models to
> include predictions about what humans will do in response. We don't
> know how humans will behave except in some general statistical
> terms.

And yet I can give you a good prediction. We tend to do as little as
possible, for as long as possible. I predict this is not going to
change anytime soon. And not later either.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.  **
** As the answer, master did "rm -rif" on the programmer's home**
** directory. And then the C programmer became enlightened...  **
** **
** Tomasz Rola  mailto:tomasz_r...@bigfoot.com **



Re: AlphaZero

2022-02-04 Thread John Clark
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
wrote:

>> Look at this code for a subprogram and make something that does the same
>> thing but is smaller or runs faster or both. And that's not a toy
>> problem, that's a real problem.
>>
>
> > "does the same thing" is problematic for a couple reasons. The first is
> that AlphaCode doesn't know how to read code,
>

Huh? We already know AlphaCode can write code, how can something know how
to write but not read? It's easier to read a novel than write a novel.


> *> The other problem is that with that problem description, it won't
> evolve except in the very narrow sense of improving its efficiency.*
>

It seems to me the ability to write code that is smaller and faster than
anybody else's is not "very narrow"; a human could make a very good living
indeed from that talent. And if I were the one signing such a programmer's
enormous paycheck and somebody offered me a program that would do the same
thing he did, I'd jump at it.


> *> The kind of problem description that might actually lead to a
> singularity is something like "Look at this code and make something that
> can solve ever more complex problem descriptions". But my hunch there is
> that that problem description is too complex for it to recursively
> self-improve towards.*
>

Just adding more input variables would be less complex than figuring out
how to make a program smaller and faster.

>> I think if Steven Spielberg's movie had been called AGI instead of AI
>> some people today would no longer like the acronym AGI because too many
>> people would know exactly what it means and thus would lack that certain
>> aura of erudition and mystery that they crave . Everybody knows what AI
>> means, but only a small select cognoscenti know the meaning of AGI. A
>> Classic case of jargon creep.
>>
>
> >Do you really expect a discipline as technical as AI to not use jargon?
>

When totally new concepts come up, as they do occasionally in science,
jargon is necessary because there is no previously existing word or short
phrase that describes them. But that is not the primary generator of
jargon, and it is not the generator in this case, because a very short
word that describes the idea already exists: everybody already knows what
AI means, but very few know that AGI means the same thing. And some see
that as AGI's great virtue, it's mysterious and sounds brainy.


> *> You use physics jargon all the time.*
>

I do try to keep that to a minimum, perhaps I should try harder.

John K Clark    See what's on my new list at  Extropolis




Re: AlphaZero

2022-02-04 Thread Terren Suydam
On Fri, Feb 4, 2022 at 4:47 PM John Clark  wrote:

> On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam 
> wrote:
>
> >> I'll make you a deal, I'll tell you "what problem it is trying to
>>> solve" if you first tell me how long a piece of string is. And if you don't
>>> wanna do that just rephrase the question more clearly.
>>>
>>
>> *> lol ok. The worry you're articulating is that AlphaCode will turn its
>> coding abilities on itself and improve its own code, and that this could
>> lead to the singularity. First, it must be said that AlphaCode is a tool
>> with no agency of its own.*
>>
>
> We're talking about fundamentals here and in that context I don't know
> what you mean by "agency". Any information processing mechanism can be
> reduced logically to a Turing Machine, and some machines will stop and
> produce an answer and some will never stop, and some Turing machines will
> produce a correct answer and some will not, and in general there's no way
> to know what a Turing machine is going to do, you just have to watch it and
> see and you might be waiting forever for it to stop and produce an answer.
>
>
> *> Left to its own devices, it will do... nothing.*
>
>
> There's no way you could know that. Even if you knew the exact state a
> huge neural net like AlphaZero was in, which is very unlikely, there is no
> way you could predict which state it would evolve into unless you could
> play chess as well as it can, which you cannot. In general the only way to
> know what a large neural network (which can always be logically reduced to
> a Turing Machine) will do is to just watch it and see, there is no
> shortcut. For a long time it might look like it's doing nothing and then
> suddenly start doing something, and that something might be something you
> don't like.
>
>
Have you ever written a program?  Because you talk like someone who gets
theoretical computation concepts but has not actually ever coded anything.


>
> *> But let's say the DeepMind team wanted to improve AlphaCode by applying
>> AlphaCode to itself. My question to you is, what is the "toy problem" they
>> would feed to AlphaCode? How do you define that problem? *
>>
>
> Look at this code for a subprogram and make something that does the same
> thing but is smaller or runs faster or both. And that's not a toy
> problem, that's a real problem.
>

"does the same thing" is problematic for a couple reasons. The first is
that AlphaCode doesn't know how to read code, but let's say that it could.
The other problem is that with that problem description, it won't evolve
except in the very narrow sense of improving its efficiency. The kind of
problem description that might actually lead to a singularity is something
like "Look at this code and make something that can solve ever more complex
problem descriptions". But my hunch there is that *that* problem
description is too complex for it to recursively self-improve towards.


>  >> an AI could have a detailed intellectual conversation with 1000
>>> people at the same time, or a million, or a billion.
>>>
>>
>> *> Sure, but those interactions still take time, perhaps days or even
>> months. And you're assuming that many people will want to have
>> conversations with an AI.*
>>
>
> Yes, I am assuming that, and I think it's a very reasonable assumption. If
> an intelligent AI thinks she could learn important stuff from talking to
> people it can simply turn up its charm variable so that people want to talk
> to her (or him). I suggest you take a look at the movie "Her" which covers
> the exact theme I'm talking about, a charismatic and brilliant AI having
> interesting and intimate conversations with thousands of people at exactly
> the same time. I think it's one of the best science-fiction movies ever
> made even though some say it has a depressing ending. I disagree, I didn't
> find it depressing at all.
>
> Her <https://en.wikipedia.org/wiki/Her_(film)>
>
> *>Have you ever tried listening to a 6 year old try and tell a story? *
>>
>
> Have you ever listened to a genius tell a story?
>
>
You're already at the singularity if it can be charming and brilliant to
millions of people simultaneously. I thought we were talking about getting
to the singularity.


>
>
>> >> If humans can do it then an AI can do it too because knowledge is
>>> just highly computed information, and wisdom is just highly computed
>>> knowledge.
>>>
>>
>> *> Sure, I can hand-wave th

Re: AlphaZero

2022-02-04 Thread Terren Suydam
Just to keep this focused on programmers losing their jobs - which is how
this started - by grasping the problem domain, I just mean that an AI
should know how to model and operate in that domain such that it can
formulate and act on plans that give it the potential to outperform
humans. My hunch is that AIs won't outperform humans at this task until
they grasp a much larger problem domain than people generally assume.

On Fri, Feb 4, 2022 at 1:59 PM Brent Meeker  wrote:

> Well consider the example of climate.  Nobody can grasp all factors in
> climate and their interactions.  But we can model all of them in a global
> climate simulation.  So climatologists+simulations "grasp the domain"  even
> though humans can't.  Now suppose we want to extend these predictive
> climate models to include predictions about what humans will do in
> response.  We don't know how humans will behave except in some general
> statistical terms.  We don't know whether they will build nuclear
> powerplants or not.  Whether they will go to war over immigration or not.
> An AI might be able to do that, but we certainly can't.   But if it did,
> would we believe it?  It can't explain it to us.
>
> Brent
>
> On 2/4/2022 8:55 AM, Terren Suydam wrote:
>
> I think for programmers to lose their jobs to AIs, AIs will need to grasp
> the problem domain, and I'm suggesting that's far too advanced for today's
> AI, and I think it's a long way off, because the problem domain for
> programmers entails knowing a lot about how humans behave, what they're
> good at, and bad at, what they value, and so on, not to mention the
> domain-specific knowledge that is necessary to understand the problem in
> the first place.
>
> On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker  wrote:
>
>> So AI's won't need to "grasp the problem domain" to be effective.  Which
>> may well be true.  What we call "grasping the problem" domain is being able
>> to tell simple stories about it that other people can grasp and understand,
>> say by reading a book.  An AI may "grasp the problem" in some much more
>> comprehensive way that is too much for a human to comprehend and the human
>> will say the AI is just calculating and doesn't understand the problem
>> because it can't explain it to humans.
>>
>> That's sort of what we do when we write simulations of complex things.
>> They are too complex for us to see what will happen and so we use the
>> computer to tell us what will happen.  The computer can't "explain the
>> result" to us and we can't grasp the whole domain of the computation, but
>> we can grasp the result.
>>
>> Brent
>>
>> On 2/3/2022 4:29 PM, Terren Suydam wrote:
>>
>>
>> Being able to grasp the problem domain is not the same thing as being
>> effective in it.
>>
>> On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker 
>> wrote:
>>
>>>
>>> I think "able to grasp the problem domain we're talking about" is giving
>>> us way to much credit.  Every study of stock traders I've seen says that
>>> they do no better than some simple rules of thumb like index funds.
>>>
>>> Brent
>>>
>>>


Re: AlphaZero

2022-02-04 Thread John Clark
On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam 
wrote:

>> I'll make you a deal, I'll tell you "what problem it is trying to solve"
>> if you first tell me how long a piece of string is. And if you don't wanna
>> do that just rephrase the question more clearly.
>>
>
> *> lol ok. The worry you're articulating is that AlphaCode will turn its
> coding abilities on itself and improve its own code, and that this could
> lead to the singularity. First, it must be said that AlphaCode is a tool
> with no agency of its own.*
>

We're talking about fundamentals here and in that context I don't know what
you mean by "agency". Any information processing mechanism can be logically
reduced to a Turing Machine. Some machines will stop and produce an answer
and some will never stop; some Turing machines will produce a correct
answer and some will not; and in general there's no way to know in advance
what a Turing machine is going to do. You just have to watch it and see,
and you might be waiting forever for it to stop and produce an answer.
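
A minimal sketch of the diagonal argument behind that claim, in Python;
halts() here is a hypothetical oracle, not a function anyone can actually
write:

  def halts(src, data):
      """Hypothetical oracle: True iff running source src on data halts."""
      raise NotImplementedError  # assumed only to derive the contradiction

  def paradox(src):
      # Do the opposite of whatever the oracle predicts about us.
      if halts(src, src):
          while True:   # oracle said "halts", so loop forever
              pass
      return            # oracle said "loops forever", so halt at once

  # Feeding paradox its own source contradicts any answer halts() could
  # give, so no total, correct halts() can exist: in general you really
  # do have to run the machine and watch.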

*> Left to its own devices, it will do... nothing.*


There's no way you could know that. Even if you knew the exact state a huge
neural net like AlphaZero was in, which is very unlikely, there is no way
you could predict which state it would evolve into unless you could play
chess as well as it can, which you cannot. In general the only way to know
what a large neural network (which can always be logically reduced to a
Turing Machine) will do is to just watch it and see, there is no shortcut.
For a long time it might look like it's doing nothing and then suddenly
start doing something, and that something might be something you don't
like.


*> But let's say the DeepMind team wanted to improve AlphaCode by applying
> AlphaCode to itself. My question to you is, what is the "toy problem" they
> would feed to AlphaCode? How do you define that problem? *
>

Look at this code for a subprogram and make something that does the same
thing but is smaller or runs faster or both. And that's not a toy problem,
that's a real problem.
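
As a toy illustration of that task, a sketch in Python: a brute-force
search for the shortest expression that matches a reference subprogram on
test inputs. Everything here is invented for illustration, and agreement
on finitely many tests is only evidence of equivalence, not proof:

  def reference(x):                      # the subprogram to be shrunk
      return x + x + x + x

  # A tiny hand-made candidate space; a real system would enumerate one.
  CANDIDATES = ["x", "x + x", "2 * x", "3 * x", "4 * x", "x * x"]

  def matches(expr, tests=range(-50, 51)):
      f = eval("lambda x: " + expr)
      return all(f(t) == reference(t) for t in tests)

  def shrink():
      # Try the shortest candidates first: "smaller" is the objective.
      for expr in sorted(CANDIDATES, key=len):
          if matches(expr):
              return expr

  print(shrink())                        # prints "4 * x"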

 >> an AI could have a detailed intellectual conversation with 1000 people
>> at the same time, or a million, or a billion.
>>
>
> *> Sure, but those interactions still take time, perhaps days or even
> months. And you're assuming that many people will want to have
> conversations with an AI.*
>

Yes, I am assuming that, and I think it's a very reasonable assumption. If
an intelligent AI thinks she could learn important stuff from talking to
people it can simply turn up its charm variable so that people want to talk
to her (or him). I suggest you take a look at the movie "Her" which covers
the exact theme I'm talking about, a charismatic and brilliant AI having
interesting and intimate conversations with thousands of people at exactly
the same time. I think it's one of the best science-fiction movies ever
made even though some say it has a depressing ending. I disagree, I didn't
find it depressing at all.

Her <https://en.wikipedia.org/wiki/Her_(film)>

*>Have you ever tried listening to a 6 year old try and tell a story? *
>

Have you ever listened to a genius tell a story?



> >> If humans can do it then an AI can do it too because knowledge is just
>> highly computed information, and wisdom is just highly computed knowledge.
>>
>
> *> Sure, I can hand-wave things away too. "Highly computed" means what
> exactly?*
>

It means exactly that a high number of FLOPS is necessary but not
sufficient.

> *I can reverse every word in this post. If I did that a million times in
> a row it would be "highly computed" but it wouldn't result in knowledge,
> much less wisdom.*
>

Obviously the computation must be done intelligently. I've had debates of
this sort before and at this point it is traditional for my opponent to
demand that I define "intelligently", and I will be happy to do so if you
first define "define", and then define "define "define"" and then...


> *> And I'm not talking about mere information, *
>>>
>>
>> >> Mere information? Mere?!
>>
>
> > As opposed to knowledge, wisdom, the ability to model aspects of the
> world and simulate them, the ability to explain things, etc.
>

How do you expect to be able to do any of this without processing
information?!

>>You need AI, AGI is just loquacious technobabble used to make things
>> sound more inscrutable.
>>
>
> *> Doesn't seem all that loquacious to me. AGI just adds the word
> "general",*
>

I think if Steven Spielberg's movie had been called AGI instead

Re: AlphaZero

2022-02-04 Thread Lawrence Crowell
All of this is coming like a tidal wave. A couple of years ago an AI was 
given data on the appearance of the sky over several decades. In 
particular, the positions of the planets were given. The system within a 
few days output not only the Copernican model but Kepler's laws. The time 
is coming, in a couple of decades, when if some human or humans do not 
figure out quantum gravitation, some AI system will. Many other things are 
being turned over to AI and robots. It may not be too long before humans 
are obsolete.
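
For flavor, a minimal sketch in Python of the final step of such a
rediscovery, assuming heliocentric periods and distances have already been
extracted from the sky data (which, as a reply elsewhere in this thread
points out, is itself the hard part); Kepler's third law then falls out of
a log-log fit:

  import numpy as np

  # Orbital period T (years) and semi-major axis a (AU), six planets.
  data = {"Mercury": (0.241, 0.387), "Venus":   (0.615, 0.723),
          "Earth":   (1.000, 1.000), "Mars":    (1.881, 1.524),
          "Jupiter": (11.86, 5.203), "Saturn":  (29.46, 9.537)}

  T, a = np.array(list(data.values())).T

  # Fit log T = k log a + c; Kepler's third law says k = 3/2 exactly.
  k, c = np.polyfit(np.log(a), np.log(T), 1)
  print(round(k, 3))    # ~1.5, i.e. T squared proportional to a cubed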

LC

On Thursday, February 3, 2022 at 3:27:18 PM UTC-6 johnk...@gmail.com wrote:

> On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam  wrote:
>
>  > *the code generated by the AI still needs to be understandable*
>
>
> Once  AI starts to get to be really smart that's never going to happen, 
> even today nobody knows how a neural network like AlphaZero works or 
> understands the reasoning behind it making a particular move but that 
> doesn't matter because understandable or not  AlphaZero can still play 
> chess better than anybody alive, and if humans don't understand how that 
> can be than that's just too bad for them.
>
> > *The hard part is understanding the problem your code is supposed to 
>> solve, understanding the tradeoffs between different approaches, and being 
>> able to negotiate with stakeholders about what the best approach is.*
>
>
> You seem to be assuming that the "stakeholders", those that intend to use 
> the code once it is completed, will always be humans, and I think that is 
> an entirely unwarranted assumption. The stakeholders will certainly have 
> brains, but they may be hard and dry and not wet and squishy. 
>
> *> It'll be a very long time before we're handing that domain off to an 
>> AI.*
>
>
> I think you're whistling past the graveyard.  
>
> John K Clark    See what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
>



Re: AlphaZero

2022-02-04 Thread Brent Meeker
Well consider the example of climate.  Nobody can grasp all factors in 
climate and their interactions.  But we can model all of them in a 
global climate simulation.  So climatologists+simulations "grasp the 
domain"  even though humans can't.  Now suppose we want to extend these 
predictive climate models to include predictions about what humans will 
do in response.  We don't know how humans will behave except in some 
general statistical terms.  We don't know whether they will build 
nuclear powerplants or not.  Whether they will go to war over 
immigration or not.  An AI might be able to do that, but we certainly 
can't.   But if it did, would we believe it? It can't explain it to us.
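
For a sense of the bottom rung of that model hierarchy, a zero-dimensional
energy-balance sketch in Python; the constants are standard, the effective
emissivity is a crude stand-in for the greenhouse effect, and a real GCM
adds dynamics, oceans, ice, chemistry, and the coupling between them:

  SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
  S0 = 1361.0         # solar constant, W m^-2
  ALBEDO = 0.30       # fraction of sunlight reflected
  EPSILON = 0.61      # effective emissivity (greenhouse proxy)

  def equilibrium_temp(albedo=ALBEDO, emissivity=EPSILON):
      # Absorbed shortwave equals emitted longwave at equilibrium.
      absorbed = S0 * (1 - albedo) / 4
      return (absorbed / (emissivity * SIGMA)) ** 0.25

  print(round(equilibrium_temp(), 1))              # ~288 K, near observed
  print(round(equilibrium_temp(emissivity=1), 1))  # ~255 K, no greenhouse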


Brent

On 2/4/2022 8:55 AM, Terren Suydam wrote:
I think for programmers to lose their jobs to AIs, AIs will need to 
grasp the problem domain, and I'm suggesting that's far too advanced 
for today's AI, and I think it's a long way off, because the problem 
domain for programmers entails knowing a lot about how humans behave, 
what they're good at, and bad at, what they value, and so on, not to 
mention the domain-specific knowledge that is necessary to understand 
the problem in the first place.


On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker  wrote:

So AI's won't need to "grasp the problem domain" to be effective. 
Which may well be true.  What we call "grasping the problem"
domain is being able to tell simple stories about it that other
people can grasp and understand, say by reading a book.  An AI may
"grasp the problem" in some much more comprehensive way that is
too much for a human to comprehend and the human will say the AI
is just calculating and doesn't understand the problem because it
can't explain it to humans.

That's sort of what we do when we write simulations of complex
things.  They are too complex for us to see what will happen and
so we use the computer to tell us what will happen.  The computer
can't "explain the result" to us and we can't grasp the whole
domain of the computation, but we can grasp the result.

Brent

On 2/3/2022 4:29 PM, Terren Suydam wrote:


Being able to grasp the problem domain is not the same thing as
being effective in it.

On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker
 wrote:


I think "able to grasp the problem domain we're talking
about" is giving us way to much credit. Every study of stock
traders I've seen says that they do no better than some
simple rules of thumb like index funds.

Brent





Re: AlphaZero

2022-02-04 Thread Terren Suydam
>> *Regardless, to operate in the free-form world of humans, an AI needs to
>> be able to understand and react to a problem space that is constantly
>> changing. Changing rules (implicit and explicit), players, goals, dynamics,
>> etc.*
>>
>
> Well sure, but AIs have been able to do that for years, since the 1950's.
>

Care to give an example of AI in the 1950s that could do that?


>
> > *Is that possible to do without real understanding?*
>>
>
> No. If I can answer some questions and perform some tasks in a certain
> area then I could be confident in saying I have some "real understanding"
> in that area of knowledge, and if you can answer more questions and
> perform more tasks in that area than I can then I would say you have an
> even greater understanding than I do, and I don't care if your brain is
> wet and squishy or dry and hard.
>
>
OK.


> >> As I've mentioned before, the entire human genome is only 750
>>> megabytes, the new Mac operating system is about 20 times that size, and
>>> the genome contains instructions to build an entire human body not just a
>>> brain, and the genome is loaded with massive redundancy; so whatever the
>>> algorithm is that the brain uses to extract information from the
>>> environment there is simply no way it can be all that complicated.
>>>
>>
>> >
>> *The thing that makes intelligence intelligence is not simply extracting
>> information from the environment.*
>>
>
> How do you figure that? If human intelligence doesn't come from the 750
> MB in our genome and it doesn't come from the environment then where does
> this secret sauce come from? From an invisible man in the sky? If so then
> why does He only give it to brains that are wet and squishy.
>

Not sure how you got that from what I said. The point I'm making is that
intelligence, operationally speaking, is about far more than simply
extracting information from the environment. It's about making models of
the world that can be used for prediction, explanation, making plans,
coordinating, etc. Information extraction is necessary but not sufficient
for intelligence.


> >> Machines move so fast that at breakfast the singularity could look to
>>> a human like it's a very long way off, but by lunchtime the singularity
>>> could be ancient history.
>>>
>>
>> *> Do you think the singularity can occur with an AI that doesn't have
>> real understanding?*
>>
>
> Of course not! I have no objection to the term "real understanding", I
> only object when the term is used in a silly way, such as when I accomplish
> something in a certain field it demonstrates "real understanding" but even
> though an AI can do things in that same field even better and faster than I
> can it demonstrates nothing but a mindless reflex because its brain is dry
> and hard and not wet and squishy.
>

I agree with that. Presumably we'd also agree that AlphaGo and AlphaZero
have real understanding of go & chess, respectively.  I'm not sure
Stockfish does though, because a brute-force computational approach
leveraging heuristics given to it by humans strikes me as devoid of
understanding. Stockfish is closer to a computational prosthetic for human
minds.
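
To make the contrast concrete, here is the flavor of hand-given heuristic
meant above, sketched in Python as a bare material count; classic engines
layered hundreds of such terms, and current Stockfish has in fact since
moved to a small learned network (NNUE):

  PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

  def material_eval(pieces):
      # pieces: one letter per piece on the board,
      # uppercase = White, lowercase = Black; positive favors White.
      score = 0
      for p in pieces:
          value = PIECE_VALUES.get(p.upper(), 0)
          score += value if p.isupper() else -value
      return score

  print(material_eval("QRRBBNNPPPPPPPPqrrbbnnpppppppp"))  # 0, even
  print(material_eval("QRRBBNNPPPPPPPPrrbbnnpppppppp"))   # +9, Black queen gone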

To the larger point, where I think we disagree is how easy it is for an AI
to achieve real understanding of the real world of human interaction.

Terren


>
>  John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
>



Re: AlphaZero

2022-02-04 Thread Terren Suydam
I think for programmers to lose their jobs to AIs, AIs will need to grasp
the problem domain, and I'm suggesting that's far too advanced for today's
AI, and I think it's a long way off, because the problem domain for
programmers entails knowing a lot about how humans behave, what they're
good at, and bad at, what they value, and so on, not to mention the
domain-specific knowledge that is necessary to understand the problem in
the first place.

On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker  wrote:

> So AI's won't need to "grasp the problem domain" to be effective.  Which
> may well be true.  What we call "grasping the problem" domain is being able
> to tell simple stories about it that other people can grasp and understand,
> say by reading a book.  An AI may "grasp the problem" in some much more
> comprehensive way that is too much for a human to comprehend and the human
> will say the AI is just calculating and doesn't understand the problem
> because it can't explain it to humans.
>
> That's sort of what we do when we write simulations of complex things.
> They are too complex for us to see what will happen and so we use the
> computer to tell us what will happen.  The computer can't "explain the
> result" to us and we can't grasp the whole domain of the computation, but
> we can grasp the result.
>
> Brent
>
> On 2/3/2022 4:29 PM, Terren Suydam wrote:
>
>
> Being able to grasp the problem domain is not the same thing as being
> effective in it.
>
> On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker  wrote:
>
>>
>> I think "able to grasp the problem domain we're talking about" is giving
>> us way to much credit.  Every study of stock traders I've seen says that
>> they do no better than some simple rules of thumb like index funds.
>>
>> Brent
>>
>>
>



Re: AlphaZero

2022-02-04 Thread John Clark
On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam 
wrote:

*>>> AlphaCode can potentially improve its code, but to what end?  What
>>> problem is it trying to solve?  How does it know?*
>>>
>>
>> >> I don't understand your questions
>>
>
> > *What part is confusing?*
>

I'll make you a deal, I'll tell you "what problem it is trying to solve" if
you first tell me how long a piece of string is. And if you don't wanna do
that just rephrase the question more clearly.


>> Yeah with a human that process takes many decades, but even today
>> computers can process many many times more information than a human can,
>> not surprising when you consider the fact that the signals inside a human
>> brain only travel about 100 miles an hour while the signals in a computer
>> travel close to the speed of light, 186,000 miles a second.
>>
>
> *> Much of our learning takes place via interactions with other humans,
> and those cannot be sped up.*
>

Sure it can be, an AI could have a detailed intellectual conversation with
1000 people at the same time, or a million, or a billion.

* > I'm not talking about facts and information,*
>

You may not be talking about facts and information but I sure as hell am,
because information is as close as you can get to the traditional idea of
the soul without entering the realm of religion or some other form
of idiocy.

> *> but about theories of mind, understanding human motivations, forming
> and testing hypotheses about how to get goals met by interacting with other
> humans, and other animals for that matter.*
>

If humans can do it then an AI can do it too because knowledge is just
highly computed information, and wisdom is just highly computed knowledge.

*> And I'm not talking about mere information, *
>

Mere information? Mere?!

*> but models that can be simulated in what-if scenarios, true
> understanding. You need real AGI.*
>

You need AI, AGI is just loquacious technobabble used to make things sound
more inscrutable.

*> We probably need to define what understanding/comprehension actually
> means if we're going to take this much further.*
>

I don't think that would help one bit because fundamentally definitions are
not important in language, examples are. After all, examples are where
lexicographers get the knowledge to write the definitions for their book.
So I'd say that "understanding" is the thing that Einstein had about
physics to a greater extent than anybody else of his generation.

*> Regardless, to operate in the free-form world of humans, an AI needs to
> be able to understand and react to a problem space that is constantly
> changing. Changing rules (implicit and explicit), players, goals, dynamics,
> etc.*
>

Well sure, but AIs have been able to do that for years, since the 1950's.

> *Is that possible to do without real understanding?*
>

No. If I can answer some questions and perform some tasks in a certain area
then I could be confident in saying I have some "real understanding" in
that area of knowledge, and if you can answer more questions and perform
more tasks in that area than I can then I would say you have an even
greater understanding than I do, and I don't care if your brain is wet and
squishy or dry and hard.

>> As I've mentioned before, the entire human genome is only 750 megabytes,
>> the new Mac operating system is about 20 times that size, and the genome
>> contains instructions to build an entire human body not just a brain, and
>> the genome is loaded with massive redundancy; so whatever the algorithm is
>> that the brain uses to extract information from the environment there is
>> simply no way it can be all that complicated.
>>
>
> >
> *The thing that makes intelligence intelligence is not simply extracting
> information from the environment.*
>

How do you figure that? If human intelligence doesn't come from the 750 MB
in our genome and it doesn't come from the environment then where does this
secret sauce come from? From an invisible man in the sky? If so then why
does He only give it to brains that are wet and squishy.


> >> Machines move so fast that at breakfast the singularity could look to
>> a human like it's a very long way off, but by lunchtime the singularity
>> could be ancient history.
>>
>
> *> Do you think the singularity can occur with an AI that doesn't have
> real understanding?*
>

Of course not! I have no objection to the term "real understanding", I only
object when the term is used in a silly way, such as when my accomplishing
something in a certain field demonstrates "real understanding" but an AI
doing things in that same field even better and faster than I can
demonstrates nothing but a mindless reflex, because its brain is dry and
hard and not wet and squishy.

 John K Clark    See what's on my new list at  Extropolis




Re: AlphaZero

2022-02-03 Thread Brent Meeker
So AI's won't need to "grasp the problem domain" to be effective. Which 
may well be true.  What we call "grasping the problem domain" is being 
able to tell simple stories about it that other people can grasp and 
understand, say by reading a book.  An AI may "grasp the problem" in 
some much more comprehensive way that is too much for a human to 
comprehend and the human will say the AI is just calculating and doesn't 
understand the problem because it can't explain it to humans.


That's sort of what we do when we write simulations of complex things.  
They are too complex for us to see what will happen and so we use the 
computer to tell us what will happen.  The computer can't "explain the 
result" to us and we can't grasp the whole domain of the computation, 
but we can grasp the result.


Brent

On 2/3/2022 4:29 PM, Terren Suydam wrote:


Being able to grasp the problem domain is not the same thing as being 
effective in it.


On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker  wrote:


I think "able to grasp the problem domain we're talking about" is
giving us way to much credit.  Every study of stock traders I've
seen says that they do no better than some simple rules of thumb
like index funds.

Brent


Re: AlphaZero

2022-02-03 Thread Terren Suydam
Being able to grasp the problem domain is not the same thing as being
effective in it.

On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker  wrote:

>
> I think "able to grasp the problem domain we're talking about" is giving
> us way to much credit.  Every study of stock traders I've seen says that
> they do no better than some simple rules of thumb like index funds.
>
> Brent
>


Re: AlphaZero

2022-02-03 Thread Terren Suydam
On Thu, Feb 3, 2022 at 6:22 PM John Clark  wrote:

> On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam 
> wrote:
>
>
>> *>AlphaCode can potentially improve its code, but to what end?  What
>> problem is it trying to solve?  How does it know?*
>>
>
> I don't understand your questions
>

What part is confusing?


>
> *> Imagine an AI tasked with making as much money in the stock market as
>> it can. Pretty clear signals for winning and losing (like chess). And
>> perhaps there's some easy wins there for an AI that can take advantage of
>> e.g. arbitrage (this exists already I believe) or other patterns that are
>> not exploitable by human brains. But it seems to me that actual
>> comprehension of the world of investment is key. Knowing how earnings
>> reports will affect the stock price of a company, relative to human
>> expectations about that earnings report.*
>>
>
> I agree, but if humans, or at least some extraordinary humans like Warren
> Buffett, can understand the stock market, or at least understand it well
> enough to do better at picking stocks than doing so randomly, then I see
> absolutely no reason why an AI couldn't do the same thing, and do it better.
>
> *> You have to import a universe of knowledge of the human domain to be
>> effective*
>>
>
> Yeah, when you're born you don't know anything but over time you gain
> knowledge from the environment.
>
> > *a universe we take for granted since we've acquired it over decades of
>> training.*
>>
>
> Yeah with a human that process takes many decades, but even today
> computers can process many many times more information than a human can,
> not surprising when you consider the fact that the signals inside a human
> brain only travel about 100 miles an hour while the signals in a computer
> travel close to the speed of light, 186,000 miles a second.
>

Much of our learning takes place via interactions with other humans, and
those cannot be sped up. I'm not talking about facts and information, but
about theories of mind, understanding human motivations, forming and
testing hypotheses about how to get goals met by interacting with other
humans, and other animals for that matter. To be effective in a human
world, an AI would similarly need to form theories of mind about humans.
Can this be done without interacting with humans?  I doubt it.


>
> *> And I'm not talking about mere information, but models that can be
>> simulated in what-if scenarios, true understanding. You need real AGI.*
>>
>
> I can't think of a more flagrant example of moving goal posts. I clearly
> remember when nearly everybody said it would require "real understanding"
> for a computer to play chess at the grandmaster level, never mind the
> superhuman level, but nobody says that anymore. Much more recently people
> said image recognition would require "real intelligence" but few say that
> anymore, now they say coding requires "real intelligence". "Real AGI" is
> a machine that can do what a computer cannot do, *YET*.
>

Not that you would know, but I never said that about chess (or go). I don't
think real understanding is *required* for image recognition, but it would
surely help. I'm not sure how AlphaCode works yet, so I can't comment on
whether there's some kind of primitive understanding going on there.  We
probably need to define what understanding/comprehension actually means if
we're going to take this much further.

Regardless, to operate in the free-form world of humans, an AI needs to be
able to understand and react to a problem space that is constantly
changing. Changing rules (implicit and explicit), players, goals, dynamics,
etc. Is that possible to do without real understanding?


> *>I think the problem of AGI is much harder than most assume.  *
>
>
> As I've mentioned before, the entire human genome is only 750 megabytes,
> the new Mac operating system is about 20 times that size, and the genome
> contains instructions to build an entire human body not just a brain, and
> the genome is loaded with massive redundancy; so whatever the algorithm is
> that the brain uses to extract information from the environment there is
> simply no way it can be all that complicated.
>

The thing that makes intelligence intelligence is not simply extracting
information from the environment.


>
>> *> To get to the point where machines are the stakeholders, we're already
>> past the singularity.*
>>
>
> Machines move so fast that at breakfast the singularity could look to a
> human like it's a very long way off, but by lunchtime the singularity could
> be ancient history.
>
>
Do you think the singularity can occur with an AI that doesn't have real
understanding?

Terren


> John K Clark    See what's on my new list at  Extropolis
> 
>


Re: AlphaZero

2022-02-03 Thread John Clark
On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam 
wrote:


> *>AlphaCode can potentially improve its code, but to what end?  What
> problem is it trying to solve?  How does it know?*
>

I don't understand your questions

*> Imagine an AI tasked with making as much money in the stock market as it
> can. Pretty clear signals for winning and losing (like chess). And perhaps
> there's some easy wins there for an AI that can take advantage of e.g.
> arbitrage (this exists already I believe) or other patterns that are not
> exploitable by human brains. But it seems to me that actual comprehension
> of the world of investment is key. Knowing how earnings reports will affect
> the stock price of a company, relative to human expectations about that
> earnings report.*
>

I agree, but if humans, or at least some extraordinary humans like Warren
Buffett, can understand the stock market, or at least understand it well
enough to do better at picking stocks than doing so randomly, then I see
absolutely no reason why an AI couldn't do the same thing, and do it better.

*> You have to import a universe of knowledge of the human domain to be
> effective*
>

Yeah, when you're born you don't know anything but over time you gain
knowledge from the environment.

> *a universe we take for granted since we've acquired it over decades of
> training.*
>

Yeah with a human that process takes many decades, but even today computers
can process many many times more information than a human can, not
surprising when you consider the fact that the signals inside a human brain
only travel about 100 miles an hour while the signals in a computer travel
close to the speed of light, 186,000 miles a second.
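
Taking those two figures at face value (nerve conduction actually ranges
from roughly 1 to 270 mph depending on the fiber, and on-chip signals move
at a fraction of c, so this is only an order-of-magnitude gesture):

  brain_mph = 100.0                       # the figure quoted above
  light_mph = 186_000.0 * 3600            # 186,000 miles/s in miles/hour
  print(f"{light_mph / brain_mph:,.0f}")  # 6,696,000 -- about seven million to one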

*> And I'm not talking about mere information, but models that can be
> simulated in what-if scenarios, true understanding. You need real AGI.*
>

I can't think of a more flagrant example of moving goal posts. I clearly
remember when nearly everybody said it would require "real understanding"
for a computer to play chess at the grandmaster level, never mind the
superhuman level, but nobody says that anymore. Much more recently people
said image recognition would require "real intelligence" but few say that
anymore, now they say coding requires "real intelligence". "Real AGI" is a
machine that can do what a computer cannot do, *YET*.

*>I think the problem of AGI is much harder than most assume.  *


As I've mentioned before, the entire human genome is only 750 megabytes,
the new Mac operating system is about 20 times that size, and the genome
contains instructions to build an entire human body not just a brain, and
the genome is loaded with massive redundancy; so whatever the algorithm is
that the brain uses to extract information from the environment there is
simply no way it can be all that complicated.
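
The 750 megabyte figure is just the uncompressed information content of the
genome, easy to check:

  base_pairs = 3.1e9      # approximate length of the human genome
  bits = base_pairs * 2   # four bases (A, C, G, T) -> 2 bits per base
  megabytes = bits / 8 / 1e6
  print(round(megabytes)) # ~775 MB, before any compression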


> *> To get to the point where machines are the stakeholders, we're already
> past the singularity.*
>

Machines move so fast that at breakfast the singularity could look to a
human like it's a very long way off, but by lunchtime the singularity could
be ancient history.

John K Clark    See what's on my new list at  Extropolis




Re: AlphaZero

2022-02-03 Thread Brent Meeker



On 2/3/2022 2:23 PM, Terren Suydam wrote:



On Thu, Feb 3, 2022 at 4:27 PM John Clark  wrote:

On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam
 wrote:

> /the code generated by the AI still needs to be understandable/


Once  AI starts to get to be really smart that's never going to
happen, even today nobody knows how a neural network like
AlphaZero works or understands the reasoning behind it making a
particular move but that doesn't matter because understandable or
not  AlphaZero can still play chess better than anybody alive, and
if humans don't understand how that can be than that's just too
bad for them.


With chess it's clear what the game is, what the rules are, how to win 
and lose. In real life, the game constantly changes. AlphaCode can 
potentially improve its code, but to what end?  What problem is it 
trying to solve?  How does it know?


Even in domains with seemingly simple goals, it's a problem. Imagine 
an AI tasked with making as much money in the stock market as it can. 
Pretty clear signals for winning and losing (like chess). And perhaps 
there's some easy wins there for an AI that can take advantage of e.g. 
arbitrage (this exists already I believe) or other patterns that are 
not exploitable by human brains. But it seems to me that actual 
comprehension of the world of investment is key. Knowing how earnings 
reports will affect the stock price of a company, relative to human 
expectations about that earnings report. That's just one tiny example. 
You have to import a universe of knowledge of the human domain to be 
effective... a universe we take for granted since we've acquired it 
over decades of training. And I'm not talking about mere information, 
but models that can be simulated in what-if scenarios, true 
understanding. You need real AGI. I think that's true with AIs that 
would supplant human programmers for the reasons I said.


> /The hard part is understanding the problem your code is
supposed to solve, understanding the tradeoffs between
different approaches, and being able to negotiate with
stakeholders about what the best approach is./


You seem to be assuming that the "stakeholders", those that intend
to use the code once it is completed, will always be humans, and I
think that is an entirely unwarranted assumption. The stakeholders
will certainly have brains, but they may be hard and dry and not
wet and squishy.


To get to the point where machines are the stakeholders, we're already 
past the singularity.



/> It'll be a very long time before we're handing that domain
off to an AI./


I think you're whistling past the graveyard.


Of course, nobody can know what the future holds. But I think the 
problem of AGI is much harder than most assume. The fact that humans, 
with their stupendously parallel and efficient brains, require at 
least 15-20 /years /on average of continuous training before they're 
able to grasp the problem domain we're talking about, should be a clue.


I think "able to grasp the problem domain we're talking about" is giving 
us way to much credit.  Every study of stock traders I've seen says that 
they do no better than some simple rules of thumb like index funds.


Brent



Re: AlphaZero

2022-02-03 Thread Terren Suydam
On Thu, Feb 3, 2022 at 4:27 PM John Clark  wrote:

> On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam 
> wrote:
>
>  > *the code generated by the AI still needs to be understandable*
>
>
> Once  AI starts to get to be really smart that's never going to happen,
> even today nobody knows how a neural network like AlphaZero works or
> understands the reasoning behind it making a particular move but that
> doesn't matter because understandable or not  AlphaZero can still play
> chess better than anybody alive, and if humans don't understand how that
> can be than that's just too bad for them.
>

With chess it's clear what the game is, what the rules are, how to win and
lose. In real life, the game constantly changes. AlphaCode can potentially
improve its code, but to what end?  What problem is it trying to solve?
How does it know?

Even in domains with seemingly simple goals, it's a problem. Imagine an AI
tasked with making as much money in the stock market as it can. Pretty
clear signals for winning and losing (like chess). And perhaps there's some
easy wins there for an AI that can take advantage of e.g. arbitrage (this
exists already I believe) or other patterns that are not exploitable by
human brains. But it seems to me that actual comprehension of the world of
investment is key. Knowing how earnings reports will affect the stock price
of a company, relative to human expectations about that earnings report.
That's just one tiny example. You have to import a universe of knowledge of
the human domain to be effective... a universe we take for granted since
we've acquired it over decades of training. And I'm not talking about mere
information, but models that can be simulated in what-if scenarios, true
understanding. You need real AGI. I think that's true with AIs that would
supplant human programmers for the reasons I said.
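
The arbitrage conceded above really is mechanical, which is why machines
already do it; a sketch in Python of triangular FX arbitrage detection,
with invented rates (real systems differ mainly in data feeds, fees, and
latency):

  rates = {("USD", "EUR"): 0.92,
           ("EUR", "JPY"): 163.0,
           ("JPY", "USD"): 0.0068}

  def cycle_return(path):
      # Multiply exchange rates around a closed cycle of currencies.
      amount = 1.0
      for leg in zip(path, path[1:] + path[:1]):
          amount *= rates[leg]
      return amount

  profit = cycle_return(["USD", "EUR", "JPY"]) - 1.0
  print(f"{profit:+.2%}")   # > 0 means free money, before costs and latency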


> > *The hard part is understanding the problem your code is supposed to
>> solve, understanding the tradeoffs between different approaches, and being
>> able to negotiate with stakeholders about what the best approach is.*
>
>
> You seem to be assuming that the "stakeholders", those that intend to use
> the code once it is completed, will always be humans, and I think that is
> an entirely unwarranted assumption. The stakeholders will certainly have
> brains, but they may be hard and dry and not wet and squishy.
>

To get to the point where machines are the stakeholders, we're already past
the singularity.


>
> *> It'll be a very long time before we're handing that domain off to an
>> AI.*
>
>
> I think you're whistling past the graveyard.
>

Of course, nobody can know what the future holds. But I think the problem
of AGI is much harder than most assume. The fact that humans, with their
stupendously parallel and efficient brains, require at least 15-20 *years* on
average of continuous training before they're able to grasp the problem
domain we're talking about, should be a clue.

Terren


>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>



Re: AlphaZero

2022-02-03 Thread John Clark
On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam 
wrote:

 > *the code generated by the AI still needs to be understandable*


Once AI starts to get to be really smart that's never going to happen;
even today nobody knows how a neural network like AlphaZero works or
understands the reasoning behind it making a particular move, but that
doesn't matter because, understandable or not, AlphaZero can still play
chess better than anybody alive, and if humans don't understand how that
can be then that's just too bad for them.

> *The hard part is understanding the problem your code is supposed to
> solve, understanding the tradeoffs between different approaches, and being
> able to negotiate with stakeholders about what the best approach is.*


You seem to be assuming that the "stakeholders", those that intend to use
the code once it is completed, will always be humans, and I think that is
an entirely unwarranted assumption. The stakeholders will certainly have
brains, but they may be hard and dry and not wet and squishy.

*> It'll be a very long time before we're handing that domain off to an AI.*


I think you're whistling past the graveyard.

John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>



Re: AlphaZero

2022-02-03 Thread Tomasz Rola
On Thu, Feb 03, 2022 at 10:17:42AM -0800, Lawrence Crowell wrote:
> 
> Programmers putting programmers out of work.
> 

I believe it is going to be more like programmers running away -
because life is too precious to spend it navigating a labyrinth of
code manure built by "ai". If you ever want to know what I mean, have
a look at the source of a manually built web page and compare it to
the crap output from whatever automated editor is being used for such
tasks. Manually built pages are rare, but they load in a blink and
looking at their source is relaxing. Have fun trying to find them.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.  **
** As the answer, master did "rm -rif" on the programmer's home**
** directory. And then the C programmer became enlightened...  **
** **
** Tomasz Rola  mailto:tomasz_r...@bigfoot.com **



Re: AlphaZero

2022-02-03 Thread Brent Meeker
It's still a step away from self-programming though.  It relied on 
training sets.  Not like AlphaZero playing against itself.


Brent

On 2/3/2022 1:54 AM, John Clark wrote:
The same people that made  AlphaZero, the chess and GO playing 
superstar, and AlphaFold, the 3-D structure predicting program, have 
now come up with "AlphaCode", a computer program that writes other 
computer programs in C++ and Python.  AlphaCode entered a programming 
competition with professional human programmers called "Codeforces" 
and ranked in the top 54%. Not bad for a first try, it seems like only 
yesterday computers could only play mediocre chess and now they play 
it at a superhuman level. I don't see why a program like this couldn't 
be used to improve its own programming, so I don't think the 
importance of this development can be overestimated.


Competition-Level Code Generation with AlphaCode 
<https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf>


John K Clark    See what's on my new list at Extropolis 
<https://groups.google.com/g/extropolis>



Re: AlphaZero

2022-02-03 Thread Terren Suydam
It'll still be some time before programmers start losing jobs to AI coders.
AlphaCode is impressive to be sure, but the real world is not made of toy
problems. DeepMind is clearly making progress in applying AI beyond
narrowly defined domains, but there are many levels to this, and while
AlphaCode might represent a graduation to the next level, comprehension of
the wide variety of domains in the human marketplace, and of the human
motivations that define them, is still many levels higher.

What I could see happening is that engineers start to use tools like
AlphaCode to solve tightly-defined coding problems faster and with fewer
bugs than if left to their own devices. But there are still two problems.
The first is that the code generated by the AI still needs to be
understandable, so that it can be fixed, refactored, or otherwise improved,
and an AI that can make its code understandable (in the way that good
human engineers do), or do the work of fixing, refactoring, and improving
other code, is next-level. More importantly, as a long-time programmer I
can tell you that the coding is the easy part. The hard part is
understanding the problem your code is supposed to solve, understanding
the tradeoffs between different approaches, and being able to negotiate
with stakeholders about what the best approach is. It'll be a very long
time before we're handing that domain off to an AI.
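
To make that first point concrete, here's a contrived example (mine, not
AlphaCode output). Both functions count the pairs in a list that sum to a
target, and both would satisfy a contest judge, but only one is something
you'd want to inherit and maintain:

# the kind of terse solution a contest-tuned generator might emit
def f(a, k):
    return sum(1 for i in range(len(a))
               for j in range(i + 1, len(a)) if a[i] + a[j] == k)

# what a maintainer would rather be handed: named, documented, and O(n)
def count_pairs_with_sum(values, target):
    """Count index pairs (i, j), i < j, where values[i] + values[j] == target."""
    seen = {}  # value -> how many times it has appeared so far
    pairs = 0
    for v in values:
        pairs += seen.get(target - v, 0)  # each earlier complement forms a pair
        seen[v] = seen.get(v, 0) + 1
    return pairs

assert f([1, 2, 3, 4], 5) == count_pairs_with_sum([1, 2, 3, 4], 5) == 2

The second version is what I mean by code that a human (or a next-level
AI) can fix, refactor, or improve later.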

Terren

On Thu, Feb 3, 2022 at 1:17 PM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

>
> Programmers putting programmers out of work.
>
> LC
>
> On Thursday, February 3, 2022 at 3:55:08 AM UTC-6 johnk...@gmail.com
> wrote:
>
>> The same people that made  AlphaZero, the chess and GO playing superstar,
>> and AlphaFold, the 3-D structure predicting program, have now come up with
>> "AlphaCode", a computer program that writes other computer programs in C++
>> and Python.  AlphaCode entered a programming competition with professional
>> human programmers called "Codeforces" and ranked in the top 54%. Not bad
>> for a first try, it seems like only yesterday computers could only play
>> mediocre chess and now they play it at a superhuman level. I don't see why
>> a program like this couldn't be used to improve its own programming, so I
>> don't think the importance of this development can be overestimated.
>>
>> Competition-Level Code Generation with AlphaCode
>> <https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf>
>>
>> John K Clark    See what's on my new list at Extropolis
>> <https://groups.google.com/g/extropolis>
>>



Re: AlphaZero

2022-02-03 Thread Lawrence Crowell

Programmers putting programmers out of work.

LC

On Thursday, February 3, 2022 at 3:55:08 AM UTC-6 johnk...@gmail.com wrote:

> The same people that made  AlphaZero, the chess and GO playing superstar, 
> and AlphaFold, the 3-D structure predicting program, have now come up with 
> "AlphaCode", a computer program that writes other computer programs in C++ 
> and Python.  AlphaCode entered a programming competition with professional 
> human programmers called "Codeforces" and ranked in the top 54%. Not bad 
> for a first try, it seems like only yesterday computers could only play 
> mediocre chess and now they play it at a superhuman level. I don't see why 
> a program like this couldn't be used to improve its own programming, so I 
> don't think the importance of this development can be overestimated.  
>
> Competition-Level Code Generation with AlphaCode 
> <https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf>
>
> John K Clark    See what's on my new list at Extropolis 
> <https://groups.google.com/g/extropolis>
>



AlphaZero

2022-02-03 Thread John Clark
The same people who made AlphaZero, the chess and Go playing superstar,
and AlphaFold, the 3-D structure predicting program, have now come up with
"AlphaCode", a computer program that writes other computer programs in C++
and Python. AlphaCode entered "Codeforces", a programming competition
against professional human programmers, and ranked in the top 54%. Not bad
for a first try; it seems like only yesterday that computers could play
only mediocre chess, and now they play it at a superhuman level. I don't
see why a program like this couldn't be used to improve its own
programming, so I don't think the importance of this development can be
overestimated.

Competition-Level Code Generation with AlphaCode
<https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf>
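
To sketch what I mean by self-improvement (purely hypothetical, nothing
from the AlphaCode paper; generate_variant and benchmark are stand-ins for
a code-writing model and whatever scoring harness you like):

import random

def benchmark(program):
    # hypothetical scoring harness, higher is better (toy stand-in)
    return -abs(program - 42) + random.gauss(0, 0.01)

def generate_variant(program):
    # hypothetical: the model proposes a modified version of itself (toy stand-in)
    return program + random.choice([-1, 1])

current, best = 0, benchmark(0)
for generation in range(200):
    candidate = generate_variant(current)
    score = benchmark(candidate)
    if score > best:  # keep only strict improvements
        current, best = candidate, score
print("after 200 generations:", current)

Replace the toy integer with the program's own source and the noise with a
real evaluation, and you have the loop I'm imagining.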

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>
