Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 6:24 PM Brent Meeker  wrote:

>
> AlphaCode is not capable of reading code. It's a clever version of monkeys
> typing on typewriters until they bang out a Shakespeare play. Still counts
> as AI, but cannot be said to understand code.
>
>
> What does it mean "to read code"?  It can execute code from github,
> apparently, so it must read code well enough to execute it.  What more does
> it need to read?  You say it cannot be said to understand code.  Can you
> specify what would show it understands the code?
>
>
The github code is used to train a neural network that maps natural
language to code, and this neural network is used to generate candidate
solutions based on the natural language of the problem description. If you
want to say that represents a form of understanding, ok. But I would still
push back that it could "read code".
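The pipeline Terren describes (generate candidates from a trained model, filter on the problem's example tests, keep ten) can be sketched roughly as follows. This is a toy illustration only: the "model" is a random generator and every name is invented, not DeepMind's actual code.

```python
import random

random.seed(0)  # deterministic for the sketch

def generate_candidates(problem_description, n=10_000):
    """Toy stand-in for the trained model: emit candidate programs,
    nominally conditioned on the natural-language problem description."""
    return [f"lambda x: x + {random.randint(-10, 10)}" for _ in range(n)]

def passes_tests(candidate_src, test_cases):
    """Run a candidate against the example tests from the problem statement."""
    try:
        fn = eval(candidate_src)  # stand-in for sandboxed compilation/execution
        return all(fn(inp) == out for inp, out in test_cases)
    except Exception:
        return False

def alphacode_style_search(problem_description, test_cases, keep=10):
    candidates = generate_candidates(problem_description)
    survivors = [c for c in candidates if passes_tests(c, test_cases)]
    # The real system also clusters behaviorally similar survivors and
    # submits one representative per cluster; here we simply truncate.
    return survivors[:keep]

tests = [(1, 4), (2, 5)]  # example tests for "add three to the input"
picked = alphacode_style_search("add three to the input", tests)
print(len(picked))
```

Note that nothing in this sketch inspects the candidate source at all; it only runs it, which is the crux of the disagreement below.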


> I think you mean it tells you a story about how this part does that and
> this other part does something else, etc.  But that's just catering to your
> weak human brain that can't just "see" that the code solves the problem.
> The problem is that there is no species independent meaning of "understand"
> except "make it work".  AlphaCode doesn't understand code like you do,
> because it doesn't think like you do and doesn't have the context you do.
>

There are times when I can read code and understand it, and times when I
can't. When I can understand it, I can reason about what it's doing; I can
find and fix bugs in it; I can potentially optimize it. I can see if this
code is useful for other situations. And yes, I can tell you a story about
what it's doing. AlphaCode is doing none of those things, because it's not
built to.


>
> Brent
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA82SS%2B6A0jPd%3Dm08KP4SGOMrkpx%3DKTjYx560dtkjkwi5w%40mail.gmail.com.


Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 5:18 PM John Clark  wrote:

> On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam 
> wrote:
>
> * > I dug a little deeper into how AlphaCode works. It generates millions
>> of candidate solutions using a model trained on github code. It then
>> filters out 99% of those candidate solutions by running them against test
>> cases provided in the problem description and removing the ones that fail.
>> It then uses a different technique to whittle down the candidate solutions
>> from several thousand to just ten. *[...] AlphaCode is not capable of
>> reading code.
>>
>
> How on earth can it filter out 99% of the code because it is bad code if
> it cannot read code? Closer to home, how could somebody on this list tell
> the difference between a post they like and a post they didn't like if they
> couldn't read English?
>

Let's take a more accessible analogy. Let's say the problem description is:
"Here's a locked door. Devise a key to unlock the door."

A simplified analog of what AlphaCode does is the following:

   - generate millions of different keys of various shapes and sizes.
   - for each key, try to unlock the door with it
   - if it doesn't work, toss it
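The steps above can be sketched as a toy program (the lock and the key space are invented for illustration):

```python
import itertools

def try_key(key, lock_secret):
    """Toy 'door': a key opens it iff it matches the lock's pin heights."""
    return key == lock_secret

def brute_force_locksmith(lock_secret, key_space):
    # Generate many keys, try each in the door, toss the ones that fail.
    for key in key_space:
        if try_key(key, lock_secret):
            return key
    return None

# A lock with 4 pins, each at one of 6 heights: 6**4 = 1296 possible keys.
secret = (3, 1, 4, 1)
found = brute_force_locksmith(secret, itertools.product(range(6), repeat=4))
print(found)  # finds the working key with no model of how locks work
```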

In order to say that the key-generating AI understands keys and locks,
you'd have to believe that a strategy that involves creating millions of
guesses until one works entails some kind of understanding.

To your point that AlphaCode must have the ability to read code if it knows
how to toss incorrect candidates, that's like saying that the key-generator
must understand locks because it knows how to test if a key unlocks the
door.


>
>
>> * > Nobody, neither the AI nor the humans running AlphaCode, know if the
>> 10 solutions picked are correct.*
>>
>
> As Alan Turing said  "*If a machine is expected to be infallible, it
> cannot also be intelligent*."
>
> > It's a clever version of monkeys typing on typewriters until they bang
>> out a Shakespeare play. Still counts as AI,
>>
>
>   A clever version indeed!! In fact I would say that William Shakespeare
> himself was such a version.
>

If you think AlphaCode and Shakespeare have anything in common, then I
don't think your assertions about AI are worth much.


>
> *> Still counts as AI, but cannot be said to understand code.*
>
>
> I am a bit confused by your use of one word, you seem to be giving it a
> very unconventional meaning.  If you, being a human, "understand" code but
> the code you write is inferior to the code that an AI writes that doesn't
> "understand" code then I fail to see why any human or any machine would
> want to have an "understanding" of anything.
>

If you think a brute-force "generate a million guesses until one works"
strategy has the same understanding as an algorithm that employs a detailed
model of the domain and uses that model to generate a reasoned solution,
regardless of the results, then it's you that is employing the
unconventional meaning of "understand".

In the real world, you usually don't get to try something a million times
until something works.


>
> >> Just adding more input variables would be less complex than figuring
>>> out how to make a program smaller and faster.
>>>
>>
>> *> Think about it this way. There's diminishing returns on the strategy
>> to make the program smaller and faster, but potentially unlimited returns
>> on being able to respond to ever greater complexity in the problem
>> description.*
>>
>
> You're talking about what would be more useful, I was talking about what
> would be more complex. In general finding the smallest and fastest program
> that can accomplish a given task is infinitely complex, that is to say in
> general it's impossible to find the smallest program and prove it's the
> smallest program.  Code optimization is very far from a trivial problem.
>

I'm surprised you're focusing on the less useful direction to go in. If
anything, your thinking tends to be very pragmatic. Who cares if you can
squeeze a few extra milliseconds out of an algorithm, if you could instead
spend that effort doing something far more useful?


>
> John K Clark    See what's on my new list at Extropolis
> 
> tcp
>


Re: AlphaZero

2022-02-05 Thread Brent Meeker



On 2/5/2022 12:51 PM, Terren Suydam wrote:



On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:

On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam
 wrote:

>> Look at this code for a subprogram and make something that
does the same thing but is smaller or runs faster or both.
And that's not a toy problem, that's a real problem.


> "does the same thing" is problematic for a couple reasons.
The first is that AlphaCode doesn't know how to read code,


Huh? We already know AlphaCode can write code, how can something
know how to write but not read? It's easier to read a novel than
write a novel.


This is one case where your intuitions fail. I dug a little deeper 
into how AlphaCode works. It generates millions of candidate solutions 
using a model trained on github code. It then filters out 99% of those 
candidate solutions by running them against test cases provided in the 
problem description and removing the ones that fail. It then uses a 
different technique to whittle down the candidate solutions from 
several thousand to just ten. Nobody, neither the AI nor the humans 
running AlphaCode, know if the 10 solutions picked are correct.


Just like we don't know which interpretation of quantum mechanics is 
correct.  But we use it anyway.


AlphaCode is not capable of reading code. It's a clever version of 
monkeys typing on typewriters until they bang out a Shakespeare play. 
Still counts as AI, but cannot be said to understand code.


What does it mean "to read code"?  It can execute code from github,
apparently, so it must read code well enough to execute it.  What more
does it need to read?  You say it cannot be said to understand code.  
Can you specify what would show it understands the code?


I think you mean it tells you a story about how this part does that and
this other part does something else, etc.  But that's just catering to 
your weak human brain that can't just "see" that the code solves the 
problem.  The problem is that there is no species independent meaning of 
"understand" except "make it work". AlphaCode doesn't understand code 
like you do, because it doesn't think like you do and doesn't have the 
context you do.


Brent



Re: AlphaZero

2022-02-05 Thread John Clark
On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam 
wrote:

* > I dug a little deeper into how AlphaCode works. It generates millions
> of candidate solutions using a model trained on github code. It then
> filters out 99% of those candidate solutions by running them against test
> cases provided in the problem description and removing the ones that fail.
> It then uses a different technique to whittle down the candidate solutions
> from several thousand to just ten. *[...] AlphaCode is not capable of
> reading code.
>

How on earth can it filter out 99% of the code because it is bad code if it
cannot read code? Closer to home, how could somebody on this list tell the
difference between a post they like and a post they didn't like if they
couldn't read English?


> * > Nobody, neither the AI nor the humans running AlphaCode, know if the
> 10 solutions picked are correct.*
>

As Alan Turing said  "*If a machine is expected to be infallible, it cannot
also be intelligent*."

> It's a clever version of monkeys typing on typewriters until they bang
> out a Shakespeare play. Still counts as AI,
>

  A clever version indeed!! In fact I would say that William Shakespeare
himself was such a version.

*> Still counts as AI, but cannot be said to understand code.*


I am a bit confused by your use of one word, you seem to be giving it a
very unconventional meaning.  If you, being a human, "understand" code but
the code you write is inferior to the code that an AI writes that doesn't
"understand" code then I fail to see why any human or any machine would
want to have an "understanding" of anything.

>> Just adding more input variables would be less complex than figuring out
>> how to make a program smaller and faster.
>>
>
> *> Think about it this way. There's diminishing returns on the strategy to
> make the program smaller and faster, but potentially unlimited returns on
> being able to respond to ever greater complexity in the problem
> description.*
>

You're talking about what would be more useful, I was talking about what
would be more complex. In general finding the smallest and fastest program
that can accomplish a given task is infinitely complex, that is to say in
general it's impossible to find the smallest program and prove it's the
smallest program.  Code optimization is very far from a trivial problem.
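John's claim can be stated more formally via Kolmogorov complexity; the following formulation is my addition, not from the thread:

```latex
% K(x): the length of the shortest program p that makes a fixed
% universal machine U output x.
K(x) = \min \{\, |p| : U(p) = x \,\}
% A classical theorem: K is not computable. So in general no procedure
% can take a task, return the smallest program solving it, and prove
% that no smaller program exists.
```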

John K Clark    See what's on my new list at Extropolis

tcp



Re: AlphaZero

2022-02-05 Thread Quentin Anciaux
The only thing I hope AI will achieve is to be less condescending... if it
achieves true understanding, I hope it will be humble... and as much as
John Clark dislikes religions and God, the singularity will be God...

Quentin

On Sat, Feb 5, 2022 at 8:51 PM, Terren Suydam wrote:

>
>
> On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:
>
>> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
>> wrote:
>>
>> >> Look at this code for a subprogram and make something that does the
 same thing but is smaller or runs faster or both. And that's not a toy
 problem, that's a real problem.

>>>
>>> > "does the same thing" is problematic for a couple reasons. The first
>>> is that AlphaCode doesn't know how to read code,
>>>
>>
>> Huh? We already know AlphaCode can write code, how can something know
>> how to write but not read? It's easier to read a novel than write a novel.
>>
>
> This is one case where your intuitions fail. I dug a little deeper into
> how AlphaCode works. It generates millions of candidate solutions using a
> model trained on github code. It then filters out 99% of those candidate
> solutions by running them against test cases provided in the problem
> description and removing the ones that fail. It then uses a different
> technique to whittle down the candidate solutions from several thousand to
> just ten. Nobody, neither the AI nor the humans running AlphaCode, know if
> the 10 solutions picked are correct.
>
> AlphaCode is not capable of reading code. It's a clever version of monkeys
> typing on typewriters until they bang out a Shakespeare play. Still counts
> as AI, but cannot be said to understand code.
>
>
>>
>>> *> The other problem is that with that problem description, it won't
>>> evolve except in the very narrow sense of improving its efficiency.*
>>>
>>
>> It seems to me the ability to write code that was smaller and faster than
>> anybody else is not "very narrow", a human could make a very good living
>> indeed from that talent.  And if I was the guy that signed his enormous
>> paycheck and somebody offered me a program that would do the same thing he
>> did I'd jump at it.
>>
>
> This actually already exists in the form of optimizing compilers - which
> are the programs that translate human-readable code like Java into assembly
> language that microprocessors use to manipulate data. Optimizing compilers
> can make human code more efficient. But these gains are only available in
> very well-understood and limited ways. To do what you're suggesting
> requires machine intelligence capable of understanding things in a much
> broader context.
>
>
>>
>>
>>> *> The kind of problem description that might actually lead to a
>>> singularity is something like "Look at this code and make something that
>>> can solve ever more complex problem descriptions". But my hunch there is
>>> that that problem description is too complex for it to recursively
>>> self-improve towards.*
>>>
>>
>> Just adding more input variables would be less complex than figuring out
>> how to make a program smaller and faster.
>>
>
> Think about it this way. There's diminishing returns on the strategy to
> make the program smaller and faster, but potentially unlimited returns on
> being able to respond to ever greater complexity in the problem
> description.
>
>
>>
>> >> I think if Steven Spielberg's movie had been called AGI instead of AI
 some people today would no longer like the acronym AGI because too many
 people would know exactly what it means and thus would lack that certain
aura of erudition and mystery that they crave. Everybody knows what AI
 means, but only a small select cognoscenti know the meaning of AGI. A
 Classic case of jargon creep.

>>>
>>> >Do you really expect a discipline as technical as AI to not use
>>> jargon?
>>>
>>
>> When totally new concepts come up, as they do occasionally in science,
>> jargon is necessary because there is no previously existing word or short
>> phrase that describes it, but that is not the primary generator of
>> jargon and is not in this case  because a very short word that describes
>> the idea already exists and everybody already knows what AI means, but
>> very few know that AGI means the same thing. And some see that as AGI's
>> great virtue, it's mysterious and sounds brainy.
>>
>>
>>> *> You use physics jargon all the time.*
>>>
>>
>> I do try to keep that to a minimum, perhaps I should try harder.
>>
>
> I don't hold it against you, and I certainly don't think you're trying to
> cultivate an aura of erudition and mystery when you do. I'm not sure why
> you seem to have an axe to grind about the use of AGI, but it is a useful
> distinction to make. It's clear we have AI today. And it's equally clear we
> do not have AGI.
>
> Terren
>
>
>>
>> John K Clark    See what's on my new list at Extropolis
>> 
>> pjx
>>
>>
>>

Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:

> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
> wrote:
>
> >> Look at this code for a subprogram and make something that does the
>>> same thing but is smaller or runs faster or both. And that's not a toy
>>> problem, that's a real problem.
>>>
>>
>> > "does the same thing" is problematic for a couple reasons. The first
>> is that AlphaCode doesn't know how to read code,
>>
>
> Huh? We already know AlphaCode can write code, how can something know how
> to write but not read? It's easier to read a novel than write a novel.
>

This is one case where your intuitions fail. I dug a little deeper into how
AlphaCode works. It generates millions of candidate solutions using a model
trained on github code. It then filters out 99% of those candidate
solutions by running them against test cases provided in the problem
description and removing the ones that fail. It then uses a different
technique to whittle down the candidate solutions from several thousand to
just ten. Nobody, neither the AI nor the humans running AlphaCode, know if
the 10 solutions picked are correct.

AlphaCode is not capable of reading code. It's a clever version of monkeys
typing on typewriters until they bang out a Shakespeare play. Still counts
as AI, but cannot be said to understand code.


>
>> *> The other problem is that with that problem description, it won't
>> evolve except in the very narrow sense of improving its efficiency.*
>>
>
> It seems to me the ability to write code that was smaller and faster than
> anybody else is not "very narrow", a human could make a very good living
> indeed from that talent.  And if I was the guy that signed his enormous
> paycheck and somebody offered me a program that would do the same thing he
> did I'd jump at it.
>

This actually already exists in the form of optimizing compilers - which
are the programs that translate human-readable code like Java into assembly
language that microprocessors use to manipulate data. Optimizing compilers
can make human code more efficient. But these gains are only available in
very well-understood and limited ways. To do what you're suggesting
requires machine intelligence capable of understanding things in a much
broader context.
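Terren's point about optimizing compilers can be illustrated by the kind of mechanical rewrite they perform. Below is loop-invariant code motion applied by hand; Python is used only for readability, since real compilers do this on an intermediate representation of languages like Java:

```python
def naive(values, a, b):
    # The subexpression a * b is recomputed on every loop iteration.
    total = 0
    for v in values:
        total += v * (a * b)
    return total

def optimized(values, a, b):
    # Loop-invariant code motion: hoist the constant product out of the loop.
    k = a * b
    total = 0
    for v in values:
        total += v * k
    return total

data = list(range(1000))
assert naive(data, 3, 7) == optimized(data, 3, 7)  # same result, fewer multiplies
```

The transformation is safe only because the compiler can prove the hoisted expression has no side effects and does not change between iterations, which is exactly the "very well-understood and limited" sense of understanding described above.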


>
>
>> *> The kind of problem description that might actually lead to a
>> singularity is something like "Look at this code and make something that
>> can solve ever more complex problem descriptions". But my hunch there is
>> that that problem description is too complex for it to recursively
>> self-improve towards.*
>>
>
> Just adding more input variables would be less complex than figuring out
> how to make a program smaller and faster.
>

Think about it this way. There's diminishing returns on the strategy to
make the program smaller and faster, but potentially unlimited returns on
being able to respond to ever greater complexity in the problem
description.


>
> >> I think if Steven Spielberg's movie had been called AGI instead of AI
>>> some people today would no longer like the acronym AGI because too many
>>> people would know exactly what it means and thus would lack that certain
>>> aura of erudition and mystery that they crave. Everybody knows what AI
>>> means, but only a small select cognoscenti know the meaning of AGI. A
>>> Classic case of jargon creep.
>>>
>>
>> >Do you really expect a discipline as technical as AI to not use jargon?
>>
>
> When totally new concepts come up, as they do occasionally in science,
> jargon is necessary because there is no previously existing word or short
> phrase that describes it, but that is not the primary generator of jargon
> and is not in this case  because a very short word that describes the
> idea already exists and everybody already knows what AI means, but very
> few know that AGI means the same thing. And some see that as AGI's great
> virtue, it's mysterious and sounds brainy.
>
>
>> *> You use physics jargon all the time.*
>>
>
> I do try to keep that to a minimum, perhaps I should try harder.
>

I don't hold it against you, and I certainly don't think you're trying to
cultivate an aura of erudition and mystery when you do. I'm not sure why
you seem to have an axe to grind about the use of AGI, but it is a useful
distinction to make. It's clear we have AI today. And it's equally clear we
do not have AGI.

Terren


>
> John K Clark    See what's on my new list at Extropolis
> 
> pjx
>
>
>

RE: Understanding climate [was Re: AlphaZero]

2022-02-05 Thread spudboy100 via Everything List

Good point here from Tomasz. Models are models, and they're only as good as
their inputs; sometimes not even as good as that. Simply for safety's sake,
though, I would presume that some kind of massive climate inundation is
possible, some kind of cycle of drought and storm, simply because the Earth
produces this all on its own, and pumping carbon into the atmosphere isn't
doing anything any good. For safety's sake I would presume that climate
inundation or drought is entirely possible, perhaps shockingly so! Having
said that, I think we should go solar and wind big time as a precaution,
along with batteries of course, because those two sources really need them.
We have the development of perovskite solar cells and improved batteries, so
we should put these into application.
On Friday, February 4, 2022 Tomasz Rola  
wrote:
On Fri, Feb 04, 2022 at 10:59:44AM -0800, Brent Meeker wrote:
> Well consider the example of climate. Nobody can grasp all factors
> in climate and their interactions. But we can model all of them in
> a global climate simulation.

Actually, as far as I can tell, we cannot. Or, you mean,
theoretically, sure, but in practice I would say no, such a model has
not been made yet.

If such a model existed, it would give an answer about how/why the last
glaciation started and how/why it ended. But I understand such
answers have not been given yet. Only speculations.

So, it seems to me, whatever model we have, it cannot even retrodict past
events. I would not bet any money on anything such a model says about
other things.

> So climatologists+simulations "grasp the domain" even though humans
> can't.

Who are "climatologists"?

> Now suppose we want to extend these predictive climate models to
> include predictions about what humans will do in response. We don't
> know how humans will behave except in some general statistical
> terms.

And yet I can give you a good prediction: we tend to do as little as
possible, for as long as possible. I predict this is not going to
change anytime soon. And not later, either.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                **
** Tomasz Rola          mailto:tomasz_r...@bigfoot.com            **




Re: Plastic skyscrapers? Plastic airplanes?

2022-02-05 Thread spudboy100 via Everything List

We can recycle all of this if we have enough electricity and heat, because
that's all it takes, probably even for this miracle plastic product. The nice
thing about this development is that, using perovskite solar cells sealed in
this plastic, we can generate enough heat and electricity to supply four times
what we currently consume on a daily basis, if you believe last fall's report
from Columbia and Imperial universities. We can also make plastic from oceanic
algae, and this is already being done for some products in the world; chiefly
it's being produced in China now, jackets and whatnot from algae.
On Friday, February 4, 2022 Lawrence Crowell  
wrote:
The biggest problem I might see is that plastic is a sort of growing pollution 
problem. Though that is mostly with single use disposable plastic items, 
something like this will increase the planetary load of synthetic chemicals on 
the environment.
LC

On Friday, February 4, 2022 at 5:52:21 AM UTC-6 johnk...@gmail.com wrote:

In yesterday's issue of the journal Nature there is a report of a new super 
strong plastic called 2DPA-1 that is 5 times more resistant to deformation than 
bulletproof glass and requires twice as much force to break as steel even 
though it only has 1/6 the density. This remarkable strength could be achieved 
because chemists found an easy way to polymerize molecules in just two 
dimensions, something that had been thought impossible, so they can cheaply 
make sheets of this stuff and then stack the sheets together to make it as 
thick as they want. As a bonus, unlike most plastics 2DPA-1 is impermeable to 
gases so it can also be used as a thin coating to protect other materials from 
oxidation.

Irreversible synthesis of an ultrastrong two-dimensional polymeric material

John K Clark    See what's on my new list at Extropolis

pss





RE: Plastic skyscrapers? Plastic airplanes?

2022-02-05 Thread spudboy100 via Everything List

Now remember, before we all get carried away with this: the researchers
involved came up with a two-dimensional product, so yes, I could see laminated
versions of this being the new thing. Then we have to ask ourselves this
question: from what resource is this new plastic going to come? There's no
reason to suppose that we couldn't get it from growing vast mats of algae as a
basic resource, although we have tons of coal, and if you want, throw in
agricultural and forest waste to make the magical plastic of our dreams. For
me, I can see using perovskite solar cells sealed in this magically strong
plastic product, because perovskite upon exposure to air tends to crumble.
Sealed within the magic plastic it could last decades. Energy problem solved,
as well as environmental, no need for any thanks!
On Friday, February 4, 2022 John Clark  wrote:
In yesterday's issue of the journal Nature there is a report of a new super 
strong plastic called 2DPA-1 that is 5 times more resistant to deformation than 
bulletproof glass and requires twice as much force to break as steel even 
though it only has 1/6 the density. This remarkable strength could be achieved 
because chemists found an easy way to polymerize molecules in just two 
dimensions, something that had been thought impossible, so they can cheaply 
make sheets of this stuff and then stack the sheets together to make it as 
thick as they want. As a bonus, unlike most plastics 2DPA-1 is impermeable to 
gases so it can also be used as a thin coating to protect other materials from 
oxidation.

Irreversible synthesis of an ultrastrong two-dimensional polymeric material

John K Clark    See what's on my new list at Extropolis

pss




Re: AlphaZero

2022-02-05 Thread spudboy100 via Everything List

We are a new organism: machine intelligence combined with human beings,
benefiting both subspecies. We're a new species; just consider us linked by
Wi-Fi, brain to brain, so to speak. For practical reasons we just divvy up the
entire solar output of the solar system: they get the electricity they need,
we get the electricity we need. It's win-win, like the Doublemint Twins: two
mints in one.

On Friday, February 4, 2022 Lawrence Crowell  
wrote:


All of this is coming like a tidal wave. A couple of years ago an AI was given 
data on the appearance of the sky over several decades. In particular the 
positions of planets were given. Within a few days, the system output not only 
the Copernican model but Kepler's laws. The time is coming in a couple of 
decades where if some human or humans do not figure out quantum gravitation 
some AI system will. Many other things are being turned over to AI and robots. 
It may not be too long before humans are obsolete.


LC


On Thursday, February 3, 2022 at 3:27:18 PM UTC-6 johnk...@gmail.com wrote:

On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam  wrote:




 > the code generated by the AI still needs to be understandable


Once AI starts to get really smart that's never going to happen. Even today 
nobody knows how a neural network like AlphaZero works or understands the 
reasoning behind it making a particular move, but that doesn't matter, because 
understandable or not AlphaZero can still play chess better than anybody 
alive, and if humans don't understand how that can be, then that's just too 
bad for them.


> The hard part is understanding the problem your code is supposed to solve, 
> understanding the tradeoffs between different approaches, and being able to 
> negotiate with stakeholders about what the best approach is.


You seem to be assuming that the "stakeholders", those that intend to use the 
code once it is completed, will always be humans, and I think that is an 
entirely unwarranted assumption. The stakeholders will certainly have brains, 
but they may be hard and dry and not wet and squishy. 


> It'll be a very long time before we're handing that domain off to an AI.


I think you're whistling past the graveyard.  


John K Clark    See what's on my new list at  Extropolis



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1858276594.144274.1644093415320%40mail.yahoo.com.


https://www.quantamagazine.org/secrets-of-early-animal-evolution-revealed-by-chromosome-tectonics-20220202/

2022-02-05 Thread Philip Benjamin
https://www.quantamagazine.org/secrets-of-early-animal-evolution-revealed-by-chromosome-tectonics-20220202/
[Prof. Daniel Rokhsa]
 "Large blocks of genes conserved through hundreds of millions of years of 
evolution hint at how the first animal chromosomes came to be. Blocks of 
linked genes can maintain their integrity and be tracked through evolution, 
new research has shown. The discovery is the foundation of what is being 
called genome tectonics. Now, Daniel Rokhsa, a professor of biological 
sciences at the University of California, Berkeley, has tracked changes in 
chromosomes that occurred as much as 800 million years ago. He and his 
colleagues identified 29 big blocks of genes that remained recognizable as 
they passed into three of the earliest subdivisions of multicellular animal 
life. Using those blocks as markers, the scientists deduced how the 
chromosomes fused and recombined as those early groups of animals became 
distinct."

   [Philip Benjamin]

  There is a logical fallacy of misrepresentation of facts here. The claim 
of evolution is supported by the false assumption of "trans-speciation," which 
is never observed or cited in the entire article. Evolution here is a straw-man 
argument. It is a thoughtless acceptance of what Darwin stands for. The facts 
observed are misrepresented. These are counterfactual fallacies that are given 
without any valid premises. In fact, the premises and inferences are confused.

Philip Benjamin
CC. Prof. Daniel Rokhsa



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/SJ0PR14MB52648CE0CCC5A28DDDB96E13A82A9%40SJ0PR14MB5264.namprd14.prod.outlook.com.


Re: AlphaZero

2022-02-05 Thread John Clark
On Fri, Feb 4, 2022 at 7:34 PM Tomasz Rola  wrote:

> from the point of view of an Earth-based observer, planets are just more
> points of light in the night sky, only moving strangely. Such an observer
> has no way to say whether, for example, Mars is really different, or even a
> planet, or whether it is maybe some star looking for a place to stick and
> stop forever. Only with a telescope can one see that Mars is a disc, and
> that the disc changes with time, but periodically, etc. But an objective
> observer has no way to say that Mars is a planet based on naked-eye
> observations alone,


That is not true; even the ancient Egyptians knew there was something
special about Mercury, Venus, Mars, Jupiter and Saturn. Johannes Kepler
didn't know the planets could be resolved into discs, and he didn't need to
know that to derive his 3 laws of planetary motion, and a computer wouldn't
need to know it either. Kepler didn't have a telescope, but he did have
Tycho Brahe's excellent naked-eye measurements of the movements of the
planets over many decades; especially detailed were those of Mars. Kepler
knew that the planets moved in a complicated path relative to the fixed
stars, and that after a fixed amount of time that was different for each
planet the path repeated. Kepler spent years developing a very complicated
earth-centered model with lots of epicycles that fit Tycho's data pretty
well, and a lesser scientist might've been satisfied with that, but Kepler
was not. Kepler knew Tycho's data was very good and the discrepancy between
theory and observation was just too large to ignore, so reluctantly he
junked years of work and went back to square one. After more years of work
he found a sun-centered model and his three laws, which fit Tycho's data
almost exactly and were much simpler.
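One of those laws, the equal-area law, is easy to check numerically: integrate an orbit under an inverse-square central force and compare the areas swept out in equal time windows. A minimal sketch in Python; the units (GM = 1) and starting conditions are arbitrary assumptions chosen to give an eccentric bound orbit, not anything taken from Tycho's data:

```python
# Integrate one body around a central mass with an inverse-square force and
# check Kepler's second law: the sun-planet line sweeps equal areas in
# equal times.
GM = 1.0
x, y = 1.0, 0.0        # start at distance 1 from the sun
vx, vy = 0.0, 1.2      # above circular speed (1.0), below escape (~1.414)

dt = 1e-3
sample = 2000          # integration steps per equal-time window
swept, areas = 0.0, []
for step in range(1, 20001):
    r3 = (x * x + y * y) ** 1.5
    vx -= GM * x / r3 * dt                # symplectic Euler: kick...
    vy -= GM * y / r3 * dt
    nx, ny = x + vx * dt, y + vy * dt     # ...then drift
    swept += abs(x * ny - y * nx) / 2.0   # thin-triangle area this step
    x, y = nx, ny
    if step % sample == 0:
        areas.append(swept)
        swept = 0.0

# All equal-time windows should sweep essentially the same area.
spread = (max(areas) - min(areas)) / max(areas)
print(f"relative spread of equal-time swept areas: {spread:.1e}")
```

Because the velocity kick points along the radius, this integrator conserves angular momentum (and hence the sweep rate) essentially to machine precision, so the spread comes out near floating-point rounding error even though the orbit itself is only approximate.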

John K Clark    See what's on my new list at  Extropolis


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv37XKNGmfX%2BV5X3%3DXJebx48yo3D6dU0dr%2B5_LC5YAy6OQ%40mail.gmail.com.