On Fri, Feb 4, 2022 at 6:18 PM John Clark <[email protected]> wrote:

> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <[email protected]>
> wrote:
>
> >> Look at this code for a subprogram and make something that does the
>>> same thing but is smaller or runs faster or both. And that's not a toy
>>> problem, that's a real problem.
>>>
>>
>> > "does the same thing" is problematic for a couple reasons. The first
>> is that AlphaCode doesn't know how to read code,
>>
>
> Huh? We already know AlphaCode can write code, how can something know how
> to write but not read? It's easier to read a novel than write a novel.
>

This is one case where your intuitions fail. I dug a little deeper into how
AlphaCode works. It generates millions of candidate solutions using a model
trained on GitHub code. It then filters out roughly 99% of those candidates
by running them against the test cases provided in the problem description
and discarding the ones that fail. It then uses a different technique,
clustering candidates by their behavior, to whittle the remaining several
thousand down to just ten. Nobody, neither the AI nor the humans running
AlphaCode, knows whether the ten solutions picked are correct.
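To make the pipeline concrete, here is a minimal Python sketch of that
generate-and-filter approach. Everything in it (the toy "model", the probe
input, the function names) is my own invention for illustration; the real
system is vastly more sophisticated, but the shape is the same:

```python
import random
from collections import defaultdict

def sample_candidate_program(rng):
    """Stand-in for the trained model: emits a random tiny 'program'."""
    k = rng.randint(1, 5)
    return lambda x: x * k

def pick_ten(candidates, visible_tests, n_keep=10):
    """Generate-and-filter sketch of the pipeline described above:
    1. discard candidates that fail the visible test cases;
    2. cluster the survivors by their behavior on a probe input;
    3. keep one representative per cluster, up to n_keep."""
    survivors = [c for c in candidates
                 if all(c(inp) == out for inp, out in visible_tests)]
    clusters = defaultdict(list)
    probe = 17  # arbitrary input used as a behavioral fingerprint
    for c in survivors:
        clusters[c(probe)].append(c)
    return [group[0] for group in clusters.values()][:n_keep]

rng = random.Random(0)
candidates = [sample_candidate_program(rng) for _ in range(1000)]
visible_tests = [(2, 6), (3, 9)]  # the problem statement expects x -> 3*x
picked = pick_ten(candidates, visible_tests)
```

Note that nothing in this loop ever inspects the text of a candidate
program; correctness is judged entirely by running it, which is the sense
in which the system never "reads" code.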

AlphaCode is not capable of reading code. It's a clever version of monkeys
typing on typewriters until they bang out a Shakespeare play. It still
counts as AI, but it cannot be said to understand code.


>
>> *> The other problem is that with that problem description, it won't
>> evolve except in the very narrow sense of improving its efficiency.*
>>
>
> It seems to me the ability to write code that is smaller and faster than
> anybody else's is not "very narrow"; a human could make a very good living
> indeed from that talent. And if I were the guy who signed that person's
> enormous paycheck and somebody offered me a program that would do the same
> thing he did, I'd jump at it.
>

This actually already exists in the form of optimizing compilers: the
programs that translate human-readable code in a language like C into the
machine instructions microprocessors execute. Optimizing compilers can make
human code more efficient, but those gains are only available in very
well-understood and limited ways. To do what you're suggesting requires
machine intelligence capable of understanding things in a much broader
context.
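As an illustration of what "well-understood and limited" means, here is a
toy constant-folding pass, my own sketch of the kind of provably-safe
rewrite an optimizing compiler performs (none of this is any real
compiler's code):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold arithmetic on literals at compile time. Real compilers apply
    hundreds of such passes, but only where the effect can be proven."""
    OPS = {ast.Add: lambda a, b: a + b,
           ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b}

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold the children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in self.OPS):
            value = self.OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node  # not provably constant: leave it alone

def fold(source):
    return ast.unparse(ConstantFolder().visit(ast.parse(source)))

print(fold("x = 2 * 3 + 4"))  # prints: x = 10
```

The point is that the pass only fires when both operands are literals it
can see; anything it cannot prove safe is left untouched, which is exactly
why compiler gains stay narrow.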


>
>
>> *> The kind of problem description that might actually lead to a
>> singularity is something like "Look at this code and make something that
>> can solve ever more complex problem descriptions". But my hunch there is
>> that that problem description is too complex for it to recursively
>> self-improve towards.*
>>
>
> Just adding more input variables would be less complex than figuring out
> how to make a program smaller and faster.
>

Think about it this way. There's diminishing returns on the strategy to
make the program smaller and faster, but potentially unlimited returns on
being able to respond to ever greater complexity in the problem
description.


>
>>> I think if Steven Spielberg's movie had been called AGI instead of AI,
>>> some people today would no longer like the acronym AGI, because too many
>>> people would know exactly what it means and thus it would lack that
>>> certain aura of erudition and mystery that they crave. Everybody knows
>>> what AI means, but only a small select cognoscenti know the meaning of
>>> AGI. A classic case of jargon creep.
>>>
>>
>> >Do you really expect a discipline as technical as AI to not use jargon?
>>
>
> When totally new concepts come up, as they do occasionally in science,
> jargon is necessary because there is no previously existing word or short
> phrase that describes them. But that is not the primary generator of
> jargon, and it is not what happened here: a very short word that describes
> the idea already exists, and everybody already knows what AI means, but
> very few know that AGI means the same thing. And some see that as AGI's
> great virtue: it's mysterious and sounds brainy.
>
>
>> *> You use physics jargon all the time.*
>>
>
> I do try to keep that to a minimum, perhaps I should try harder.
>

I don't hold it against you, and I certainly don't think you're trying to
cultivate an aura of erudition and mystery when you do. I'm not sure why
you have an axe to grind about the term AGI, but it marks a useful
distinction. It's clear we have AI today, and it's equally clear we do not
have AGI.

Terren


>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9AUnjQCeOSP1NVN6djp57oOJLQi6_Se03wuR3yjfEW9A%40mail.gmail.com.
