On Fri, Feb 4, 2022 at 4:47 PM John Clark <[email protected]> wrote:

> On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam <[email protected]>
> wrote:
>
> >> I'll make you a deal, I'll tell you "what problem it is trying to
>>> solve" if you first tell me how long a piece of string is. And if you don't
>>> wanna do that just rephrase the question more clearly.
>>>
>>
>> *> lol ok. The worry you're articulating is that AlphaCode will turn its
>> coding abilities on itself and improve its own code, and that this could
>> lead to the singularity. First, it must be said that AlphaCode is a tool
>> with no agency of its own.*
>>
>
> We're talking about fundamentals here and in that context I don't know
> what you mean by "agency". Any information processing mechanism can be
> reduced logically to a Turing Machine, and some machines will stop and
> produce an answer and some will never stop, and some Turing machines will
> produce a correct answer and some will not, and in general there's no way
> to know what a Turing machine is going to do, you just have to watch it and
> see and you might be waiting forever for it to stop and produce an answer.
>
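
John's point is the undecidability of the halting problem. A minimal Python sketch of Turing's diagonal argument, with a deliberately unimplementable stand-in for the impossible oracle (`halts` and `paradox` are illustrative names, not from any library):

```python
def halts(program, arg):
    # Hypothetical oracle: True iff program(arg) eventually stops.
    # No such total function can exist; this stub just marks the gap.
    raise NotImplementedError("undecidable in general")

def paradox(program):
    # Loop forever exactly when the oracle says program(program) halts.
    if halts(program, program):
        while True:
            pass
    return "halted"

# If halts(paradox, paradox) returned True, paradox(paradox) would loop
# forever; if it returned False, paradox(paradox) would halt. Either
# answer refutes the oracle, so in general the only way to learn what a
# program does is to run it and watch.
```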
>
> *> Left to its own devices, it will do... nothing.*
>
>
> There's no way you could know that. Even if you knew the exact state a
> huge neural net like AlphaZero was in, which is very unlikely, there is no
> way you could predict which state it would evolve into unless you could
> play chess as well as it can, which you cannot. In general the only way to
> know what a large neural network (which can always be logically reduced to
> a Turing Machine) will do is to just watch it and see, there is no
> shortcut. For a long time it might look like it's doing nothing and then
> suddenly start doing something, and that something might be something you
> don't like.
>
>
Have you ever written a program?  Because you talk like someone who
understands the theory of computation but has never actually coded anything.


>
> *> But let's say the DeepMind team wanted to improve AlphaCode by applying
>> AlphaCode to itself. My question to you is, what is the "toy problem" they
>> would feed to AlphaCode? How do you define that problem? *
>>
>
> Look at this code for a subprogram and make something that does the same
> thing but is smaller or runs faster or both. And that's not a toy
> problem, that's a real problem.
>
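
The task John describes can be made concrete with a toy pair: two functions with identical observable behavior, where the second is both smaller and asymptotically faster. A hedged sketch (the names and the example are invented for illustration, not AlphaCode output):

```python
def sum_to_n_slow(n):
    # O(n): accumulate 1 + 2 + ... + n in a loop.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_fast(n):
    # O(1): Gauss's closed form; same result for every n >= 0.
    return n * (n + 1) // 2
```

An optimizer handed `sum_to_n_slow` and asked for "something that does the same thing but smaller or faster" would succeed by emitting `sum_to_n_fast`.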

"does the same thing" is problematic for a couple of reasons. The first is
that AlphaCode doesn't know how to read code, but let's say that it could.
The other problem is that with that problem description, it won't evolve
except in the very narrow sense of improving its efficiency. The kind of
problem description that might actually lead to a singularity is something
like "Look at this code and make something that can solve ever more complex
problem descriptions". But my hunch is that *that* problem description is
too complex for it to recursively self-improve towards.


>  >> an AI could have a detailed intellectual conversation with 1000
>>> people at the same time, or a million, or a billion.
>>>
>>
>> *> Sure, but those interactions still take time, perhaps days or even
>> months. And you're assuming that many people will want to have
>> conversations with an AI.*
>>
>
> Yes, I am assuming that, and I think it's a very reasonable assumption. If
> an intelligent AI thinks she could learn important stuff from talking to
> people it can simply turn up its charm variable so that people want to talk
> to her (or him). I suggest you take a look at the movie "Her" which covers
> the exact theme I'm talking about, a charismatic and brilliant AI having
> interesting and intimate conversations with thousands of people at exactly
> the same time. I think it's one of the best science-fiction movies ever
> made even though some say it has a depressing ending. I disagree, I didn't
> find it depressing at all.
>
> Her <https://en.wikipedia.org/wiki/Her_(film)>
>
> *>Have you ever tried listening to a 6 year old try and tell a story? *
>>
>
> Have you ever listened to a genius tell a story?
>
>
You're already at the singularity if it can be charming and brilliant to
millions of people simultaneously. I thought we were talking about getting
to the singularity.


>
>
>> >> If humans can do it then an AI can do it too because knowledge is
>>> just highly computed information, and wisdom is just highly computed
>>> knowledge.
>>>
>>
>> *> Sure, I can hand-wave things away too. "Highly computed" means what
>> exactly?*
>>
>
> It exactly means that a high number of FLOPS are necessary but not
> sufficient.
>
> > *I can reverse every word in this post. If I did that a million times
>> in a row it would be "highly computed" but it wouldn't result in knowledge,
>> much less wisdom.*
>>
>
> Obviously the computation must be done intelligently. I've had debates of
> this sort before and at this point it is traditional for my opponent to
> demand that I define "intelligently", and I will be happy to do so if you
> first define "define", and then define "define "define"" and then...
>

Don't worry, I won't ask you to do that. And I acknowledge that AIs will
eventually gain knowledge and wisdom. I just don't think it's as easy as
you're making it sound.


>
>
>> *> And I'm not talking about mere information, *
>>>>
>>>
>>> >> Mere information? Mere?!
>>>
>>
>> > As opposed to knowledge, wisdom, the ability to model aspects of the
>> world and simulate them, the ability to explain things, etc.
>>
>
> How do you expect to be able to do any of this without processing
> information?!
>

Where in the world did you get the idea that I think processing information
isn't necessary?


>
> >>You need AI, AGI is just loquacious technobabble used to make things
>>> sound more inscrutable.
>>>
>>
>> *> Doesn't seem all that loquacious to me. AGI just adds the word
>> "general",*
>>
>
> I think if Steven Spielberg's movie had been called AGI instead of AI some
> people today would no longer like the acronym AGI because too many people
> would know exactly what it means and thus would lack that certain aura of
> erudition and mystery that they crave. Everybody knows what AI means, but
> only a small select cognoscenti know the meaning of AGI. A classic case of
> jargon creep.
>

Do you really expect a discipline as technical as AI to not use jargon?
You use physics jargon all the time.


>
>
>> *> to highlight the fact that today's AI isn't able to apply its
>> intelligence to anything but narrow domains.*
>>
>
> Even human geniuses have rather narrow domains, Einstein loved the violin
> but was only a mediocre player.
>

The fact that perhaps the greatest theoretical physicist of all time also
played the violin proves, I think, the point that humans are generalists.
Humans may only *enjoy* narrow domains, but most are capable of some degree
of competency in any domain if they give it time and attention.


>
>
> *>>> We probably need to define what understanding/comprehension actually
>>>> means if we're going to take this much further.*
>>>>
>>>
>>> >> I don't think that would help one bit because fundamentally
>>> definitions are not important in language, examples are. After all,
>>> examples are where lexicographers get the knowledge to write the
>>> definitions for their book. So I'd say that "understanding" is the thing
>>> that Einstein had about physics to a greater extent than anybody else
>>> of his generation.
>>>
>>
>> *> Sure, that works for me. Einstein was able to predict and explain
>> things that nobody before him was able to. Prediction and explanation are
>> hallmarks of understanding.*
>>
>
> I agree. And the only way we can tell if somebody else has a greater
> understanding than we do is to see if they can answer questions or do
> things that we cannot.
>
>
> *>>>  to operate in the free-form world of humans, an AI needs to be able
>>>> to understand and react to a problem space that is constantly changing.
>>>> Changing rules (implicit and explicit), players, goals, dynamics, etc.*
>>>
>>>
>>> >> Well sure, but AIs have been able to do that for years, since the
>>> 1950's.
>>>
>>
>>
>> *> Care to give an example of AI in the 1950s that could do that?*
>>
>
> A tic-tac-toe board is constantly changing, but a computer in the 1950s
> could play that game perfectly. And a computer in the late 1950's or early
> 60's could play checkers well enough to beat most children and adult
> novice players.
>
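
The tic-tac-toe claim is easy to make concrete: exhaustive minimax over the full game tree (well under 9! = 362,880 move sequences) plays perfectly, and a search that small was feasible even on 1950s hardware. A hedged Python sketch (function names are illustrative):

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    # Return "X" or "O" if that player has three in a row, else None.
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score a position for X (+1 X wins, 0 draw, -1 O wins),
    # assuming both sides play perfectly from here on.
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = " "
    return max(scores) if player == "X" else min(scores)
```

With perfect play from the empty board the game is a draw, which is exactly why early programs could play it flawlessly.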

I was pretty clearly talking about a problem space in which there are
"changing rules (implicit and explicit), players, goals, dynamics, etc.".
Are you really suggesting that a 1950s AI that can play tic-tac-toe is
reacting to changing rules, goals, players, dynamics, etc.?


>  I think the larger point is that for a super intelligent AI humans and
> their interactions will not be at the top of its priority list, a
> superhuman AI will have bigger fish to fry than us; and even if the
> singularity doesn't happen for 1000 years (and I can't imagine why it would
> take that long) in 999 years it will still seem like it's a long way off,
> but more progress will be made in that last year than the previous 999
> combined. So whenever the singularity occurs it will come as a big surprise
> to most.
>

I don't disagree, but this started with me saying that programmer jobs are
safe for the time being... that there's much more progress to be made in AI
before that happens, because the world of human interaction is much more
vast and complex than most people acknowledge. There's a bias at work:
because as human adults we're all relatively competent in that domain, we
assume it can't be that hard.

Terren


>
>  John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA8Yn9qBCsc%3DzFM%2BnAZt26ywrYSVJRgeO5f_N8LHxBvyQA%40mail.gmail.com.
