On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam <[email protected]>
wrote:

*>>> AlphaCode can potentially improve its code, but to what end?  What
>>> problem is it trying to solve?  How does it know?*
>>>
>>
>> >> I don't understand your questions
>>
>
> > *What part is confusing?*
>

I'll make you a deal: I'll tell you "what problem it is trying to solve" if
you first tell me how long a piece of string is. And if you don't want to do
that, just rephrase the question more clearly.


>> Yeah with a human that process takes many decades, but even today
>> computers can process many many times more information than a human can,
>> not surprising when you consider the fact that the signals inside a human
>> brain only travel about 100 miles an hour while the signals in a computer
>> travel close to the speed of light, 186,000 miles a second.
>>
>
> *> Much of our learning takes place via interactions with other humans,
> and those cannot be sped up.*
>

Sure it can be: an AI could have a detailed intellectual conversation with
1,000 people at the same time, or a million, or a billion.
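Just to put a number on the speed comparison quoted above, here's a quick back-of-the-envelope check (using the figures as quoted in the thread: ~100 mph for signals in a human brain, 186,000 miles per second for light):

```python
# Back-of-the-envelope check on the signal speeds quoted above:
# ~100 mph for signals in a human brain, 186,000 miles/s for light.
neural_mph = 100
light_miles_per_s = 186_000
neural_miles_per_s = neural_mph / 3600  # convert mph to miles per second
ratio = light_miles_per_s / neural_miles_per_s
print(f"light-speed signals are roughly {ratio:,.0f}x faster")
# roughly 6,696,000x faster
```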

* > I'm not talking about facts and information,*
>

You may not be talking about facts and information, but I sure as hell am,
because information is as close as you can get to the traditional idea of
the soul without entering the realm of religion or some other form
of idiocy.

> *> but about theories of mind, understanding human motivations, forming
> and testing hypotheses about how to get goals met by interacting with other
> humans, and other animals for that matter.*
>

If humans can do it then an AI can do it too because knowledge is just
highly computed information, and wisdom is just highly computed knowledge.

*> And I'm not talking about mere information, *
>

Mere information? Mere?!

*> but models that can be simulated in what-if scenarios, true
> understanding. You need real AGI.*
>

You need AI, AGI is just loquacious technobabble used to make things sound
more inscrutable.

*> We probably need to define what understanding/comprehension actually
> means if we're going to take this much further.*
>

I don't think that would help one bit, because fundamentally definitions are
not important in language; examples are. After all, examples are where
lexicographers get the knowledge to write the definitions for their books.
So I'd say that "understanding" is the thing that Einstein had about
physics to a greater extent than anybody else of his generation.

*> Regardless, to operate in the free-form world of humans, an AI needs to
> be able to understand and react to a problem space that is constantly
> changing. Changing rules (implicit and explicit), players, goals, dynamics,
> etc.*
>

Well sure, but AIs have been able to do that for decades, ever since the 1950s.

> *Is that possible to do without real understanding?*
>

No. If I can answer some questions and perform some tasks in a certain area,
then I could be confident in saying I have some "real understanding" of that
area of knowledge, and if you can answer more questions and perform more
tasks in that area than I can, then I would say you have an even greater
understanding than I do, and I don't care if your brain is wet and squishy
or dry and hard.

>> As I've mentioned before, the entire human genome is only 750 megabytes,
>> the new Mac operating system is about 20 times that size, and the genome
>> contains instructions to build an entire human body not just a brain, and
>> the genome is loaded with massive redundancy; so whatever the algorithm is
>> that the brain uses to extract information from the environment there is
>> simply no way it can be all that complicated.
>>
>
> >
> *The thing that makes intelligence intelligence is not simply extracting
> information from the environment.*
>

How do you figure that? If human intelligence doesn't come from the 750 MB
in our genome and it doesn't come from the environment, then where does this
secret sauce come from? From an invisible man in the sky? If so, then why
does He only give it to brains that are wet and squishy?
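For what it's worth, the 750 megabyte figure is easy to sanity-check from first principles (assuming the usual estimate of ~3.1 billion base pairs and 2 bits per base, uncompressed):

```python
# Sanity check on the ~750 MB genome figure:
# ~3.1 billion base pairs, 2 bits each (A, C, G, or T), uncompressed.
base_pairs = 3.1e9
bits = 2 * base_pairs
megabytes = bits / 8 / 1e6
print(f"~{megabytes:.0f} MB")  # ~775 MB, the same ballpark as 750 MB
```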


> >> Machines move so fast that at breakfast the singularity could look to
>> a human like it's a very long way off, but by lunchtime the singularity
>> could be ancient history.
>>
>
> *> Do you think the singularity can occur with an AI that doesn't have
> real understanding?*
>

Of course not! I have no objection to the term "real understanding"; I only
object when the term is used in a silly way, such as claiming that when I
accomplish something in a certain field it demonstrates "real understanding,"
but when an AI can do things in that same field even better and faster than I
can, it demonstrates nothing but a mindless reflex because its brain is dry
and hard and not wet and squishy.

 John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2RpzX4Wniwx%3DJxy7LZoo2Us_CrxsxKU7xePAy06Pp4DQ%40mail.gmail.com.