On Tue, Jul 9, 2024 at 7:22 PM Brent Meeker <[email protected]> wrote:

>> *what I call "a Turing firewall", software has no ability to know its
>> underlying hardware implementation, it is an inviolable separation of
>> layers of abstraction, which makes the lower levels invisible to the layers
>> above.*
>
>
> *That's roughly true, but not exactly. If you think of intelligence
> implemented on a computer it would make a difference if it had a true
> random number generator (hardware) or not.*


For most problems a software pseudo-random number generator is good enough,
though I admit that for some problems you might need a hardware true random
number generator. However, unless it was specifically told, I don't think an
AI would be able to intuitively tell whether it had a pseudo-random number
generator or a real one.
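To make that concrete, here is a minimal Python sketch (an illustration only, with an arbitrary seed of 42): the random module is a deterministic pseudo-random generator whose entire output is fixed by its seed, while os.urandom draws on the operating system's entropy pool, which may or may not be fed by real hardware randomness. Nothing in the output bytes themselves tells you which source produced them:

import os
import random

# Deterministic pseudo-randomness: the Mersenne Twister's output is
# completely determined by its seed, so this stream is reproducible.
random.seed(42)
pseudo_bytes = bytes(random.getrandbits(8) for _ in range(16))

# OS-supplied randomness: os.urandom reads the operating system's entropy
# pool, which may (or may not) be backed by a hardware generator.
os_bytes = os.urandom(16)

# Both hex strings look equally random; nothing in the bytes themselves
# reveals which generator produced them.
print(pseudo_bytes.hex())
print(os_bytes.hex())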

*> It would make a difference if it were a quantum computer or not.  *


Any function that is calculable can be computed by a Turing Machine, and
although it has never been formally proven, most think there is no problem
that a Turing Machine can NOT compute (such as finding Busy Beaver numbers)
that a quantum computer CAN. However, there are lots of problems that would
be easy for a quantum computer with only a hundred high-quality qubits to
solve, yet impractical for a conventional computer the size of Jupiter even
if it had a trillion years to work on them. And I doubt that an AI could
intuitively tell whether its inner machinery was using quantum computing
principles or not. Incidentally, Ray Kurzweil is skeptical that quantum
computers will ever be practical; all his predictions are based on the
assumption that they will never amount to much. If he's wrong about that,
then all his predictions will prove to be much too conservative.
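None of that bears on computability, since a Turing Machine can still simulate any quantum computation, just absurdly slowly; but a back-of-the-envelope Python sketch shows where the classical impracticality comes from, assuming the crudest approach of brute-force state-vector simulation at 16 bytes per complex amplitude:

# Brute-force simulation of n qubits stores 2**n complex amplitudes,
# so the memory (and work) doubles with every qubit added.
BYTES_PER_AMPLITUDE = 16  # complex128; an assumption about the representation

for n in (30, 50, 100):
    amplitudes = 2 ** n
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n:>3} qubits: {amplitudes:.2e} amplitudes, about {gigabytes:.2e} GB")

Thirty qubits already need about 17 gigabytes, fifty need tens of petabytes, and a hundred are out of reach for any direct simulation. Cleverer classical algorithms exist for particular circuits, so this is only a crude bound, but it shows how quickly the exponential wall goes up.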

*> And going the other way, what if it didn't have a multiply operation. *


That would be no problem as long as the AI still had the addition operation;
it could just do repeated additions, although that would slow things down.
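Here is a minimal Python sketch of that idea (purely illustrative; the operands 7 and 6 are arbitrary):

def multiply(a: int, b: int) -> int:
    # Multiply two non-negative integers using nothing but repeated
    # addition, standing in for hardware with no multiply instruction.
    total = 0
    for _ in range(b):   # perform "add a" exactly b times
        total = total + a
    return total

print(multiply(7, 6))  # prints 42, at the cost of b additions

A machine without a multiply instruction would in practice use something faster such as shift-and-add, which needs only about as many additions as b has bits, but either way multiplication reduces to addition.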

But you could start removing more and more operations until you got all the
way down to First Order Logic, and then an AI could actually prove its own
consistency. Kurt Gödel showed that a couple of years before he came up with
his famous Incompleteness Theorem, in what we now call Gödel's Completeness
Theorem. His later Incompleteness Theorem only applies to logical systems
powerful enough to do arithmetic, and you can't do arithmetic with nothing
but first order logic. The trouble is, you couldn't really say an Artificial
Intelligence was intelligent if it couldn't even pass a first grade
arithmetic test.
See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>
wo1
