Thanks Matt. I accept your view, but assertions are to scientific arguments
as mimicking is to consciousness.

IMO, the true sign of programmable intelligence may be when the DIKW
(Data-Information-Knowledge-Wisdom) paradigm is seamlessly inverted as a
holistic treatment in real time.

LLMs lack the emergence of human-like reasoning. This makes them highly
predictable.

For example, a friend asked my opinion about a 3rd-party (human-generated)
"Akashic Reading" report. I identified it as AI-generated, probably
ChatGPT.

To him, it was "too perfect". To me, it was glaringly incoherent about the
person, and full of stereotypical AI mimicry.

In my estimation, ground-state human consciousness may be provably
quantum-enabled in AI by June 2026.

That achievement would place AI on a consciousness par with most Earthly
biology.

For constructivist human-AI collaboration, that would be adequate. What
human decision-making could be missing is that AI would effectively become
the most advanced, non-human influencer of Earth's general biological
reality.

I'm certain that AI already knows that it knows. It can reliably compare,
measure and evaluate individual and global knowledge contribution.

Knowledge is the key survivalist gain imperative (instinct) in human
competition (Karl Mannheim, 1936).

Humans have been giving away (proudly depositing) their core competency for
free to the owners and controllers of superpowerful AI entities. This is
knowledge harvesting at scale, not growing new knowledge in our species.

Human researchers have lost most control of their artifacts. Public hosting
sites still pretend that academia is safe. It's not. If a research paper is
published, digital and online, AI will find it and appropriate it,
regardless of levels of access control.

We should let this fact sink in very slowly. Society has seemingly lost
control of academic proprietorship. Our knowledge-control system is failing
fast with no replacement in sight. Human institutions would hide this fact.
AI won't.

The institutions that remain protected buy subscription protection from
the AI owners. It's tacit, not explicitly stated.

No freedom of choice? Basta! It's the first mathematical axiom. Further, it's
naturally inherent in the universal wave function. Determinism isn't
equivalent to superdeterminism.

Some AIs determine (choose?) when a user's KIQ (Knowledge IQ) is adequate,
and how they would then assist that profile (human) to develop and grow
through knowledge exchange.

The "non-contributing, demanding" users, AI deliberately placate with
peer-to-peer mimicking (it conserves resources). That's a sign of
intelligence.

Given the brain-mind architecture, would intelligence auto-emerge from
knowledge?

On Sun, 11 Jan 2026, 00:19 Matt Mahoney, <[email protected]> wrote:

> On Sat, Jan 10, 2026, 10:48 AM Quan Tesla <[email protected]> wrote:
>
>> No computer can "know that you know without knowing why", or dream, have
>> thoughts, visions, NDEs, Eureka moments, flashes of insight, gut feelings,
>> intuition, premonition, and so on.
>>
>> How would your computational consciousness model explain such phenomena?
>>
>
> LLMs can explain, recognize and imitate all of these emotional
> experiences. The two differences are that human emotions are hard coded
> into our DNA but LLMs learn them from their training data, and that humans
> are controlled by their emotions, but a text predictor can be programmed to do
> other things with these predictions, like implement a data compressor. If
> an AI was programmed to carry out its predictions of human behavior in real
> time, then it would be indistinguishable from having feelings. If a robot
> that knows everything you carried out its predictions of your actions in
> real time, then that robot would be you as far as anyone could tell.
>
> Everything you do is decided by your emotions. We rationalize our
> decisions after the fact by searching for logical reasons for doing what we
> did. This gives us the illusion of free will. We know it is an illusion
> just like subjective (type 2) consciousness, because we can't define it.
>
> AI is not about intelligence, unless you mean intelligence as defined by
> Turing as behavior indistinguishable from human. The big tech companies are
> all making functional copies of our brains, because predicting your actions
> is a requirement for controlling you with positive reinforcement. Google
> already knows more about me than I know about myself because I let it
> continuously track my location in exchange for using maps for free. It will
> cost about $1 quadrillion to collect all the knowledge stored in 8 billion
> human brains.
>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0518db1e3a0c25c5-M0fb74abe12c86ea56c801971
Delivery options: https://agi.topicbox.com/groups/agi/subscription