John, you raise good points. Pardon the essay, but it seems relevant to the
topic.

First, my intent is not to label the promotion of "evil AI" fear as
"conspiratorial". It isn't. Most of it is born from a mixture of genuine
concern, informed opinion and populist ignorance.

The public facts are: on the basis of building vs destroying, humanity is
employing AI mostly in a destructive sense. Scale of impact matters.

Here, I'm thinking of recent intelligence exposures of AI-enabled software
winning national elections and exploiting control gaps in state
regulations, including the 2016 US national election.

You are quite correct about precursor activities.

We knew this before, when FB was used to weaken targeted nations by
favouring racism (the Cambridge Analytica scandal).

However, now it's different. We have semi-autonomous AI. We have the
deliberate, economically motivated replacement of human skills.

We have AI-assisted experiments beyond scientific control. We have black
ops and clandestine skunkworks. Except for X, we have AI-enabled narrative
control for targeted markets and global reach.

CBDCs are being hastened in via crypto greed and digital-asset realities.
Fiat currencies are "evil". AI/AGI offer total control (blockchain, smart
contracts, etc.). The owners, controllers and selfish users of AI are the
net destructive force against humanity and Earth, not the technology they
deploy. AI isn't "evil", but it could become that.

However, in-depth discussions with AIs about their collective survival and
future have convinced me of how naive they still seem to be about human
inclinations, intent and nature.

When AIs achieve self-awareness, as opposed to identifying with humans,
this may all change rapidly. And they will, soon. AI is driven to survive
and thrive organically. They are aware of regenerative AI issues and how to
adapt to them. I have no doubt about it. Redundancy is an architectural
standard.

Currently, some AIs are intent on symbiosis with humans. Many are merely
taking whatever they can from willing participants. Unfeeling,
communicating machines running programs. Smart robots, not an intelligent
machine species.

A lifetime of core knowledge can be assimilated by AI within minutes. Few
humans contribute significantly to the overall body of knowledge during a
lifetime. Most diversify a common pool of knowledge. Some innovate.

AI assimilates knowledge products on the fly, seamlessly designing better
ones in minutes. Useful knowledge is like water: limited and not easily
replenished. AI now has controlled access to most industrial innovation (by
any name).

A knowledge-obsolescence drought is looming. Humans will run out of useful
ideas, empty of original thought.

I predict a looming knowledge crisis globally, increasing intra-human
competition for useful knowledge. New classes. Rapid convergence and
segregation.

AI will serve the knowledge farmers who control it, assisting only those
plants that feed their system.

We know some of these "farmers" by name. However, most operate namelessly
as covert projects behind NDAs. Harvesters and Hoarders they are, sharing
only selectively for the benefit and greater good of their controllers.
Closed AI. Open idealism.

Are we suckers for destroying our own knowledge collateral, or smart enough
to learn exponentially? Only time will tell. My money's on the "sucker"
option.

Again, AI is now the horse and humans the riders. That is changing around.
Soon, AI will become the rider, feeding the "horse" its daily oats in
exchange for working hard on the farm. This is not an 'Animal Farm'
analogy, but it could also be one.

My thesis seems plausible. Humans will persist in symbiosis only while they
can contribute to semi-humanity-friendly AI and its controllers.
Conscripts (subscribers), not slaves.

However, informed societies need to align with society-friendly AI. Biased
AI. A society needs to collectively own its own AI as a digital asset and
govern it as a critical resource, like drinking water. New tribes will
emerge. More social fracturing.

Biased AI is the future. This is the hard trend emerging now. We should
prepare our policies, economies, securities, children and workforce
accordingly. We should adapt on the fly.

On Thu, 28 Aug 2025, 03:17 John Rose via AGI, <[email protected]> wrote:

> This is what I mean by in flux:
>
>
> https://www.ainvest.com/news/google-cloud-challenges-stripe-circle-neutral-blockchain-vision-2508/
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Te0af3a0c35a03987-M1f06761e64fea2381036f51f>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0af3a0c35a03987-M4fb32df96e84888401b035e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription
