Verses AI published an open letter in the NY Times (as a full-page ad) that 
criticizes generative AI and proposes an alternative.  I agree with their 
criticism, but I don't know enough about the alternative to make any further 
comments.  If anybody has difficulty accessing the website below, an excerpt 
without the graphics follows.

In any case, it confirms my basic point:   the technology based on LLMs is 
valuable for many purposes, especially translations between and among 
languages, natural and artificial.  But there is a huge amount of intelligent 
behavior (by humans and other living things) that it cannot reproduce.  Google 
and others supplement LLMs with other technologies.

How much and what kind of other technology is needed remains an open 
question.  The reference below is one suggestion.

John
_______________________

https://medium.com/aimonks/verses-ai-announces-agi-breakthrough-invokes-open-ais-assist-clause-7e657bcbce60

In an unprecedented move, VERSES AI today announced a breakthrough revealing a 
new path to AGI based on ‘natural’ rather than ‘artificial’ intelligence, and 
took out a full-page ad in the NY Times with an open letter to the Board of 
OpenAI appealing to its stated mission “to build artificial general 
intelligence (AGI) that is safe and benefits all of humanity.”

Specifically, the appeal invokes a clause in the OpenAI Board’s charter.  In 
pursuit of its mission “to build artificial general intelligence (AGI) that is 
safe and benefits all of humanity,” the charter acknowledges the concern that 
late-stage AGI could become a “competitive race without time for adequate 
safety precautions. Therefore, if a value-aligned, safety-conscious project 
comes close to building AGI before we do, we commit to stop competing with and 
start assisting this project.”

What Happened?
VERSES has achieved an AGI breakthrough on its alternative path to AGI, Active 
Inference, and is appealing to OpenAI “in the spirit of cooperation and in 
accordance with [their] charter.”

According to their press release today, “VERSES recently achieved a significant 
internal breakthrough in Active Inference that we believe
addresses the tractability problem of probabilistic AI. This advancement 
enables the design and deployment of adaptive, real-time Active
Inference agents at scale, matching and often surpassing the performance of 
state-of-the-art deep learning. These agents achieve superior
performance using orders of magnitude less input data and are optimized for 
energy efficiency, specifically designed for intelligent computing
on the edge, not just in the cloud.”
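
[The article does not spell out what an Active Inference agent actually 
computes.  As a rough illustration only, the toy Python sketch below shows the 
general idea usually associated with the term: a discrete agent that updates 
its beliefs about hidden states by Bayes' rule and chooses the action with the 
lowest expected free energy (risk plus ambiguity).  Every matrix, preference, 
and observation here is invented for illustration; this is not VERSES's code 
or method.]

import numpy as np

# A[o, s]: likelihood of observation o given hidden state s (made up)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# B[a][s', s]: transition to state s' from state s under action a (made up)
B = [np.array([[0.9, 0.1],
               [0.1, 0.9]]),    # action 0: mostly stay put
     np.array([[0.1, 0.9],
               [0.9, 0.1]])]    # action 1: mostly switch state

# C[o]: log-preferences over observations (the agent "prefers" observation 0)
C = np.log(np.array([0.95, 0.05]))

def bayes_update(prior, obs):
    # Posterior over hidden states after seeing observation obs
    post = A[obs] * prior
    return post / post.sum()

def expected_free_energy(belief, action):
    # Risk (divergence of predicted outcomes from preferences) plus ambiguity
    qs = B[action] @ belief            # predicted state distribution
    qo = A @ qs                        # predicted observation distribution
    risk = np.sum(qo * (np.log(qo + 1e-12) - C))
    ambiguity = -np.sum(qs * np.sum(A * np.log(A + 1e-12), axis=0))
    return risk + ambiguity

belief = np.array([0.5, 0.5])          # flat prior over hidden states
for obs in [1, 1, 0]:                  # an invented observation sequence
    belief = bayes_update(belief, obs)
    G = [expected_free_energy(belief, a) for a in range(len(B))]
    action = int(np.argmin(G))         # act to minimize expected free energy
    belief = B[action] @ belief        # roll beliefs forward under that action
    print(f"obs={obs}  belief={belief.round(2)}  action={action}")

[Running the sketch prints the agent's updated belief and chosen action after 
each observation; it is meant only to make the term "Active Inference" 
concrete.]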

In a video published as part of today’s announcement, titled “The Year in AI 
2023,” VERSES reviews the remarkable acceleration of AI over the past year and 
what it suggests about the current path from Artificial Narrow Intelligence 
(where we are now) to Artificial General Intelligence, AGI (the holy grail of 
AI automation).  It notes that the major players in deep-learning technology 
have publicly acknowledged over the course of 2023 that “another breakthrough” 
is needed to get to AGI.  For many months now, there has been a growing 
consensus that machine learning/deep learning alone cannot achieve AGI; Sam 
Altman, Bill Gates, Yann LeCun, Gary Marcus, and many others have publicly 
said so.

Just last month, Sam Altman declared at the Hawking Fellowship Award event at 
Cambridge University that “another breakthrough is needed,” in response to a 
question about whether LLMs are capable of achieving AGI.
[See graphic in article]

Even more concerning are the potential dangers of proceeding in the direction 
of machine intelligence, as evidenced by the “Godfather of AI,” Geoffrey 
Hinton, a pioneer of backpropagation and deep learning, resigning from Google 
earlier this year over his concerns about the potential harm to humanity of 
continuing down the path to which he had dedicated half a century of his life.

So What Are The Potential Dangers of Deep Learning Neural Nets?
The problems behind these potential dangers of continuing down the current 
path of generative AI are numerous and quite serious:
· Black box problem
· Alignment problem
· Generalizability problem
· Hallucination problem
· Centralization problem — one corporation owning the AI
· Clean data problem
· Energy consumption problem
· Data update problem
· Financial viability problem
· Guardrail problem
· Copyright problem
All Current AI Stems from This ‘Artificial’ DeepMind Path
[see graphics and much more of this article]
. . .