That paper is founded on a flawed understanding of intelligence --
specifically, it misrepresents the rigorous theoretical work of Legg and
Hutter. The misunderstanding is evident in the following paragraph about
definitions of intelligence:

... Legg and Hutter[Leg08] propose a goal-oriented definition of artificial
general intelligence: Intelligence measures an agent’s ability to achieve
goals in a wide range of environments. However, this definition does not
necessarily capture the full spectrum of intelligence, as it excludes
passive or reactive systems that can perform complex tasks or answer
questions without any intrinsic motivation or goal. One could imagine as an
artificial general intelligence, a brilliant oracle, for example, that has
no agency or preferences, but can provide accurate and useful information
on any topic or domain.

An agent that answers questions has an implicit goal of answering
questions. The "brilliant oracle" has the goal of providing *accurate and
useful information on any topic or domain.*

This all fits within Hutter's rigorous AIXI mathematics -- and for this
theory it is more like falling off a log than anything that lies beyond
it, for a very simple reason:

AIXI has two components: an induction engine and a decision engine. The
induction engine has one job: to be an oracle for the decision engine.
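
Schematically (from memory, and only roughly), Hutter's AIXI action
selection is:

    a_k := argmax_{a_k} \sum_{o_k r_k} ... \max_{a_m} \sum_{o_m r_m}
           [r_k + ... + r_m] \sum_{q : U(q,a_1..a_m) = o_1 r_1..o_m r_m} 2^{-\ell(q)}

The inner sum over programs q weighted by 2^{-\ell(q)} is the induction
engine: Solomonoff induction over the environments consistent with the
history so far. The nested argmax/sums over future actions and percepts
are the decision engine: expectimax planning against that predictive
distribution.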

So, all one has to do to degenerate AIXI into a "brilliant oracle" is
replace the decision engine with a human who wants answers.

The fact that the authors of this paper don't grasp this -- very
well-established prior work in AGI -- disqualifies them.

Here's an old but still very current lecture by Legg describing the
taxonomy of "intelligence" he developed and its relationship to AGI
<https://youtu.be/0ghzG14dT-w?t=378>.


PS: I’ve noticed that the number of authors on an LLM paper seems to
increase with the social consequences -- almost as though no one wants to
be caught out alone doing profound damage to the world.

Here's another paper where the authors fail to draw a distinction between
ethics and morality -- a distinction at the very foundation of the Western
canon.  These "researchers" are best thought of as a mob of religious
zealots living inside a space colony created by a prior culture, tearing
down its life support systems for being "immoral".

“The Capacity for Moral Self-Correction in Large Language Models”
<https://arxiv.org/abs/2302.07459>

Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas I. Liao, Kamilė
Lukošiūtė, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson,
Danny Hernandez, Dawn Drain, Dustin Li, Eli Tran-Johnson, Ethan Perez,
Jackson Kernion, Jamie Kerr, Jared Mueller, Joshua Landau, Kamal Ndousse,
Karina Nguyen, Liane Lovitt, Michael Sellitto, Nelson Elhage, Noemi
Mercado, Nova DasSarma, Oliver Rausch, Robert Lasenby, Robin Larson, Sam
Ringer, Sandipan Kundu, Saurav Kadavath, Scott Johnston, Shauna Kravec,
Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Henighan,
Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Ben Mann, Dario Amodei,
Nicholas Joseph, Sam McCandlish, Tom Brown, Christopher Olah, Jack Clark,
Samuel R. Bowman, Jared Kaplan

On Wed, Mar 22, 2023 at 11:58 PM <[email protected]> wrote:

> "Given the breadth and depth of GPT-4's capabilities, we believe that it
> could reasonably be viewed as an early (yet still incomplete) version of an
> artificial general intelligence (AGI) system."
>
> https://arxiv.org/abs/2303.12712
