On Sat, Mar 30, 2024, 11:13 AM Nanograte Knowledge Technologies <
[email protected]> wrote:

>
> I can see there's no serious interest here to take a fresh look at doable
> AGI. Best to then leave it there.
>

AI is a solved problem. It is nothing more than text prediction. We have
LLMs that pass the Turing test. If you can't tell whether you are talking
to a human, then either it is conscious and has free will, or you don't
have them either.

I joined this list about 20 years ago, when Ben Goertzel (OpenCog), Pei
Wang (NARS), YKY (Genifer), and Peter Voss (AIGO) were actively working on
AGI projects. But AGI is expensive. The reason nobody on the list solved it
is that it costs millions of dollars to train a neural network to predict
terabytes of text at $2 per GPU hour.
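To put a rough number on that (my back-of-envelope figures, not anyone's
actual bill): a frontier-scale training run on the order of a million GPU
hours works out to 1,000,000 hours x $2/hour = $2 million, before you count
failed runs, data preparation, or salaries.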

So yeah, I am interested in new approaches. Training human-level AI
shouldn't require more data than a human processes in a lifetime, which is
about 1 GB of text. That is the approach I have been following since I
started the Large Text Compression Benchmark in 2006, which became the
basis for the Hutter Prize.
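As a sanity check on the 1 GB figure (my arithmetic, with assumed rates):
if a person hears or reads on the order of 15-20 million words a year, then
a decade of language exposure is roughly 150-200 million words, or about
1 GB at around 6 bytes per word.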

Prediction measures intelligence. Compression measures prediction.
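
To make the second sentence concrete, here is a minimal sketch (mine, not
the benchmark's code) of the standard equivalence: a model that assigns
probability p to each symbol it sees can drive an arithmetic coder whose
output is about sum(-log2 p) bits, so better predictions mean a smaller
compressed file. The adaptive order-0 byte model below is just the simplest
possible predictor.

import math
from collections import defaultdict

def code_length_bits(data: bytes) -> float:
    # Adaptive order-0 model with Laplace smoothing: every byte value
    # starts with count 1, so no symbol ever has probability zero.
    counts = defaultdict(lambda: 1)
    total = 256                  # sum of counts over all 256 byte values
    bits = 0.0
    for b in data:
        p = counts[b] / total    # model's prediction for this byte
        bits += -math.log2(p)    # ideal (Shannon) code length in bits
        counts[b] += 1           # learn from the byte just seen
        total += 1
    return bits

sample = b"the quick brown fox jumps over the lazy dog " * 20
print(code_length_bits(sample) / 8, "bytes vs", len(sample), "raw bytes")

An arithmetic coder driven by the same counts would come within a few bytes
of that figure, which is why compressed size is an honest score for a
predictor: you can't cheat the log loss.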

