About what test you would propose for AGI or ASI.

On Sun, Jul 23, 2023, 11:25 PM <[email protected]> wrote:

> Hmm....Let me think.
> I know of 4 ways that come to mind how to test for AGI:
>
> Prediction Score, such as Perplexity or Lossless Compression.
> If it sounds and draws and sees etc like us.
> How much money it makes.
> How many clones it makes and how much it can repair itself.
>

1. We don't know whether any of the compressors in my LTCB benchmark predict
text at human level. In 1951 Shannon estimated the entropy of written
English by having people guess the next character. With 100 characters of
context, people were correct 80% of the time on the first guess, 7% on the
second, and less from there. This bounds the entropy to between 0.6 and 1.3
bits per character. That is still the best measurement we have. The top
entry on LTCB is 106 MB, or 0.85 bits per character on the 1 GB enwik9 test
file. To know if this is human level, we would have to modify the
compressor to output its predictions and see if its first guess is right
over 80% of the time. Even if it is, ChatGPT is better.
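For reference, the arithmetic behind those numbers can be sketched in a few lines of Python. The guess probabilities beyond the second guess are illustrative placeholders, not Shannon's actual measurements:

```python
import math

def bits_per_character(compressed_bytes: int, original_chars: int) -> float:
    """Convert a compressed size in bytes to average bits per input character."""
    return compressed_bytes * 8 / original_chars

# LTCB compresses the 10^9-byte enwik9 file; a 106 MB result gives:
bpc = bits_per_character(106_000_000, 1_000_000_000)
print(f"{bpc:.2f} bits/char")  # ~0.85

# Shannon-style bound: if p[i] is the probability that the (i+1)-th guess
# is correct, the entropy of that guess distribution is an upper bound on
# the entropy of the text. Only the first two values below are from the
# experiment described above; the tail is made up to sum to 1.
guesses = [0.80, 0.07, 0.04, 0.03, 0.02, 0.02, 0.01, 0.01]
upper_bound = -sum(p * math.log2(p) for p in guesses)
print(f"{upper_bound:.2f} bits/char upper bound")
```

With these illustrative numbers the upper bound comes out near the top of Shannon's 0.6 to 1.3 range, and the 0.85 bits per character from LTCB sits inside it, which is why a direct comparison of first-guess accuracy would be needed to settle the question.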

2. Yes. Human skills are required.

3. AI has no property rights, and we had better not give them any. AIs are
not people, and it should be illegal to program one to claim to be human or
to have feelings. But if you count the earnings of their owners, world GDP
is about 100 times higher than it was in 1800, thanks to machines doing 99%
of the work.

4. Reproduction is a requirement for life, not for intelligence, unless you
want to measure reproductive fitness as intelligence the way evolution
does. Is gray goo our goal?


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T91a503beb3f94ef7-M5bab87df84d9b925b8ae7e8b
Delivery options: https://agi.topicbox.com/groups/agi/subscription