It is an interesting paper, but even though it references Tononi's
integrated information theory, I don't think it says anything about
consciousness. That is just the name they gave to part of their model.
Their "consciousness vector" is the concatenation of vectors representing
perception and short- and long-term memory, so really just a state-machine
vector (sketch below). They show that their model, which also includes
models of space and time, improves the task completion rate of robots
given tasks in natural language via LLMs. It also shows just how far
advanced China is in the AI race.
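
To be concrete, here is a minimal sketch of what such a "consciousness
vector" amounts to, assuming the components are just fixed-length numpy
arrays (the names and sizes are my own, not the paper's):

import numpy as np

# Hypothetical component vectors; sizes are arbitrary, not from the paper.
perception = np.random.rand(128)         # current sensory embedding
short_term_memory = np.random.rand(64)   # recent context
long_term_memory = np.random.rand(256)   # persistent knowledge state

# The "consciousness vector" is just their concatenation, i.e. the
# agent's full state vector at this time step.
consciousness_vector = np.concatenate(
    [perception, short_term_memory, long_term_memory])
print(consciousness_vector.shape)  # (448,)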

Any LLM that passes the Turing test is conscious as far as you can tell, as
long as you assume that humans are conscious too. But this proves that
there is nothing more to consciousness than text prediction. Good
prediction requires a model of the world, which can be learned given enough
text and computing power, but can also be sped up by hard-coding some basic
knowledge about how objects move, as the paper shows.

If you are looking for answers to the mystery of phenomenal consciousness,
you need to define it first, ideally with a test that applies equally to
humans, animals, and machines. Of course nobody does this (including the
authors), because there isn't such a test. We define consciousness as the
difference between a human and a philosophical zombie, and we define a
zombie as exactly like a human in every observable way except that it
lacks consciousness. If you poke one, it will react like a human and say
"ouch" even though it doesn't experience pain.

But of course we are conscious, right? If I poke you in the eye, are you
going to tell me it didn't hurt? Then what is it?

What you actually have is a sensation of consciousness. It feels like
something to think or recall memories or solve problems. Likewise, qualia
are what perception feels like, and free will is what action feels like.
These feelings are usually a net positive, which motivates us not to lose
them by dying, and that results in more offspring.

Feelings have a physical explanation that we know how to encode in
reinforcement learning algorithms. If you do X and that is followed by a
positive (negative) signal, then you are more (less) likely to do X again.
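
To spell that out, here is a minimal sketch of that kind of update for a
trivial two-action agent (the action names, learning rate, and exploration
rate below are arbitrary, not taken from the paper):

import random

# Estimated value of each action, starting neutral.
values = {"X": 0.0, "Y": 0.0}
learning_rate = 0.1

def choose_action(epsilon=0.1):
    # Mostly pick the action that has felt best so far; occasionally explore.
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

def update(action, reward):
    # A positive reward nudges the estimate up, making that action more
    # likely next time; a negative reward nudges it down.
    values[action] += learning_rate * (reward - values[action])

Call update("X", 1.0) a few times and choose_action() starts preferring X;
call it with -1.0 and it stops. That is all the update rule does.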


On Sat, Jun 15, 2024, 8:34 PM John Rose <[email protected]> wrote:

>
> For those of us pursuing consciousness-based AGI this is an interesting
> paper that gets more practical... LLM agent based but still v. interesting:
>
> https://arxiv.org/abs/2403.20097
>
>
> I meant to say that this is an exceptionally well-written paper just
> teeming with insightful research on this subject. It's definitely worth a
> read.
>
