On Wed, Oct 18, 2023 at 1:48 AM <[email protected]> wrote:

> On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote:
>
> It's not clear to me that there will be many AIs vs one AI as you claim.
> AIs can communicate with each other much faster than humans, so they would
> only appear distinct if they don't share information (like Google vs
> Facebook). Obviously it is better if they do share. Then each is as
> intelligent as they are collectively, like the way you and Google make each
> other smarter.
>
>
> You are talking about a species that collectively shares information in a
> manner similar to telepathy.
>

My understanding of communication between agents that don't share a common
model of reality (i.e., their approximations of the algorithmic information of
their observations differ) is that they would need to engage in something like
Socratic dialogues with each other in order to educate each other optimally.
This is not telepathy.  It is the construction of a pairwise-unique dialect for
communicating observations and their algorithmic encodings.  Matt can speak for
himself and his AGI design, of course, but this is how I view computer-based
education.  Having said that, this presumes the utility function (i.e., the
loss function) of these AGIs is the size of their respective models of reality,
constrained by the computational resources available to them.  In other words,
it presumes truth-seeking AGIs.  If other utility functions obtain, then all
bets are off, since deception and/or withholding of valuable information then
also obtains.
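
To make the "truth seeking as model-size minimization" point concrete, here is
a toy sketch of my own (not Matt's design, and not anyone's actual AGI): it
uses zlib compression length as a crude, computable proxy for algorithmic
information, and the observation streams are made up for illustration.  It
shows why two agents whose loss is the size of their model of reality gain
from sharing observations: the redundant structure they both see gets encoded
once, and the saving is roughly their mutual information.

import zlib

def description_length(data: bytes) -> int:
    # Compressed length: a rough, computable stand-in for model size
    # (a loose upper bound on the algorithmic information of the data).
    return len(zlib.compress(data, 9))

# Hypothetical observation streams with substantial overlap (shared reality).
agent_a = b"the apple falls at 9.8 m/s^2 " * 40 + b"apples are red or green " * 10
agent_b = b"the apple falls at 9.8 m/s^2 " * 40 + b"oranges are orange " * 10

separate = description_length(agent_a) + description_length(agent_b)
pooled = description_length(agent_a + agent_b)

print("separate models:", separate, "bytes")
print("pooled model:   ", pooled, "bytes")
# The pooled model is smaller than the two separate ones because the shared
# regularity is encoded only once; that saving is the payoff truth-seeking
# agents get from exchanging observations rather than withholding them.

Obviously a real system would be negotiating models of the world rather than
running zlib, but the direction of the incentive is the same under that
loss function.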

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M1b436b621f052c0b2ea5feb8