On 10/1/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
> > Right, now consider the nature of the design I propose: the
> > motivational system never presents a single point of failure.
> > Everything that happens is multiply-constrained, and on a massive
> > scale: far more so than in our own brains. Once the system is set up
> > to behave according to a diffuse set of checks and balances (tens of
> > thousands of ideas about what is "right", rather than one single
> > directive), it can never wander far from that set of constraints
> > without noticing the departure immediately.
>
> That's essentially what I've been proposing, although the form appears
> different: design AIs so that they form a society, with the same kind
> of ability to monitor and police each other that human societies give
> us. This has the advantage that if enough of us build our AIs this way,
> it won't matter (at least it won't be catastrophic) that some other
> people create psychopathic ones.
>
> It's clearly valuable to think about how to do this inside a single system,
> but crucial to make sure we can do it for the population of all AIs as a
> whole.
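
The "diffuse checks and balances" idea can be made concrete with a small
sketch. Everything below is my own illustrative assumption, not anything
from the design itself: a candidate action is vetted by a large
population of independent constraints, and it is the dissent fraction,
not any single constraint, that decides, so no one constraint is a
point of failure.

    import random

    # A hypothetical sketch of a multiply-constrained motivational check:
    # a candidate action must satisfy a large, diffuse population of
    # independent constraints; no single constraint can approve or doom
    # an action on its own.

    NUM_CONSTRAINTS = 10_000  # "tens of thousands of ideas about what is right"
    TOLERANCE = 0.05          # fraction of dissenting constraints tolerated

    def make_constraint(seed):
        """One small, independent notion of 'right': here just a random
        linear test over the action's feature vector."""
        rng = random.Random(seed)
        weights = [rng.uniform(-1, 1) for _ in range(4)]
        return lambda features: sum(w * f for w, f in zip(weights, features)) > -0.5

    CONSTRAINTS = [make_constraint(i) for i in range(NUM_CONSTRAINTS)]

    def vet(features):
        """Approve an action only if nearly all constraints agree; a
        departure from the diffuse consensus shows up immediately as a
        rising dissent fraction, not as one tripped directive."""
        dissent = sum(1 for c in CONSTRAINTS if not c(features))
        return dissent / NUM_CONSTRAINTS <= TOLERANCE

    print(vet([0.1, 0.2, -0.1, 0.3]))   # mild action: almost surely approved
    print(vet([5.0, -5.0, 5.0, -5.0]))  # extreme action: almost surely flagged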
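The society-of-AIs proposal invites a similar sketch; again, every name
and threshold here is an assumption of mine, not part of the proposal.
Each agent audits a random sample of peers, and an agent flagged by
enough independent auditors is quarantined, so a psychopathic minority
cannot operate unnoticed and no central authority is needed.

    import random

    # A hypothetical sketch of mutual monitoring in a society of AIs:
    # each agent audits a random sample of peers, and any agent flagged
    # by enough independent auditors is quarantined. No central police
    # force is required, and a psychopathic minority is still caught.

    rng = random.Random(0)

    class Agent:
        def __init__(self, name, psychopathic=False):
            self.name = name
            self.psychopathic = psychopathic
            self.quarantined = False

        def behave(self):
            """Observable behavior score; psychopaths drift far from the norm."""
            return (5.0 if self.psychopathic else 0.0) + rng.gauss(0, 1)

        def audit(self, peer):
            """Flag a peer whose observed behavior lies far outside the norm."""
            return abs(peer.behave()) > 3.0

    def police(society, sample_size=10, threshold=3):
        """One round of policing: agents flagged by at least `threshold`
        independent auditors get quarantined."""
        flags = {a.name: 0 for a in society}
        for auditor in society:
            peers = [a for a in society if a is not auditor]
            for peer in rng.sample(peers, sample_size):
                if auditor.audit(peer):
                    flags[peer.name] += 1
        for a in society:
            if flags[a.name] >= threshold:
                a.quarantined = True

    society = [Agent("agent%d" % i, psychopathic=(i % 25 == 0)) for i in range(100)]
    police(society)
    print(sorted(a.name for a in society if a.quarantined))  # mostly the psychopaths

The interesting parameter is the same in both sketches: how many
independent checks must concur before the system acts, or before an
agent is restrained.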

In my opinion, this is the only workable approach, not to be confused
with a workable solution.

- Jef
