On Wed, Oct 28, 2015 at 11:18 PM, Russell Standish <li...@hpcoders.com.au>
wrote:

> On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
> > At some level, an algorithm cannot be held responsible for its actions
> > because it was doing the only thing it could do, what it was programmed to
> > do. At some point between a simplistic algorithm and a human-level AI,
> > however, we seem able to assign responsibility/culpability. What does an
> > algorithm minimally have to have before it reaches this point?
> >
> > The ability to learn?
> > Understanding of the consequences of its actions?
> > Rights that it cares about?
> > Personhood?
> >
>
> None of those things are required to assign legal responsibility. For
> example, a company can be held legally responsible, but a company is
> not a person, nor is it conscious, nor need it learn (although
> companies can learn).
>
> I think all that is required is a sense of agency: that holding
> something responsible is sufficient to affect the actions of that
> agent.
>
> If a robot can process the notion of responsibility such that its
> actions will be affected by it, then yes it can be held responsible
> regardless of whether any conscious understanding exists.
>

What are the minimum requirements to program agency?

It seems to me that if a program cannot learn/alter itself, it cannot be held
responsible, for it is only doing what it was programmed to do.
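As a rough illustration of that distinction, here is a toy Python sketch (my
own, with made-up class names and an arbitrary penalty scheme), contrasting a
fixed program whose behavior blame cannot touch with a minimally adaptive one
whose future actions shift when it is penalized:

import random

class FixedProgram:
    """Always takes the same action; penalties cannot alter its behavior."""
    def act(self):
        return "defect"
    def hold_responsible(self, penalty):
        pass  # no internal state to update, so blame changes nothing

class AdaptiveAgent:
    """Keeps action weights and shifts them when penalized."""
    def __init__(self):
        self.weights = {"cooperate": 1.0, "defect": 1.0}
        self.last_action = None
    def act(self):
        actions, weights = zip(*self.weights.items())
        self.last_action = random.choices(actions, weights=weights)[0]
        return self.last_action
    def hold_responsible(self, penalty):
        # Penalizing the last action lowers its weight, so blame
        # actually feeds back into future behavior.
        self.weights[self.last_action] = max(
            0.1, self.weights[self.last_action] - penalty)

if __name__ == "__main__":
    agent = AdaptiveAgent()
    for _ in range(100):
        if agent.act() == "defect":
            agent.hold_responsible(penalty=0.5)
    print(agent.weights)  # "defect" weight driven down toward the floor

In that minimal sense, only the second one can, as Russell puts it, "process
the notion of responsibility such that its actions will be affected by it."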

Jason
