Re: Fwd: Responsibility and Personhood

2015-10-30 Thread Russell Standish
On Fri, Oct 30, 2015 at 03:28:09PM -0400, John Mikes wrote:
> You wrote:
> 
> 
> 
> 
> *"Then you have not met an algorithm whose output is directly influenced
> by the environment. Most robots are agents in this sense. If the agents
> are processing and reacting to rules, then those agents can be punished
> for breaking the rules." *
> 
> As I understand it: a 'robot' is not an 'algorithm'; I still cannot imagine a
> set of (math?) rules with an OUTPUT.

A robot is an embodied algorithm.

Maybe your imagination is somehow not scaling to the complexities of
things like machine learning algorithms. I'm not saying learning is
necessary for responsibility to apply - an algorithm that is
programmed to follow a database of laws may be held legally
responsible if a) it has control of resources that can be used to
mitigate the results of the transgression, or b) it may be legally
required to have its algorithm fixed, or its legal database updated.
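To make the point concrete, here is a minimal sketch (my own illustration in Python; the rule names and the update mechanism are invented for the example): an agent whose output is directly influenced by the environment, and whose legal database can be updated as a remedy.

```python
# Illustrative sketch only: a rule-following agent.  The rule names
# ("red_light" etc.) are invented for the example.

laws = {"red_light": "stop"}            # the agent's legal database

def agent_step(percept):
    """The output is directly influenced by the environment (the percept)."""
    return laws.get(percept, "proceed")

print(agent_step("red_light"))          # stop

# Remedy: after a transgression, the legal database is updated.
laws["school_zone"] = "slow"
print(agent_step("school_zone"))        # slow
```

The point of the sketch is only that such an agent's behaviour can be corrected through its rules, which is what makes sanctioning it meaningful at all.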

> Maybe I do not understand what an algorithm is? What kind of an "AGENT" is
> it? How does it ACT? As I think, algorithms have to be followed by an
> agent.
> 

Not all algorithms are agents, but agents can be (instantiated)
algorithms. If computationalism is true, then all agents are algorithms.

There are Wikipedia resources, etc., that explain the concept of an agent
better than I can, but it is a conglomeration of a number of
concepts, such as belief, intention and desire.

Cheers

> And thanks for your wisdom on 'personhood'.
> 
> JM
> 
> On Thu, Oct 29, 2015 at 6:21 PM, Russell Standish 
> wrote:
> 
> > On Thu, Oct 29, 2015 at 03:28:22PM -0400, John Mikes wrote:
> > > Jason, Russell, Stathis, Brent
> > >
> > > I am not a Platonian, not a physicist and not a believer, just an
> > agnostic
> > > (in my OWN sense of the term). I don't believe that an algorithm
> > *"DOES"*,
> > > or *"ACTS" * so it cannot be 'held responsible'. We, the People do all
> > > this.
> > >
> > > *Russell's* 'sense of agency' requires more than included in (my) a set
> > of
> > > computing rules in an algorithm.
> >
> > Then you have not met an algorithm whose output is directly influenced
> > by the environment. Most robots are agents in this sense. If the agents are
> > processing and reacting to rules, then those agents can be punished
> > for breaking the rules.
> >
> > > The remark "company is not a person" is
> > > lately debated by the USA Supreme Court statement that a COMPANY IS A
> > > PERSON (just as MONEY is FREE SPEECH!) - what I tend to disagree with.
> >
> > People use language in different ways. Person, as we use it in this
> > list, refers to a conscious entity. The legal notion of person is more
> > one of agency, than consciousness.
> >
> > >
> > > *Brent*  wrote on the topic with closer relation to how I feel about it:
> > >  * The ability to have an AI's future behavior changed by us, the
> > >  community,  assigning responsibility.   Note that this assumes *
> > > * there is a community of  intelligent beings,...*
> > > recalling the 'active agent' role of 'intelligent' beings.
> > >
> >
> > That's in full concordance with what I wrote :)
> >
> >
> > --
> >
> >
> > 
> > Prof Russell Standish  Phone 0425 253119 (mobile)
> > Principal, High Performance Coders
> > Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> > University of New South Wales  http://www.hpcoders.com.au
> >
> > 
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Everything List" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to everything-list+unsubscr...@googlegroups.com.
> > To post to this group, send email to everything-list@googlegroups.com.
> > Visit this group at http://groups.google.com/group/everything-list.
> > For more options, visit https://groups.google.com/d/optout.
> >
> 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au



Re: Fwd: Responsibility and Personhood

2015-10-30 Thread John Mikes
You wrote:




*"Then you have not met an algorithm whose output is directly influenced
by the environment. Most robots are agents in this sense. If the agents
are processing and reacting to rules, then those agents can be punished
for breaking the rules." *

As I understand it: a 'robot' is not an 'algorithm'; I still cannot imagine a
set of (math?) rules with an OUTPUT.
Maybe I do not understand what an algorithm is? What kind of an "AGENT" is
it? How does it ACT? As I think, algorithms have to be followed by an
agent.

And thanks for your wisdom on 'personhood'.

JM

On Thu, Oct 29, 2015 at 6:21 PM, Russell Standish 
wrote:

> On Thu, Oct 29, 2015 at 03:28:22PM -0400, John Mikes wrote:
> > Jason, Russell, Stathis, Brent
> >
> > I am not a Platonian, not a physicist and not a believer, just an
> agnostic
> > (in my OWN sense of the term). I don't believe that an algorithm
> *"DOES"*,
> > or *"ACTS" * so it cannot be 'held responsible'. We, the People do all
> > this.
> >
> > *Russell's* 'sense of agency' requires more than included in (my) a set
> of
> > computing rules in an algorithm.
>
> Then you have not met an algorithm whose output is directly influenced
> by the environment. Most robots are agents in this sense. If the agents are
> processing and reacting to rules, then those agents can be punished
> for breaking the rules.
>
> > The remark "company is not a person" is
> > lately debated by the USA Supreme Court statement that a COMPANY IS A
> > PERSON (just as MONEY is FREE SPEECH!) - what I tend to disagree with.
>
> People use language in different ways. Person, as we use it in this
> list, refers to a conscious entity. The legal notion of person is more
> one of agency, than consciousness.
>
> >
> > *Brent*  wrote on the topic with closer relation to how I feel about it:
> >  * The ability to have an AI's future behavior changed by us, the
> >  community,  assigning responsibility.   Note that this assumes *
> > * there is a community of  intelligent beings,...*
> > recalling the 'active agent' role of 'intelligent' beings.
> >
>
> That's in full concordance with what I wrote :)
>
>
> --
>
>
> 
> Prof Russell Standish  Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> University of New South Wales  http://www.hpcoders.com.au
>
> 
>
>



Re: Responsibility and Personhood

2015-10-30 Thread Bruno Marchal


On 29 Oct 2015, at 03:11, Jason Resch wrote:

At some level, an algorithm cannot be held responsible for its
actions because it was doing the only thing it could do, what it was
programmed to do. At some point between a simplistic algorithm and a
human level AI, however, we seem able to assign responsibility/
culpability. What does an algorithm minimally have to have before it
reaches this point?


The ability to learn?
Understanding of the consequences of its actions?
Rights that it cares about?
Personhood?

I am most interested to hear the thoughts of others on this list on  
this question.



Of course I will use "my theory", which is the theory of the universal
machine, if we accept the classical, ancient definition of knowledge.


To judge a machine responsible, the machine needs a soul. Let us accept
that the soul is well approximated by the knower, which is the
conscious or consistent entity obeying the axiom T, and, if
introspective enough, or self-aware, T4. (It has no name and no
representation available to the machine's cognition.)


To get that soul, if it is the knower, and if the knower is the one
who has beliefs (axioms, theorems) which can be (luckily or not) true,
then from this we see that it needs a self, by which I mean a
representation of itself, with respect to some universal number (at
least one). That self is, for the PA machine, any piece of code or
machinery mechanically capable of self-reference (in the language
terms of the probable universal machine which runs it), and for
computers the trick is in the diagonalization DX = F(XX) => DD =
F(DD). This has been exploited fully from Gödel to Solovay (and
others), and shows that a platonist self obeys the modal logics G
and G*. It is the 3p self, and later we can explain that although a
machine can have such a self, it cannot identify its first "I" with
it. Yet to survive (a teleportation, or an amoeba type of
self-reproduction), that is needed, and it is in part the result of a
compromise between the universal machine that the machine is, and the
universal machine which runs it (which eventually will be a more
complex object).


That self is what I represent by the [], and a precise example is the  
definition of PA in the language of PA (à-la Gödel). It is the 3p  
self, on which the machine can bet. I limit myself to platonist  
rational believers.


By being platonist, I mean that for all arithmetical propositions A
they believe (A v ~A). By being rational, I mean that if they believe
A -> B, and if they believe A, they believe B. Or, written in
arithmetic, a scheme of elementary truths: [](A & (A -> B)) -> []B
(which is equivalent to K: [](p->q) -> ([]p -> []q)).


Using Theaetetus, the knower is defined by []p & p, for each
arithmetical statement p (and later only for the sigma_1 ones).


So, if you yourself believe in elementary arithmetic, you have at your
disposal a good level of truth with which to define higher-level concepts.


A human is more complex than PA; but if the human "brain" is digitally
emulable, there is a level where the self is correct about itself. But
any such notion of self is contradicted by the first person view (that
is why "Who am I?" is a very good koan, the question Ramana Maharshi
asks us to meditate upon).


The machine's knower is not a machine, neither from the 3p view, nor
from the 1p view. It can be a sort of 3p object for a more complex
machine's view. That is the method: ZF can study the theology of PA, and
then lift it interrogatively to itself, at its own risk and peril
(like when saying "yes" to a doctor, or more generally).


If the machine believes in computationalism, she can always define
herself at the substitution level, where there is no responsibility at
all. It is as if the lawyer of a murderer said "My client has only
obeyed the Schroedinger equation, and so has no responsibility at
all", which is not only invalid but spurious, as the members of the
jury thereby get complete freedom to judge the guy responsible and
guilty and deserving of the electric chair, and then to add: no
worry, we obeyed the Schroedinger equation too, didn't we?


So the one which might be locally, partially responsible is at some
higher level, and somehow only the sound but non-nameable 1p knows,
that is, whether he considers himself responsible, or has a sense of
regret or remorse.


Universal machines have universal goals, like searching for the good. In
a collection of interacting universal machines (as with life), the
notion of good automatically conflicts with what other machines call
good or not, but by abstraction universal machines can extend their
range of self-identification, and get the relative views into an
extended consistent picture, which exists or not.


Eventually, whether the machine's soul is guilty or not is a question to
be answered by the judge invoking his intimate conviction, or by the
members of a jury. Today, no hand made 

Re: Responsibility and Personhood

2015-10-30 Thread smitra
Responsibility/culpability is a feature of our own programming allowing 
us to modify the program of our closely related copies, including 
ourselves. If we have precise control of the source code of an AI then 
this notion is rather pointless as we can directly modify the code. 
However, an AI with the intellect of a human being cannot be programmed 
by us directly, because its program won't fit in our brains. Therefore 
we may need to influence it using the more primitive notions of 
responsibility/culpability.


Saibal

On 29-10-2015 03:11, Jason Resch wrote:

At some level, an algorithm cannot be held responsible for its actions
because it was doing the only thing it could do, what it was
programmed to do. At some point between a simplistic algorithm and a
human level AI, however, we seem able to assign
responsibility/culpability. What does an algorithm minimally have to
have before it reaches this point?

The ability to learn?
Understanding of the consequences of its actions?

Rights that it cares about?
Personhood?

I am most interested to hear the thoughts of others on this list on
this question.

Jason





Re: Responsibility and Personhood

2015-10-29 Thread Brent Meeker



On 10/28/2015 7:11 PM, Jason Resch wrote:
At some level, an algorithm cannot be held responsible for its actions 
because it was doing the only thing it could do, what it was 
programmed to do. At some point between a simplistic algorithm and a 
human level AI, however, we seem able to assign 
responsibility/culpability. What does an algorithm minimally have to 
have before it reaches this point?


The ability to learn?
Understanding of the consequences of its actions?
Rights that it cares about?
Personhood?

I am most interested to hear the thoughts of others on this list on 
this question.


The ability to have an AI's future behavior changed by us, the 
community, assigning responsibility.   Note that this assumes there is a 
community of intelligent beings, so it's not just a matter of 
influencing the single AI; rather it's enforcing a rule so that all the 
intelligent beings in the community will be influenced.


Brent



Fwd: Responsibility and Personhood

2015-10-29 Thread John Mikes
Jason, Russell, Stathis, Brent

I am not a Platonian, not a physicist and not a believer, just an agnostic
(in my OWN sense of the term). I don't believe that an algorithm *"DOES"*
or *"ACTS"*, so it cannot be 'held responsible'. We, the People, do all
this.

*Russell's* 'sense of agency' requires more than what is included in (my)
set of computing rules in an algorithm. The remark "company is not a person" is
lately debated by the USA Supreme Court statement that a COMPANY IS A
PERSON (just as MONEY is FREE SPEECH!) - what I tend to disagree with.

*Brent*  wrote on the topic with closer relation to how I feel about it:
 * The ability to have an AI's future behavior changed by us, the
 community,  assigning responsibility.   Note that this assumes *
* there is a community of  intelligent beings,...*
recalling the 'active agent' role of 'intelligent' beings.



On Wed, Oct 28, 2015 at 10:11 PM, Jason Resch  wrote:

> At some level, an algorithm cannot be held responsible for its actions
> because it was doing the only thing it could do, what it was programmed to
> do. At some point between a simplistic algorithm and a human level AI,
> however, we seem able to assign responsibility/culpability. What does an
> algorithm minimally have to have before it reaches this point?
>
> The ability to learn?
> Understanding of the consequences of its actions?
> Rights that it cares about?
> Personhood?
>
> I am most interested to hear the thoughts of others on this list on this
> question.
>
> Jason
>
>



Re: Fwd: Responsibility and Personhood

2015-10-29 Thread Russell Standish
On Thu, Oct 29, 2015 at 03:28:22PM -0400, John Mikes wrote:
> Jason, Russell, Stathis, Brent
> 
> I am not a Platonian, not a physicist and not a believer, just an agnostic
> (in my OWN sense of the term). I don't believe that an algorithm *"DOES"*,
> or *"ACTS" * so it cannot be 'held responsible'. We, the People do all
> this.
> 
> *Russell's* 'sense of agency' requires more than included in (my) a set of
> computing rules in an algorithm. 

Then you have not met an algorithm whose output is directly influenced
by the environment. Most robots are agents in this sense. If the agents are
processing and reacting to rules, then those agents can be punished
for breaking the rules.

> The remark "company is not a person" is
> lately debated by the USA Supreme Court statement that a COMPANY IS A
> PERSON (just as MONEY is FREE SPEECH!) - what I tend to disagree with.

People use language in different ways. Person, as we use it in this
list, refers to a conscious entity. The legal notion of person is more
one of agency, than consciousness.

> 
> *Brent*  wrote on the topic with closer relation to how I feel about it:
>  * The ability to have an AI's future behavior changed by us, the
>  community,  assigning responsibility.   Note that this assumes *
> * there is a community of  intelligent beings,...*
> recalling the 'active agent' role of 'intelligent' beings.
> 

That's in full concordance with what I wrote :)


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Responsibility and Personhood

2015-10-29 Thread Stathis Papaioannou
On Thursday, 29 October 2015, Jason Resch  wrote:

> At some level, an algorithm cannot be held responsible for its actions
> because it was doing the only thing it could do, what it was programmed to
> do. At some point between a simplistic algorithm and a human level AI,
> however, we seem able to assign responsibility/culpability. What does an
> algorithm minimally have to have before it reaches this point?
>
> The ability to learn?
> Understanding of the consequences of its actions?
> Rights that it cares about?
> Personhood?
>
> I am most interested to hear the thoughts of others on this list on this
> question.
>

Consider when we don't hold people responsible for their behaviour: if they
have a severe intellectual disability, mental illness or other medical
condition which results in them not understanding the consequences of their
actions, a corollary of which is that the threat of punishment cannot deter
them or others like them.

-- 
Stathis Papaioannou



Re: Responsibility and Personhood

2015-10-29 Thread Brent Meeker



On 10/29/2015 6:44 PM, Jason Resch wrote:



On Wed, Oct 28, 2015 at 11:18 PM, Russell Standish 
> wrote:


On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
> At some level, an algorithm cannot be held responsible for its actions
> because it was doing the only thing it could do, what it was programmed to
> do. At some point between a simplistic algorithm and a human level AI,
> however, we seem able to assign responsibility/culpability. What does an
> algorithm minimally have to have before it reaches this point?
>
> The ability to learn?
> Understanding of the consequences of its actions?
> Rights that it cares about?
> Personhood?
>

None of those things are required to assign legal responsibility. For
example, a company can be held legally responsible, but a company is
not a person, nor is it conscious, nor need it learn (although
companies can learn).

I think all that is required is a sense of agency, that holding
something responsible is sufficient to affect the actions of that
agent.

If a robot can process the notion of responsibility such that its
actions will be affected by it, then yes it can be held responsible
regardless of whether any conscious understanding exists.


What are the minimum requirements to program agency?

It seems to me that if a program cannot learn/alter itself, it cannot be 
held responsible, for it is doing only what it was programmed to do.


Suppose the program is designed to learn from experience.  Then when it 
learns from experience, does that mean it's altering itself or just 
doing what it was programmed to do?


See Daniel Dennett's excellent little book "Elbow Room".

Brent
You can escape responsibility for everything, if you make yourself small 
enough.

--- Daniel Dennett



Re: Responsibility and Personhood

2015-10-29 Thread Jason Resch
On Thu, Oct 29, 2015 at 10:37 PM, Brent Meeker  wrote:

>
>
> On 10/29/2015 6:44 PM, Jason Resch wrote:
>
>
>
> On Wed, Oct 28, 2015 at 11:18 PM, Russell Standish <
> li...@hpcoders.com.au> wrote:
>
>> On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
>> > At some level, an algorithm cannot be held responsible for its actions
>> > because it was doing the only thing it could do, what it was programmed
>> to
>> > do. At some point between a simplistic algorithm and a human level AI,
>> > however, we seem able to assign responsibility/culpability. What
>> does an
>> > algorithm minimally have to have before it reaches this point?
>> >
>> > The ability to learn?
>> > Understanding of the consequences of its actions?
>> > Rights that it cares about?
>> > Personhood?
>> >
>>
>> None of those things are required to assign legal responsibility. For
>> example, a company can be held legally responsible, but a company is
>> not a person, nor is it conscious, nor need it learn (although
>> companies can learn).
>>
>> I think all that is required is a sense of agency, that holding
>> something responsible is sufficient to affect the actions of that
>> agent.
>>
>> If a robot can process the notion of responsibility such that its
>> actions will be affected by it, then yes it can be held responsible
>> regardless of whether any conscious understanding exists.
>>
>
> What are the minimum requirements to program agency?
>
> It seems to me that if a program cannot learn/alter itself, it cannot be held
> responsible, for it is doing only what it was programmed to do.
>
>
> Suppose the program is designed to learn from experience.  Then when it
> learns from experience, does that mean it's altering itself or just doing
> what it was programmed to do?
>

Both, but I would argue that a program which cannot change itself cannot be
held responsible for its actions. If, every time it is given the same input,
it gives the same output (e.g. if it is memoryless), then punishment can have
no effect; it is what it is.
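This criterion can be made concrete with a toy sketch (hypothetical Python, invented purely for illustration): a memoryless policy is a pure function of its input, so sanctions cannot change its behaviour, while an agent that carries state can be deterred.

```python
# Toy illustration only.  The situations and penalty threshold are made up.

class MemorylessAgent:
    def act(self, situation):
        # Same input, same output, every time: punishment cannot bite.
        return "transgress" if situation == "tempting" else "comply"

class LearningAgent:
    def __init__(self):
        self.penalty = 0.0              # memory of past sanctions
    def act(self, situation):
        if situation == "tempting" and self.penalty < 1.0:
            return "transgress"
        return "comply"
    def punish(self, amount):
        self.penalty += amount          # punishment alters future output

m, l = MemorylessAgent(), LearningAgent()
m_before, l_before = m.act("tempting"), l.act("tempting")
l.punish(1.0)                           # sanction the learning agent
print(m_before, m.act("tempting"))      # transgress transgress
print(l_before, l.act("tempting"))      # transgress comply
```

Only the agent whose internal state can be changed by the sanction is a sensible target for holding responsible.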

Jason


>
> See Daniel Dennett's excellent little book "Elbow Room".
>
> Brent
> You can escape responsibility for everything, if you make yourself small
> enough.
> --- Daniel Dennett
>
>



Re: Responsibility and Personhood

2015-10-29 Thread Brent Meeker



On 10/29/2015 9:57 PM, Jason Resch wrote:



On Thu, Oct 29, 2015 at 10:37 PM, Brent Meeker > wrote:




On 10/29/2015 6:44 PM, Jason Resch wrote:



On Wed, Oct 28, 2015 at 11:18 PM, Russell Standish
> wrote:

On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
> At some level, an algorithm cannot be held responsible for its actions
> because it was doing the only thing it could do, what it was programmed to
> do. At some point between a simplistic algorithm and a human level AI,
> however, we seem able to assign responsibility/culpability. What does an
> algorithm minimally have to have before it reaches this point?
>
> The ability to learn?
> Understanding of the consequences of its actions?
> Rights that it cares about?
> Personhood?
>

None of those things are required to assign legal
responsibility. For
example, a company can be held legally responsible, but a
company is
not a person, nor is it conscious, nor need it learn (although
companies can learn).

I think all that is required is a sense of agency, that holding
something responsible is sufficient to affect the actions of that
agent.

If a robot can process the notion of responsibility such that its
actions will be affected by it, then yes it can be held
responsible
regardless of whether any conscious understanding exists.


What are the minimum requirements to program agency?

It seems to be, if a program cannot learn/alter itself, it cannot
be held responsible, for it is doing only what it was programmed
to do.


Suppose the program is designed to learn from experience.  Then
when it learns from experience, does that mean it's altering
itself or just doing what it was programmed to do?


Both, but I would argue a program that cannot change itself cannot be 
held responsible for its actions. If, every time it is given the same 
input, it gives the same output (e.g. if it is memoryless) then 
punishment can have no effect, it is what it is.


I agree, except that judicial punishment is meant for the rest of 
society too, not just the miscreant.


I just watched episode 3 of "The Brain with David Eagleman".  He tells of 
a Canadian man who fell asleep in front of his TV one night, then got up, 
drove to the home of his in-laws, entered their house, murdered his 
mother-in-law and strangled (but did not kill) his father-in-law, and 
then drove to a police station and said, "I think I've killed someone."  
He didn't remember any of it.  He did it all while sleepwalking.  The jury 
acquitted him after being presented with evidence that most members of his 
family had sleep disorders and had done things while sleepwalking.


They were convinced that he wasn't responsible.  But I think they made a 
mistake.  The conscious part of him might not be responsible, but his 
subconscious was.  And if his subconscious was aware enough of the world 
to develop an animus toward his in-laws, then it could also be influenced 
by judicial punishment.


Brent



Re: Responsibility and Personhood

2015-10-29 Thread Jason Resch
On Wed, Oct 28, 2015 at 11:18 PM, Russell Standish 
wrote:

> On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
> > At some level, an algorithm cannot be held responsible for its actions
> > because it was doing the only thing it could do, what it was programmed
> > to do. At some point between a simplistic algorithm and a human-level
> > AI, however, we seem able to assign responsibility/culpability. What
> > does an algorithm minimally have to have before it reaches this point?
> >
> > The ability to learn?
> > Understanding of the consequences of its actions?
> > Rights that it cares about?
> > Personhood?
> >
>
> None of those things are required to assign legal responsibility. For
> example, a company can be held legally responsible, but a company is
> not a person, nor is it conscious, nor need it learn (although
> companies can learn).
>
> I think all that is required is a sense of agency, that holding
> something responsible is sufficient to affect the actions of that
> agent.
>
> If a robot can process the notion of responsibility such that its
> actions will be affected by it, then yes it can be held responsible
> regardless of whether any conscious understanding exists.
>

What are the minimum requirements to program agency?

It seems to me that if a program cannot learn/alter itself, it cannot be held
responsible, for it is only doing what it was programmed to do.
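The distinction above, between a fixed program that punishment cannot touch and one whose internal state punishment can alter, can be made concrete with a minimal sketch (all names here are hypothetical illustrations, not anyone's actual proposal):

```python
def memoryless_policy(situation):
    # Same input always yields the same output; punishing this
    # program cannot change its future behaviour.
    return "act" if situation == "opportunity" else "wait"


class LearningAgent:
    """An agent whose internal state is altered by punishment."""

    def __init__(self):
        self.penalties = {}  # situation -> accumulated punishment

    def choose(self, situation):
        # Refrain from acting once punishment has been registered
        # for this situation.
        if self.penalties.get(situation, 0) > 0:
            return "wait"
        return "act" if situation == "opportunity" else "wait"

    def punish(self, situation, severity=1):
        # Punishment feeds back into future choices.
        self.penalties[situation] = self.penalties.get(situation, 0) + severity


agent = LearningAgent()
before = agent.choose("opportunity")   # "act"
agent.punish("opportunity")
after = agent.choose("opportunity")    # "wait": behaviour changed

# The memoryless policy is unaffected no matter what we do to it.
unchanged = memoryless_policy("opportunity")  # still "act"
```

On this sketch, only the second kind of program is a candidate for responsibility in Russell's sense: holding it responsible is sufficient to affect its future actions.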

Jason



Re: Responsibility and Personhood

2015-10-28 Thread Russell Standish
On Wed, Oct 28, 2015 at 09:11:34PM -0500, Jason Resch wrote:
> At some level, an algorithm cannot be held responsible for its actions
> because it was doing the only thing it could do, what it was programmed to
> do. At some point between a simplistic algorithm and a human-level AI,
> however, we seem able to assign responsibility/culpability. What does an
> algorithm minimally have to have before it reaches this point?
> 
> The ability to learn?
> Understanding of the consequences of its actions?
> Rights that it cares about?
> Personhood?
> 

None of those things are required to assign legal responsibility. For
example, a company can be held legally responsible, but a company is
not a person, nor is it conscious, nor need it learn (although
companies can learn).

I think all that is required is a sense of agency: that holding
something responsible is sufficient to affect the actions of that
agent.

If a robot can process the notion of responsibility such that its
actions will be affected by it, then yes it can be held responsible
regardless of whether any conscious understanding exists. 


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Responsibility and Personhood

2015-10-28 Thread Jason Resch
At some level, an algorithm cannot be held responsible for its actions
because it was doing the only thing it could do, what it was programmed to
do. At some point between a simplistic algorithm and a human-level AI,
however, we seem able to assign responsibility/culpability. What does an
algorithm minimally have to have before it reaches this point?

The ability to learn?
Understanding of the consequences of its actions?
Rights that it cares about?
Personhood?

I am most interested to hear the thoughts of others on this list on this
question.

Jason
