"An AI system should be able to explain which rules were used in making a
decision - and who made those rules."
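# sentence about rules and authors">
The idea above can be sketched in a few lines. This is a hypothetical illustration, not any real system: rule names, authors, and the case fields are all invented for the example. Each rule carries authorship metadata, and the decision function returns not just an answer but the list of rules that fired and who made each one.

```python
# Hypothetical sketch: a rule engine whose decisions carry their own
# audit trail (which rules fired, and who authored each rule).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    author: str                       # who made the rule (illustrative)
    applies: Callable[[dict], bool]   # predicate over the case record

# Invented example rules for a pretrial-release decision.
RULES = [
    Rule("prior_failure_to_appear", "Pretrial Policy Board (hypothetical)",
         lambda c: c.get("failed_to_appear", 0) > 1),
    Rule("violent_charge", "State statute Sec. 12-3 (hypothetical)",
         lambda c: c.get("charge_class") == "violent"),
]

def decide_release(case: dict) -> dict:
    fired = [r for r in RULES if r.applies(case)]
    return {
        "release": not fired,  # release only if no rule objects
        "rules_used": [(r.name, r.author) for r in fired],  # the explanation
    }

print(decide_release({"failed_to_appear": 2, "charge_class": "nonviolent"}))
```

A statistical classifier can emit the same yes/no answer, but it cannot return the `rules_used` list, which is exactly the accountability the sentence above asks for.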

On Fri, May 10, 2019, 21:39 Steve Richfield <[email protected]>
wrote:

> I just watched a lengthy panel discussion about applying AI in the
> criminal justice system, e.g. whether to release people who have been
> arrested. While this is high impact, the problems they encounter are
> typical of nearly all AI applications.
>
> Whether deciding who to arrest, who to release, what stock to invest in,
> whose face it is, etc., there seem to be some really basic criteria for
> applying the "I" in AI. These systems do NOT meet the basic criteria of
> being able to show that they are not discriminatory, colluding, stealing,
> etc. This sets their users up to lose lawsuits, and imperils the public.
>
> ANY system that can't explain its output is a STATISTICAL system that is
> NOT AI.
>
> We could probably develop and adopt a one-paragraph standard that works
> for everyone here - and in the process put us on the map.
>
> Anyone here interested?
>
> Steve
>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc031666044462b42-M20fc6b3b9b994f7ab6a421ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription
