Friends Don't Let Friends Deploy Black-Box Models: The Importance of
Intelligibility in Machine Learning
Mon, 02/11/2019 - 4:00pm

Weniger Hall 151
Mon, 02/11/2019 - 4:00pm

Rich Caruana
Principal Researcher, Microsoft

Abstract:
In machine learning, a tradeoff often must be made between accuracy and
intelligibility: the most accurate models (deep nets, boosted trees, and
random forests) usually are not very intelligible, and the most intelligible
models (logistic regression, small trees and decision lists) usually are less
accurate. This tradeoff limits the accuracy of models that can be safely
deployed in mission-critical applications such as healthcare where being able
to understand, validate, edit, and ultimately trust a learned model is
important. We have developed a learning method based on generalized additive
models (GA2Ms) that is as accurate as full-complexity models but more
intelligible than linear models. In this talk I'll present a case study where
intelligibility was critical to uncovering surprising patterns in the data
that would have made deploying a black-box model risky. I'll also show how we're
using these models to detect bias in domains where fairness and transparency
are paramount, and how these models can be used to understand what is learned
by black-box models such as deep nets.
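The intelligibility of additive models comes from the fact that each feature's contribution is a one-dimensional shape function that can be plotted and inspected directly. A minimal sketch of that idea, using plain backfitting with a binned-mean smoother on synthetic data — this is not the speaker's GA2M method (which also learns pairwise interactions and uses boosted trees as shape functions); every name and modeling choice below is an illustrative assumption:

```python
import numpy as np

# Illustrative sketch only: fit one shape function per feature by
# backfitting, so each feature's learned contribution can be inspected.
# NOT the GA2M implementation from the talk; data and smoother are assumed.

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1.0, 1.0, size=(n, 2))
# Synthetic additive target: quadratic in x0 plus a sine in x1, plus noise.
y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1]) + rng.normal(0.0, 0.1, n)

n_bins = 20
edges = np.linspace(-1.0, 1.0, n_bins + 1)
bins = np.clip(np.digitize(X, edges) - 1, 0, n_bins - 1)  # per-feature bin ids

intercept = y.mean()
shape = np.zeros((2, n_bins))  # one piecewise-constant shape fn per feature

for _ in range(10):  # backfitting: refit each feature against the residual
    for j in range(2):
        other = 1 - j
        partial = y - intercept - shape[other, bins[:, other]]
        for b in range(n_bins):
            mask = bins[:, j] == b
            if mask.any():
                shape[j, b] = partial[mask].mean()
        shape[j] -= shape[j].mean()  # center each shape for identifiability

pred = intercept + shape[0, bins[:, 0]] + shape[1, bins[:, 1]]
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
print(f"train RMSE: {rmse:.3f}")  # near the 0.1 noise level
```

Plotting `shape[0]` and `shape[1]` against the bin edges shows exactly what the model learned for each feature — the property that makes surprising or risky patterns visible before deployment.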

Bio:

Read more:
http://eecs.oregonstate.edu/colloquium/friends-dont-let-friends-deploy-black-box-models-importance-intelligibility-machine
_______________________________________________
Colloquium mailing list
[email protected]
https://secure.engr.oregonstate.edu/mailman/listinfo/colloquium