*Call for Papers for the AAAI Spring Symposium "Beyond Curve Fitting:
Causation, Counterfactuals, and Imagination-based AI"*
March 25-27, 2019 @ Stanford, CA, USA
https://why19.causalai.net/

*Motivation*
 In recent years, Artificial Intelligence and Machine Learning have
received enormous attention from the general public, primarily because of
the successful application of deep neural networks in computer vision,
natural language processing, and game playing (most notably through
reinforcement learning). We see AI recognizing faces with high accuracy,
Alexa answering spoken English questions, and AlphaZero beating Go
grandmasters. These are impressive achievements, almost
unimaginable a few years ago. Despite the progress, there is a growing
segment of the scientific community that questions whether these successes
can be extrapolated to create general AI without a major retooling.
Prominent scholars voice concerns that some critical pieces of the
AI puzzle are still largely missing. For example, Judea Pearl, who
championed probabilistic reasoning in AI and causal inference, recently
said in an interview: "To build truly intelligent machines, teach them
cause and effect" (link
<https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/>).
In a recent op-ed in The New York Times, cognitive scientist Gary Marcus
noted: “Causal relationships are where contemporary machine learning
techniques start to stumble” (link
<https://www.nytimes.com/2018/10/20/opinion/sunday/ai-fake-news-disinformation-campaigns.html>).


 These and other critical views regarding different aspects of the machine
learning toolbox, however, are not a matter of speculation or personal
taste, but a product of mathematical analyses concerning the intrinsic
limitations of data-centric systems that are not guided by explicit models
of reality. Such systems may excel at learning highly complex functions
connecting an input X to an output Y, but they are unable to reason about
cause-and-effect relations or environment changes, be they due to external
actions or acts of imagination. Nor can they provide explanations for novel
eventualities, or guarantee safety and fairness. This symposium will focus
on integrating aspects of causal inference with those of machine learning,
recognizing that the capacity to reason about cause and effect is critical
in achieving human-friendly AI. Despite its centrality in scientific
inference and commonsense thinking, this capacity has been largely
overlooked in ML, most likely because it requires a language of its own,
beyond classical statistics and standard logics. Such languages are
available today and promise to yield more explainable, robust, and
generalizable intelligent systems.

 Our aim is to bring together researchers to discuss the integration of
causal, counterfactual, and imagination-based reasoning into data science,
building a richer framework for research and a new horizon of applications
in the coming decades. Our discussion will be inspired by the Ladder of
Causation and the Structural Causal Model (SCM) architecture, which unifies
existing approaches to causation and formalizes the capabilities and
limitations of different types of causal expressions (link
<http://bayes.cs.ucla.edu/WHY/>). This architecture provides a general
framework for integrating the current correlation-based data mining methods
(level 1) with causal or interventional analysis (level 2), and
counterfactual or imagination-based reasoning (level 3). We welcome
researchers from all related disciplines, including, but not limited to,
computer science, cognitive science, economics, social sciences, medicine,
health sciences, engineering, mathematics, statistics, and philosophy.
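
 To make the three levels concrete, the short sketch below (purely
illustrative, and not part of the symposium materials) sets up a hypothetical
two-variable structural causal model with a hidden confounder and estimates
one quantity from each rung: an associational probability, an interventional
probability under do(X=1), and a counterfactual probability obtained by
reusing a unit's exogenous state. All variable names and mechanisms are
invented for this example.

import random

def sample_exogenous(rng):
    # Exogenous state of one "unit": a hidden confounder U and noise for X.
    return {"u": rng.random() < 0.5, "n_x": rng.random()}

def mechanism(e, do_x=None):
    # Structural equations of the toy model (hypothetical):
    #   X := 1 with prob. 0.9 if U=1, else with prob. 0.1 (unless intervened on)
    #   Y := X XOR U
    x = (e["n_x"] < (0.9 if e["u"] else 0.1)) if do_x is None else do_x
    y = x != e["u"]
    return {"X": x, "Y": y}

rng = random.Random(0)
units = [sample_exogenous(rng) for _ in range(100_000)]
obs = [mechanism(e) for e in units]                      # observational data

# Level 1, association ("seeing"): P(Y=1 | X=1) read off the observations.
x1 = [v for v in obs if v["X"]]
p_assoc = sum(v["Y"] for v in x1) / len(x1)              # ~0.10

# Level 2, intervention ("doing"): P(Y=1 | do(X=1)), overriding X's mechanism.
do1 = [mechanism(e, do_x=True) for e in units]
p_interv = sum(v["Y"] for v in do1) / len(do1)           # ~0.50

# Level 3, counterfactual ("imagining"): among units observed with X=0, Y=1,
# keep their exogenous state (abduction), force X=1 (action), recompute Y.
cf = [mechanism(e, do_x=True)["Y"]
      for e, v in zip(units, obs) if not v["X"] and v["Y"]]
p_cf = sum(cf) / len(cf)                                 # ~0.00

print(p_assoc, p_interv, p_cf)

The three estimates differ, which is the point of the hierarchy: observational
data alone (level 1) cannot answer the interventional (level 2) or
counterfactual (level 3) questions without a model of the underlying
mechanisms.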

*Topics*
 We invite papers that describe (1) methods for answering causal questions
with the help of ML machinery, or (2) methods for enhancing ML performance
with the help of causal models (i.e., carriers of transparent causal
assumptions). Authors are strongly encouraged to identify in the paper
where on the causal hierarchy their contributions reside (i.e.,
associational, interventional, or counterfactual reasoning). Topics of
interest include but are not limited to the following:

1. Algorithms for causal inference and learning
2. Causal analysis of biases in data science & fairness analysis
3. Causal and counterfactual explanations
4. Causal reinforcement learning, planning, and plan recognition
5. Imagination and creativity
6. Fundamental limits of learning and inference
7. Applications and connections with the 3-layer hierarchy

*Submission Guidelines*
 We solicit both long (7 pages including references) and short (3 pages
including references) papers on topics related to the above. Position
papers, application papers, and challenge tasks will also be considered.
Submissions should follow the AAAI conference format. We accept submissions
through AAAI's EasyChair (link:
https://easychair.org/account/signin.cgi?key=79967996.dYx0hnsHeTwDnyPq);
look for our symposium there.

*Important Dates*
 Submissions (electronic submission, PDF) due: December 17, 2018
 Notifications of acceptance: January 27, 2019
 Final version of the papers (electronic submission, PDF) due: February 17,
2019

*Organizers*
 Elias Bareinboim (Purdue University)
 Sridhar Mahadevan (Adobe Research & UMass)
 Prasad Tadepalli (Oregon State University)
 Csaba Szepesvari (DeepMind & University of Alberta)
 Bernhard Schölkopf (Max Planck Institute)
 Judea Pearl (University of California, Los Angeles)