We warmly invite you to submit a paper and participate in our Causal
Representation Learning workshop (https://crl-workshop.github.io/) that
will be held *December 15 or 16, 2023* at NeurIPS 2023, New Orleans, USA.


Causal Representation Learning is an exciting intersection of machine
learning and causality that aims to learn low-dimensional, high-level
causal variables, along with their causal relations, directly from raw,
unstructured data, e.g. images.


The submission deadline is *September 29, 2023, 23:59 AoE* and the
submission link is
https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/CRL.

More information below.

***MOTIVATION AND TOPICS***

Current machine learning systems have rapidly increased in performance by
leveraging ever-larger models and datasets. Despite astonishing abilities
and impressive demos, these models fundamentally *only learn from
statistical correlations* and struggle with tasks such as *domain
generalisation, adversarial examples, or planning*, which require
higher-order cognition. This sole reliance on capturing correlations sits
at the core of current debates about making AI systems "truly" understand.
One promising and so far underexplored approach for obtaining visual
systems that can go *beyond correlations* is integrating ideas from
causality into representation learning.

Causal inference aims to reason about the effect of interventions or
external manipulations on a system, as well as about hypothetical
counterfactual scenarios. Similar to classic approaches to AI, it typically
assumes that the causal variables of interest are given from the outset.
However, real-world data often comprises high-dimensional, low-level
observations (e.g., RGB pixels in a video) and is thus usually not
structured into such meaningful causal units.

To address this gap, the emerging field of causal representation learning
(CRL) combines the strengths of ML and causality. In CRL, we aim to learn
low-dimensional, high-level causal variables along with their causal
relations directly from raw, unstructured data, leading to representations
that support notions such as causal factors, interventions, reasoning, and
planning. In this sense, CRL aligns with the general goal of modern ML to
learn meaningful representations of data that are more robust, explainable,
and performant, and with our workshop we want to catalyze research in this
direction.

This workshop brings together researchers from the emerging CRL community,
as well as from the more classical causality and representation learning
communities, who are interested in learning causal, robust, interpretable
and transferable representations. Our goal is to foster discussion and
cross-fertilization between causality, representation learning and other
fields, as well as to engage the community in identifying application
domains for this emerging field. To encourage discussion, we welcome
submissions related to any aspect of CRL, including but not limited to:

   - Causal representation learning, including self-supervised, multi-modal
     or multi-environment CRL, either in time series or in an atemporal
     setting, observational or interventional,

   - Causality-inspired representation learning, including learning
     representations that are only *approximately* causal, but still useful
     in terms of generalization or transfer learning,

   - Abstractions of causal models or, in general, multi-level causal
     systems,

   - Connecting CRL with system identification, learning differential
     equations from data or sequences of images, or in general connections
     to dynamical systems,

   - Theoretical works on identifiability in representation learning
     broadly,

   - Real-world applications of CRL, e.g. in biology, healthcare, (medical)
     imaging or robotics; including new benchmarks or datasets, or
     addressing the gap from theory to practice.



***IMPORTANT DATES***

Paper submission deadline: *September 29, 2023, 23:59 AoE*

Notification to authors: October 27, 2023, 23:59 AoE

Camera-ready version and videos: December 1, 2023, 23:59 AoE

Workshop Date: December 15 or 16, 2023 at NeurIPS


***SUBMISSION INSTRUCTIONS***

As with all NeurIPS workshops, submissions should contain original and
previously unpublished research, and they should be formatted using the
NeurIPS LaTeX style. Papers should be submitted as a PDF file and should be
at most 6 pages in length, including all main results, figures, and tables.
Appendices containing additional details are allowed, but reviewers are not
expected to take them into account.
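
For convenience, a minimal LaTeX skeleton is sketched below; it assumes the
standard neurips_2023 style file is used, so please check the workshop
website in case a workshop-specific style file is provided.

    \documentclass{article}
    % NeurIPS 2023 style file; without options it compiles in anonymous
    % submission mode, which is what double-blind review requires.
    \usepackage{neurips_2023}
    \usepackage[utf8]{inputenc}
    \usepackage{graphicx}

    \title{Your CRL Workshop Submission}
    \author{Anonymous Author(s)}  % replaced by the style in submission mode

    \begin{document}
    \maketitle
    \begin{abstract}
      One-paragraph summary of the contribution.
    \end{abstract}
    % Main text: at most 6 pages, including all results, figures and tables.
    % Appendices may follow, but reviewers are not expected to read them.
    \end{document}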

The workshop will not have proceedings (in other words, it is not
archival), which means you can submit the same or extended work for
publication to other venues after the workshop. We also accept (shortened
versions of) papers submitted to other venues, as long as they are not
published before the workshop date in December.

Submission site:
https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/CRL



***ORGANIZERS***


Sara Magliacane, University of Amsterdam and MIT-IBM Watson AI Lab

Atalanti Mastakouri, Amazon

Yuki Asano, University of Amsterdam and Qualcomm Research

Claudia Shi, Columbia University and FAR AI

Cian Eastwood, University of Edinburgh and Max Planck Institute Tübingen

Sébastien Lachapelle, Mila and Samsung’s SAIT AI Lab (SAIL)

Bernhard Schölkopf, Max Planck Institute Tübingen

Caroline Uhler, MIT and Broad Institute