**Apologies for cross-posting**

Call for papers
DMR 2023: The Fourth International Workshop on Designing Meaning Representations
Workshop site: dmr2023.github.io
Co-located with IWCS 2023, the 15th International Conference on Computational Semantics,
20-23 June 2023, Université de Lorraine, Nancy, France.
IWCS site: https://iwcs2023.loria.fr/

While deep learning methods have led to many breakthroughs in practical natural language applications, most notably in Machine Translation, Machine Reading, Question Answering, and Recognizing Textual Entailment, there is still a sense among many NLP researchers that we have a long way to go before we can develop systems that can actually “understand” human language and explain the decisions they make. Indeed, “understanding” natural language entails many different human-like capabilities, including but not limited to the ability to track entities in a text, understand the relations between these entities, track events and their participants described in a text, understand how events unfold in time, and distinguish events that have actually happened from events that are planned or intended, are uncertain, or did not happen at all. We believe a critical step in achieving natural language understanding is to design meaning representations for text that have the necessary meaning “ingredients” to support these capabilities. Such meaning representations can also potentially be used to evaluate the compositional generalization capacity of deep learning models.
There has been a growing body of research in recent years devoted to the design, annotation, and parsing of meaning representations. The meaning representations that have been used for semantic parsing research were developed with different linguistic perspectives and practical goals in mind, and they have different formal properties. Formal meaning representation frameworks such as Minimal Recursion Semantics (MRS) and Discourse Representation Theory (as exemplified in the Parallel Meaning Bank) were developed with the goal of supporting logical inference in reasoning-based AI systems; they are therefore easily translatable into first-order logic, requiring proper representation of semantic components such as quantification, negation, tense, and modality. Other meaning representation frameworks, such as Abstract Meaning Representation (AMR), the Tectogrammatical Representation (TR) in the Prague Dependency Treebanks, and Universal Conceptual Cognitive Annotation (UCCA), place more emphasis on the representation of core predicate-argument structure, lexical semantic information such as semantic roles and word senses, or named entities and relations. There is also a more recent effort to develop a Uniform Meaning Representation (UMR), which is based on AMR but extends it to cross-linguistic settings and enhances it to represent document-level semantic content. The automatic parsing of natural language text into these meaning representations, and the generation of natural language text from them, are also very active areas of research, and a wide range of technical approaches and learning methods have been applied to these problems.
This workshop will bring together researchers who are producers and consumers of meaning representations and, through their interaction, develop a deeper understanding of the key elements of meaning representations that are most valuable to the NLP community. The workshop will also provide an opportunity for meaning representation researchers to critically examine existing frameworks with the goal of using their findings to inform the design of next-generation meaning representations. A third goal of the workshop is to explore opportunities and identify challenges in the design and use of meaning representations in multilingual settings. A final goal is to understand the relationship between distributed meaning representations trained on large data sets using neural network models and the symbolic meaning representations that are carefully designed and annotated by NLP researchers, and to gain a deeper understanding of the areas where each type of meaning representation is most effective.
The workshop solicits papers that address one or more of the following topics:
• Design and annotation of meaning representations;
• Cross-framework comparison of meaning representations;
• Challenges and techniques in automatic parsing of meaning representations;
• Challenges and techniques in automatically generating text from meaning representations;
• Meaning representation evaluation metrics;
• Lexical resources, ontologies, and grounding in relation to meaning representations;
• Real-world applications of meaning representations;
• Issues in applying meaning representations to multilingual settings and lower-resourced languages;
• The relationship between symbolic meaning representations and distributed semantic representations;
• Formal properties of meaning representations;
• Any other topics that address the design, processing, and use of meaning representations.

=== SUBMISSION INFORMATION ===

Submissions should report original and unpublished research on topics of interest to the workshop. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings in the ACL Anthology. They should emphasize obtained results rather than intended work and should clearly indicate the state of completion of the reported results. A paper accepted for presentation at the workshop must not be, or have been, presented at any other meeting with publicly available proceedings.
Submission is electronic, using the Softconf START conference management system.
Link to the DMR submission site: https://softconf.com/iwcs2023/dmr2023/
Submissions must adhere to the two-column format of ACL venues. Please use our specific style files or the Overleaf template from ACL 2021:

https://www.overleaf.com/latex/templates/instructions-for-iwcs-2021-proceed…
 
Initial submissions should be fully anonymous to ensure double-blind reviewing. Long papers must not exceed eight (8) pages of content. Short papers and demonstration papers must not exceed four (4) pages of content. If a paper is accepted, it will be given an additional page to address reviewers’ comments in the final version. References and appendices do not count against these limits.

Reviewing of papers will be double-blind. The paper must therefore not include the authors’ names and affiliations or self-references that reveal any author’s identity; e.g., “We previously showed (Smith, 1991) …” should be replaced with citations such as “Smith (1991) previously showed …”. Papers that do not conform to these requirements will be rejected without review.
Authors of papers that have been or will be submitted to other meetings or publications must provide this information to the workshop organizers at dmr2023-chairs(a)googlegroups.com. Authors of accepted papers must notify the program chairs within 10 days of acceptance if the paper is withdrawn for any reason.
** DMR 2023 does not have an anonymity period. However, we ask you to be reasonable and not publicly advertise your preprint during (or right before) the review period.

=== IMPORTANT DATES ===
Submissions due             April 3, 2023 - EXTENDED TO APRIL 10, 2023
Notification of acceptance  May 1, 2023
Camera-ready deadline       June 1, 2023
Workshop date               June 20, 2023
IWCS conference             June 20-23, 2023