Designing abstract meaning representations
Martha Palmer
Department of Linguistics
University of Colorado at Boulder
Abstract Meaning Representations (AMRs) provide a single, graph-based semantic representation that abstracts away from the word order and syntactic structure of a sentence, resulting in a more language-neutral representation of its meaning. Current versions of AMRs capture nested predicate argument structures with PropBank-style semantic role labels, Named Entity tags, coreference, discourse relations and explicit interpretations of modality and negation. As such, they are similar to, but go well beyond, efforts towards extended Semantic Role Labeling. AMRs implement a simplified, standard neo-Davidsonian semantics. A word in a sentence either maps to a concept or a relation, or is omitted if it is already inherent in the representation or if it conveys interpersonal attitude (e.g., stance or distancing).
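As an illustration (a standard example from the AMR literature, not taken from this talk), the sentence "The boy wants to go" might be annotated in AMR's PENMAN notation as:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```

Here "wants" and "go" map to the PropBank concepts want-01 and go-01, the reused variable b captures that the boy is both the wanter and the goer (a reentrancy in the graph), and function words such as "the" and "to" are omitted because their contribution is already inherent in the representation.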
The basis of AMR is PropBank’s lexicon of coarse-grained senses of verb, noun and adjective relations as well as the roles associated with each sense (each lexicon entry is a ‘roleset’). By marking the appropriate roles for each sense, this level of annotation provides information regarding who is doing what to whom. However, unlike PropBank, AMR provides a deeper level of representation of discourse relations, non-relational noun and prepositional phrases, quantities and time expressions (which PropBank largely leaves unanalyzed). Additionally, AMR makes a greater effort to abstract away from language-particular syntactic facts, instead attempting to generalize what can be thought of as different ways of saying the same thing. This talk will explore the differences between PropBank and AMR, the current and future plans for AMR annotation, and the potential of AMR as a basis for machine translation.
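To make the notion of a roleset concrete, a PropBank entry pairs a coarse-grained sense with its numbered roles; for example, the entry for "want" (sense labels paraphrased here for illustration) looks roughly like:

```
want.01 "desire"
  ARG0: wanter
  ARG1: thing wanted
```

AMR adopts these sense labels and numbered arguments directly (as want-01, :ARG0, :ARG1), which is what ties its graph relations back to PropBank's who-is-doing-what-to-whom annotation.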
If you would like to meet with the speaker, please contact Annemarie Friedrich.
Prof. Martha Palmer holds joint appointments in the Linguistics and Computer Science departments at the University of Colorado, and directed the 2011 Linguistic Institute. She is an ACL Fellow. Her research has focused on capturing elements of the meanings of words that can comprise automatic representations of complex sentences and documents. To support supervised machine-learning techniques, she and her students produce annotations and are engaged in training automatic sense taggers and semantic role labelers, funded by NSF and DARPA. Recently, with NIH funding, they applied these methods to biomedical journal articles and clinical notes. She is an editor of Linguistic Issues in Language Technology, and has been on the Editorial Board of Computational Linguistics and a co-Editor of the Journal of Natural Language Engineering.