At the 15th annual meeting of the Slavic Linguistics Society, Jacek Kudera and Philip Georgis (Project C4) presented ‘Phonetic distance in cross-lingual priming: Evidence from Bulgarian, Czech, Polish and Russian’.
Check out the new article by Vera Demberg and Tim Schröder on smart voice assistants, published in the latest issue of the DFG magazine “forschung”.
Read the full article here.
ABOUT THE WORKSHOP
Language is acquired, used, and evaluated through our understanding of the world around us. It is thus essential to capture such an understanding by exploiting knowledge from sources that are useful for grounding language. Recent work has shown the potential of visually grounded language for addressing task-specific challenges (e.g., visual captioning, VQA, dialog). In this workshop, we aim to go beyond the task-specific integration of language and vision, and we encourage submissions that leverage knowledge from external sources, whether provided by an environment or drawn from a fixed knowledge base.
Our motto: When you are groping in the dark, knowledge gives you the light!
TOPICS OF INTEREST
Topics of interest include, but are not limited to:
- Application of language and vision to robotics
- Cognitively- and neuroscience-driven vision and language learning (eye-tracking, fMRI, etc.)
- Common-sense knowledge acquisition from vision
- Enhancing visual perception with language and structured knowledge
- Human-robot interaction with language understanding and visual perception
- Integration of vision and language by building cross-modal relationship networks
- Integrated models of real-world knowledge, vision, and language for generating context-sensitive embeddings
- Language and vision for learning games
- Learning of quantities from vision
- Multi-task learning for integration of language and vision
- Reasoning with language to improve visual perception
- Text-to-Image (natural, sketch, synthetic) generation with external knowledge
- Theoretical understanding of limitations in the integration of vision and language
- Visual dialog, captioning and Q&A by incorporating common-sense/real-world knowledge
- Other novel tasks which combine language and vision by means of external knowledge
IMPORTANT DATES
Deadline for submission: August 21, 2020
Notification of acceptance: September 24, 2020
Deadline for camera-ready version: October 11, 2020
Workshop date: December 13, 2020
All deadlines are calculated at 11:59 pm UTC−12h.
INVITED SPEAKERS
Yonatan Bisk, Microsoft Research & CMU
Gemma Boleda, Universitat Pompeu Fabra & ICREA
Angeliki Lazaridou, DeepMind
Stefan Lee, Oregon State University
SUBMISSION GUIDELINES
We solicit two categories of papers, long and short workshop papers, which will be included in the workshop proceedings as archival publications. All submissions should be in PDF format and made through the Softconf link (https://www.softconf.com/coling2020/LANTERN/).
Submissions will go through a double-blind review process in which each submission is reviewed by at least two program committee members. Accepted papers will be presented by the authors in a regular workshop session, either as a talk or as a poster.
All submissions must be written in English and follow the COLING 2020 formatting requirements, using either the Word or the LaTeX template files provided by COLING 2020 (https://coling2020.org/).
- Long paper submission: up to 9 pages of content, plus bibliography
- Short paper submission: up to 4 pages of content, plus bibliography
The co-chairs of the workshop can be contacted by email at: email@example.com
ORGANIZERS
Aditya Mogadala, Saarland University
Sandro Pezzelle, University of Amsterdam
Dietrich Klakow, Saarland University
Marie-Francine Moens, KU Leuven
Zeynep Akata, University of Tübingen
A joint paper by members of Projects B6 and B7 has been accepted for publication and will be presented at the International Conference on Spoken Language Translation @ACL 2020!
Bizzoni, Yuri; Juzek, Tom S; España-Bonet, Cristina; Chowdhury, Koel Dutta; van Genabith, Josef; Teich, Elke
How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech
Proceedings of the 17th International Conference on Spoken Language Translation (IWSLT 2020), ACL
Two papers by members of Project C4 have been accepted for publication and will be available soon:
“Visual vs. auditory perception of Bulgarian stimuli by Russian native speakers” by Irina Stenger and Tania Avgustinova (26th International Conference on Computational Linguistics and Intellectual Technologies, Dialogue 2020)
“The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism” by Irina Stenger, Klára Jágrová and Tania Avgustinova (Citizen Linguistics in Language Resource Development Workshop, CLLRD 2020, at the Language Resources and Evaluation Conference, LREC 2020)
About 2,000 respondents have already taken part in our web-based intercomprehension experiments (http://intercomprehension.coli.uni-saarland.de/en/). Thank you all for participating and for forwarding the link to as many people as possible!
We still need many, many participants from various (Slavic) language backgrounds. Furthermore, we have NEW audio experiments with Bulgarian, Czech, Polish, and Russian stimuli!