Deep Learning Inside Out (DeeLIO) is the first workshop on knowledge extraction and integration for deep learning architectures. It aims to bring together the interpretation, extraction, and integration lines of research, and to cover the area in between. It will explore the introduction of external knowledge into deep learning models and representations, the types of linguistic and real-world knowledge neural nets encode, the extent to which this knowledge can be used for building resources, and whether re-integrating it into the models is beneficial compared to using external hand-crafted knowledge. Furthermore, the workshop has a strong focus on structurally diverse languages with varying semantic-syntactic properties (going well beyond English) and on low-data regimes. The workshop also aims to inspire novel variation-aware transfer learning and multilingual solutions that use knowledge from resource-rich languages (both knowledge extracted from large models and knowledge from readily available repositories in the source language) to inform deep learning architectures for languages where external repositories are scarce or missing. Unlike BlackboxNLP and other related initiatives, DeeLIO focuses on “deeper” lexico-semantic knowledge that can be recovered from, or integrated into, deep learning methods across a variety of languages.
Call for Papers
Topics of interest include, but are not limited to:
integration of external knowledge into neural networks (in the form of semantic specialization of embeddings, retrofitting, joint modeling, etc.)
exploration of the types of linguistic and extra-linguistic world knowledge encoded in neural models, architectures and representations
extraction of linguistic and world knowledge from deep learning models
use of the knowledge extracted from deep learning models in practice (for resource enrichment, knowledge transfer to resource-lean languages, etc.)
analyzing and understanding the limitations of the knowledge about language and the world acquired by current neural models
probing and analyzing different types of hand-crafted knowledge that can enhance “blind” distributional models
benefits of using external versus internally encoded knowledge, and their combination, for knowledge enhancement in neural networks
development and enrichment of lexico-semantic knowledge resources using deep learning models
(re)integration of (semi-)automatically compiled resources into deep learning models
using external knowledge in resource-lean languages through transfer techniques or joint multilingual modeling
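To make the first topic concrete, one classic form of knowledge integration is retrofitting, which nudges pre-trained embeddings toward their neighbours in an external lexicon (e.g. a synonym resource) while keeping them close to their original values. The sketch below is a minimal, illustrative implementation of that idea; the function name, the toy lexicon, and the uniform weights `alpha` and `beta` are our own assumptions, not part of the call:

```python
import numpy as np

def retrofit(embeddings, lexicon, iters=10, alpha=1.0, beta=1.0):
    """Illustrative retrofitting sketch.

    embeddings: dict mapping word -> np.ndarray (pre-trained vectors)
    lexicon:    dict mapping word -> list of neighbour words (external knowledge)
    alpha/beta: weights for staying near the original vector vs. moving
                toward lexicon neighbours (uniform here for simplicity)
    """
    # Work on copies so the original vectors are left untouched.
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in new or not nbrs:
                continue
            # Each vector becomes a weighted average of its original value
            # and the current vectors of its lexicon neighbours.
            neighbour_sum = sum(new[n] for n in nbrs)
            new[w] = (alpha * embeddings[w] + beta * neighbour_sum) / (
                alpha + beta * len(nbrs)
            )
    return new

# Toy usage: two words linked in the lexicon are pulled closer together.
emb = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
lex = {"a": ["b"], "b": ["a"]}
out = retrofit(emb, lex)
```

With the toy lexicon above, the retrofitted vectors for "a" and "b" end up closer to each other than the original pair, which is exactly the effect external lexical knowledge is meant to have on distributional vectors.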