Title: Learning and Inference with Hybrid Models (Three Examples)
Date: Tuesday 6th of June at 16:00
Abstract: Hybrid models result from the integration of two types of models: deep neural networks and logical models. The former can process high-throughput data in continuous spaces; the latter can express knowledge about abstract properties of, and relations between, the observed data. There is no unique way to integrate these two classes of models. A first method exploits logical knowledge to impose additional supervision during the training of a (set of) neural model(s) that predicts abstract properties and relations. This idea is implemented in Logic Tensor Networks (LTN) [1]. A second method uses background knowledge to correct or revise the predictions made by a (set of) neural model(s); it has been implemented in Iterative Local Refinement (ILR) [2]. A third method defines a hybrid model as the composition of a (set of) deep learning model(s), which abstracts the continuous perceptions into a finite and abstract representation--the symbols--and a discrete model that computes a finite function on the set of abstract symbols. This method is described in Deep Symbolic Learning [3]. In the seminar, I will briefly introduce the hybrid models described above and the mechanisms used for training and inference.
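To give a flavour of the first method, here is a minimal sketch (not the actual LTN library API; all function names are illustrative) of how a logical rule can be compiled into a differentiable loss term using fuzzy-logic operators, in the spirit of Logic Tensor Networks [1]. The rule "A(x) implies B(x)" is evaluated on the truth degrees predicted by a neural network, and the loss penalizes rule violations.

```python
# Illustrative sketch: turning the rule "A(x) -> B(x)" into extra
# supervision for a neural model, LTN-style. Function names are
# hypothetical; the real LTN library API differs.

def fuzzy_and(a: float, b: float) -> float:
    # Product t-norm: fuzzy conjunction of truth degrees in [0, 1].
    return a * b

def fuzzy_implies(a: float, b: float) -> float:
    # Reichenbach fuzzy implication: 1 - a + a*b.
    return 1.0 - a + a * b

def rule_loss(p_a: float, p_b: float) -> float:
    # p_a, p_b are network outputs read as truth degrees of A(x), B(x).
    # The loss is low when the rule "A(x) -> B(x)" is nearly satisfied.
    return 1.0 - fuzzy_implies(p_a, p_b)

# Network strongly believes A(x) but not B(x): the rule is violated,
# so the logical loss term is large and pushes training to revise B.
print(round(rule_loss(0.9, 0.1), 2))   # large loss
print(round(rule_loss(0.9, 0.95), 3))  # small loss
```

Because every operator is differentiable, this term can simply be added to the usual data-fitting loss and minimized by gradient descent.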
[1] Badreddine, S., Garcez, A. D. A., Serafini, L., & Spranger, M. (2022). Logic tensor networks. Artificial Intelligence, 303, 103649.
[2] Daniele, A., van Krieken, E., Serafini, L., & van Harmelen, F. (2023). Refining neural network predictions using background knowledge. Machine Learning, 1-39.
[3] Daniele, A., Campari, T., Malhotra, S., et al. (2022). Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions. arXiv preprint arXiv:2208.11561. (To appear in IJCAI 2023.)