Contrastive Learning in Neural Tensor Networks using Asymmetric Examples
Neuro-Symbolic models combine the best of two worlds: the knowledge representation capabilities of symbolic models and the representation learning power of deep networks. In this paper, we develop a Neuro-Symbolic approach to infer unknown facts from relational data. A well-known approach is to use statistical relational models such as Markov Logic Networks (MLNs) to perform probabilistic inference. However, these approaches are known to scale poorly and to be inaccurate on large, real-world problems. Therefore, given symbolic knowledge, we train a Neural Tensor Network (NTN) to learn representations for symmetries implied by the symbolic knowledge. Further, since the data is interconnected, predicting one fact can positively or negatively impact the prediction of other facts. Therefore, we train the NTN using open-world semantics over multiple possible worlds, learning to represent the symmetries in each world. We evaluate our approach on several real-world benchmarks, comparing with state-of-the-art relational learning methods, Neuro-Symbolic methods, and purely symbolic methods, clearly illustrating the generality, accuracy, and scalability of our proposed approach.
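As background for the abstract above, the following is a minimal sketch of the standard NTN scoring function (in the style of Socher et al.'s Neural Tensor Network) paired with a margin-based contrastive objective that scores a true triple above a corrupted negative one. All parameter names, shapes, and the specific loss form here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: entity embeddings of size d, k tensor slices per relation.
d, k = 4, 2

# Hypothetical per-relation NTN parameters (our naming, not the paper's):
W = rng.normal(scale=0.1, size=(k, d, d))    # bilinear tensor, one d x d slice per k
V = rng.normal(scale=0.1, size=(k, 2 * d))   # linear layer over the concatenation [e1; e2]
b = np.zeros(k)                              # per-slice bias
u = rng.normal(scale=0.1, size=k)            # weights combining the k slices

def ntn_score(e1, e2):
    """NTN score g(e1, R, e2) = u^T tanh(e1^T W e2 + V [e1; e2] + b)."""
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)      # one bilinear form per slice
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

def contrastive_loss(e1, e2, e2_neg, margin=1.0):
    """Margin-based contrastive loss: push the true triple's score
    above a corrupted (negative) triple's score by at least `margin`."""
    return max(0.0, margin - ntn_score(e1, e2) + ntn_score(e1, e2_neg))

# Usage: score a "true" pair against a randomly corrupted tail entity.
e1, e2, e2_neg = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
loss = contrastive_loss(e1, e2, e2_neg)
```

In practice the parameters would be trained by minimizing this loss over many (true, corrupted) triple pairs; the sketch only shows the forward computation.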
Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021
Islam, M., Sarkhel, S., & Venugopal, D. (2021). Contrastive Learning in Neural Tensor Networks using Asymmetric Examples. Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021, 28-39. https://doi.org/10.1109/BigData52589.2021.9671631