Neural-Symbolic Integration and the Semantic Web

Tracking #: 2188-3401

Pascal Hitzler
Federico Bianchi
Monireh Ebrahimi
Md Kamruzzaman Sarker

Responsible editor: 
Guest Editor 10-years SWJ

Submission type: 

Neural-Symbolic Integration, as a field of research, addresses fundamental problems related to building a technical bridge between symbolic, logic-based systems and approaches, and subsymbolic, artificial neural network or deep learning based machine learning. In this paper, we lay out promises and possible benefits of neural-symbolic integration research for the Semantic Web, and also potential benefits of Semantic Web and neural-symbolic integration research for deep learning. We also provide a brief overview of some of the current research going on in relation to this theme.

Minor Revision

Solicited Reviews:
Review #1
By Claudia d'Amato submitted on 30/Jun/2019
Minor Revision
Review Comment:

The paper focuses on the issues raised within the neural-symbolic integration research direction and argues for the benefits that it may bring to the Semantic Web. Special attention is dedicated to results that have been achieved in deep learning and the added value that deep learning could provide to the Semantic Web. A brief survey of the main state of the art is also provided.

The paper focuses on a very important research direction that certainly deserves to be highlighted within the Semantic Web community. The paper is well written and overall presents a clear story, pointing out the main pros and cons of the symbolic and numeric approaches and thus motivating the need for neural-symbolic integration. The goal of the paper is only clarified at the end of Section 1, when the paper organization is presented; a clear statement of the goal at the beginning of that section would definitely improve readability.

Most of the analysis concerning symbol-based methods refers to solutions grounded on deductive reasoning. However, this is not always the case: symbol-based solutions grounded on inductive and even abductive mechanisms have also been proposed. A broader discussion in this regard should be provided. A similar comment applies to Section 2. Additionally, some solutions proposed in the literature try to integrate the benefits of inductive and deductive reasoning (e.g., Claudia d'Amato, Nicola Fanizzi, Floriana Esposito: Reasoning by Analogy in Description Logics Through Instance-based Learning. SWAP 2006; Nicola Fanizzi, Claudia d'Amato, Floriana Esposito: Inductive Classification of Semantically Annotated Resources through Reduced Coulomb Energy Networks. Int. J. Semantic Web Inf. Syst. 5(4): 19-38 (2009); Claudia d'Amato, Nicola Fanizzi, Bettina Fazzinga, Georg Gottlob, Thomas Lukasiewicz: Ontology-based semantic search on the Web and its combination with the power of inductive reasoning. Ann. Math. Artif. Intell. 65(2-3): 83-121 (2012); Claudia d'Amato, Volha Bryl, Luciano Serafini: Data-Driven Logical Reasoning. URSW 2012: 51-62). This aspect could also be taken into account.

Additionally, the paper may benefit from a clearer and more straightforward presentation of (more specific) challenges for the future.

The part of the paper concerning deep learning is well presented, and the connection with existing solutions is clearly made. Some discussion of how these relate to approaches and solutions proposed in the explainable AI context (e.g., Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358) could be provided.

Overall, the paper offers a clear view of the added value of, and motivations for, pursuing neural-symbolic integration, along with a comprehensive set of pointers to the main references on the topic.

Review #2
By Freddy Lecue submitted on 02/Jul/2019
Minor Revision
Review Comment:

Review on Neural-Symbolic Integration and the Semantic Web

This is a position paper presenting the main avenues of work at the intersection of neural-network-based and more symbolic (Semantic Web and similar) approaches.

Although the topic is very broad, the paper does not intend to be a survey paper. Relevant references are given, and interesting future research directions are outlined. However, it is not obvious how those problems could be tackled; initial literature should be given to help future researchers start from relevant references.

Some visual illustrations (rough architectures) of the intertwined neural-network and symbolic approaches would be nice to have in the paper.

A discussion of the limitations of the combination of the two would also help, i.e., what are the contexts in which logics won't help neural networks, and vice versa? There is a bit of this at the beginning, where it is mentioned what each part is good at, but it is not clear in which contexts we should not even try the combination. Are there any?

Review #3
By Dagmar Gromann submitted on 06/Jul/2019
Review Comment:

Very interesting position paper on the integration and divide of symbolic and subsymbolic systems, with a particular focus on the benefits of their integration within the context of the Semantic Web and, vice versa, on how the Semantic Web can be beneficial to deep learning. Overall, I think it is a well-written position paper with an excellent line of argumentation and many very interesting pointers to get people interested in the field started.

When discussing the possibilities of how to train an NN for a specific ontology or knowledge graph, I think it would be interesting to consider the scenario of transfer learning and domain adaptation. Currently, Section 2 hints at these phenomena by talking about training on one domain/resource and then re-using this pre-trained model. However, I think it would be helpful to directly name these processes, i.e., transfer learning and domain adaptation. It would also be interesting to discuss whether transfer learning in its current form could even be applied to knowledge bases with relatively small (compared to deep learning datasets) and very often conflicting contents.

In terms of knowledge graph embeddings, it might be interesting to note that the majority of approaches have focused on encoding triples as stand-alone entities, and only recently has encoding contexts, that is, entire KG paths, become a research topic. A class of algorithms called path-ranking-based models has started investigating this phenomenon, such as Yin et al. (2018) below (Das and Palumbo also relate to this).
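To make the contrast between stand-alone triples and path contexts concrete, a toy sketch (not from the paper; the entities and relations are invented for illustration) of enumerating two-hop relation paths that a sequence model, as in the path-ranking line of work, could then be trained on:

```python
# Toy KG as (head, relation, tail) triples -- illustrative data only.
triples = [
    ("berlin", "capital_of", "germany"),
    ("germany", "member_of", "eu"),
    ("paris", "capital_of", "france"),
    ("france", "member_of", "eu"),
]

def two_hop_paths(triples):
    """Join triples on shared entities, yielding paths
    (head, rel1, mid, rel2, tail) instead of isolated triples."""
    paths = []
    for h1, r1, t1 in triples:
        for h2, r2, t2 in triples:
            if t1 == h2:  # first triple's tail is second triple's head
                paths.append((h1, r1, t1, r2, t2))
    return paths

paths = two_hop_paths(triples)
# e.g. ('berlin', 'capital_of', 'germany', 'member_of', 'eu')
```

Each such path is a short sequence over entities and relations, which is what a recurrent model over KG paths would consume rather than one triple at a time.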

If there is still space, it might be interesting to talk about integrating knowledge graphs and NNs directly in the architecture itself, as has been attempted in the case of Graph CNNs (see references below).
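For readers unfamiliar with Graph CNNs, a minimal sketch of a single graph-convolution layer in the style of Kipf & Welling (2017), using plain NumPy with untrained, made-up weights purely for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where A is the adjacency matrix, H the node features,
    and W a (normally learned) weight matrix."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    return np.maximum(0.0, A_norm @ H @ W)        # ReLU activation

# Toy graph: three nodes in a path 0-1-2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3, 2)        # trivial 2-dim node features
W = np.ones((2, 2))     # untrained weights, illustration only
out = gcn_layer(A, H, W)
```

The point of the sketch is that the graph structure (here the KG's adjacency) enters the network directly through the normalized adjacency matrix, rather than being flattened into embeddings beforehand.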

Additional references:
Even though many KG embedding approaches are covered, one that is interesting in terms of its treatment of graph neighborhoods, and which is frequently re-used nowadays, is missing, namely the node2vec approach:
"node2vec: Scalable Feature Learning for Networks". A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016.

Wenpeng Yin, Yadollah Yaghoobzadeh, and Hinrich Schütze. 2018. Recurrent one-hop predictions for reasoning over knowledge graphs. In COLING.
Rajarshi Das, Arvind Neelakantan, David Belanger,and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. In EACL.
Enrico Palumbo, Giuseppe Rizzo, Raphael Troncy, Elena Baralis, Michele Osella, and Enrico Ferro. 2018. Knowledge graph embeddings with node2vec for item recommendation. In European Semantic Web Conference. Springer.

Graph CNN:
Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems (pp. 3844-3852).
Kipf & Welling (ICLR 2017), Semi-Supervised Classification with Graph Convolutional Networks (disclaimer: I'm the first author)
Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (pp. 1024-1034).

Maybe cross-referencing papers from the issue could be included here, such as Freddy Lecue's submission on Explainable AI.

p. 3 Semantic Web Technologies => Semantic Web technologies
p. 5 While deep learning ...., but it => omit "but"
p. 5 "is producing" => produces