Neural Axiom Network for Knowledge Graph Reasoning

Tracking #: 3276-4490

Authors: 
Juan Li
Xiangnan Chen
Hongtao Yu
Jiaoyan Chen
Wen Zhang

Responsible editor: 
Freddy Lecue

Submission type: 
Full Paper
Abstract: 
Knowledge graphs (KGs) generally suffer from incompleteness and incorrectness due to their automatic and semi-automatic construction processes. Knowledge graph reasoning aims to infer new knowledge or detect noise, which is essential for improving the quality of knowledge graphs. In recent years, various KG reasoning techniques, such as symbolic- and embedding-based methods, have been proposed and have shown strong reasoning ability. Symbolic-based reasoning methods infer missing triples according to predefined rules or ontologies; although rules and axioms have proven effective, they are difficult to obtain. Embedding-based reasoning methods, in contrast, represent the entities and relations of a KG as vectors and complete the KG via vector computation. However, they mainly rely on structural information and ignore implicit axiom information that is not predefined in KGs but can be reflected in the data; that is, each correct triple is also a logically consistent triple that satisfies all axioms. In this paper, we propose a novel NeuRal Axiom Network (NeuRAN) framework that combines explicit structural and implicit axiom information. It uses only the existing triples in KGs, without introducing additional ontologies. Specifically, the framework consists of a knowledge graph embedding module that preserves the semantics of triples, and five axiom modules that encode five kinds of implicit axioms using the entities and relations in triples. These axioms correspond to five typical object property expression axioms defined in OWL 2: ObjectPropertyDomain, ObjectPropertyRange, DisjointObjectProperties, IrreflexiveObjectProperty and AsymmetricObjectProperty. The knowledge graph embedding module and the axiom modules compute the scores that a triple conforms to the triple semantics and to the corresponding axioms, respectively. Evaluations on KG reasoning tasks show the effectiveness of our method.
Compared with knowledge graph embedding models and CKRL, our method achieves comparable performance on noise detection and triple classification, and significantly better performance on link prediction. Compared with TransE and TransH, our method improves link prediction performance on the Hit@1 metric by 22.4% and 21.2%, respectively, on the WN18RR-10% dataset.
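The scoring scheme described in the abstract can be illustrated with a toy sketch: a structural (embedding) score is combined with axiom-conformance scores to rank a triple. All names, embeddings and weights below are illustrative assumptions, not the authors' implementation; the structural part uses a TransE-style distance, and only one of the five axiom modules (irreflexivity) is sketched.

```python
# Hypothetical sketch of combining a structural score with an axiom score,
# in the spirit of NeuRAN. Embeddings and weights are made up for illustration.

# Toy 4-dimensional embeddings (normally these would be learned).
emb = {
    "paris": [0.1, 0.2, 0.0, 0.3],
    "france": [0.2, 0.4, 0.1, 0.5],
    "capital_of": [0.1, 0.2, 0.1, 0.2],
}

def transe_score(h, r, t):
    """Structural score: -||h + r - t||_1 (higher = more plausible)."""
    return -sum(abs(hv + rv - tv)
                for hv, rv, tv in zip(emb[h], emb[r], emb[t]))

def irreflexive_score(h, r, t):
    """Axiom score: penalize r(x, x) when r is treated as irreflexive."""
    return 0.0 if h != t else -1.0

def neuran_score(h, r, t, weight=0.5):
    """Overall plausibility: structural score plus weighted axiom score."""
    return transe_score(h, r, t) + weight * irreflexive_score(h, r, t)

# A plausible triple should outrank one that violates irreflexivity.
print(neuran_score("paris", "capital_of", "france"))
print(neuran_score("paris", "capital_of", "paris"))
```

In the full framework, the remaining axiom modules (domain, range, disjointness, asymmetry) would contribute analogous conformance scores, all computed from the entities and relations of existing triples without any predefined ontology.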
Full PDF Version: 
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Freddy Lecue submitted on 06/Mar/2023
Suggestion:
Accept
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing. Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data, (B) whether the provided resources appear to be complete for replication of experiments, and if not, why, (C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and (D) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.

Review #2
Anonymous submitted on 08/Mar/2023
Suggestion:
Accept
Review Comment:

The authors addressed the previous comments, and the project is available on their GitHub.

Review #3
Anonymous submitted on 21/Mar/2023
Suggestion:
Accept
Review Comment:

This paper presents an interesting approach to capture semantics in graph embedding approaches.
The manuscript has gone through a first revision and is, I think, looking good as is. My last recommendation for update would be to add a link to the datasets and source code in the paper. I noticed this link (https://github.com/JuanLi1621/NeuRAN) is present in the metadata on SWJ but could not find it in the paper. In addition to this the author should consider attaching a license to the source code.