Deep learning for noise-tolerant RDFS reasoning

Tracking #: 2186-3399

Authors: 
Bassem Makni
James Hendler

Responsible editor: 
Guest Editors Semantic Deep Learning 2018

Submission type: 
Full Paper
Abstract: 
Since the 2001 envisioning of the Semantic Web (SW) [1], the main research focus in SW reasoning has been on the soundness and completeness of reasoners. While these reasoners assume the veracity of input data, the reality is that the Web of data is inherently noisy. Although there has been recent work on noise-tolerant reasoning, it has focused on type inference rather than full RDFS reasoning. Even though RDFS closure generation can be seen as a Knowledge Graph (KG) completion problem, the problem setting is different, which makes KG embedding techniques designed for link prediction unsuitable for RDFS reasoning. This paper documents a novel approach that extends noise tolerance in the SW to full RDFS reasoning. Our embedding technique, tailored for RDFS reasoning, consists of layering RDF graphs and encoding them as 3D adjacency matrices in which each layer layout forms a graph word. Each input graph and its entailments are then represented as sequences of graph words, and RDFS inference can be formulated as the translation of these graph-word sequences, achieved through neural machine translation. Our evaluation shows 97% validation accuracy on the synthetic LUBM1 dataset and 87.76% on a subset of DBpedia, while demonstrating a noise tolerance unavailable to rule-based reasoners.
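To make the encoding described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a shared entity index per graph, builds a layered 3D adjacency tensor (one layer per predicate), and maps each layer layout to an integer graph-word id by hashing the layout. The names (encode_rdf_graph, graph_words) and the toy vocabulary scheme are hypothetical; in the paper, the graph-word sequences of an input graph and of its entailment graph are the source and target of a neural machine translation model.

```python
import numpy as np

def encode_rdf_graph(triples, entities, predicates):
    """Encode an RDF graph as a 3D adjacency tensor.

    Layer k is the 2D adjacency matrix of predicate k over a shared
    entity index; each layer layout can then be mapped to a "graph word".
    (Illustrative only; the paper's exact indexing scheme may differ.)
    """
    ent_idx = {e: i for i, e in enumerate(entities)}
    pred_idx = {p: k for k, p in enumerate(predicates)}
    tensor = np.zeros((len(predicates), len(entities), len(entities)), dtype=np.uint8)
    for s, p, o in triples:
        tensor[pred_idx[p], ent_idx[s], ent_idx[o]] = 1
    return tensor

def graph_words(tensor, vocabulary):
    """Map each predicate layer to an integer graph-word id.

    `vocabulary` maps a layer layout (here: the bytes of the flattened
    layer) to a word id, growing as new layouts are observed.
    """
    words = []
    for layer in tensor:
        key = layer.tobytes()
        words.append(vocabulary.setdefault(key, len(vocabulary)))
    return words

# Toy usage: the input graph below becomes a short sequence of graph words;
# its RDFS entailments, encoded the same way, give the target sequence for
# the translation model.
triples = [
    ("GraduateStudent1", "rdf:type", "GraduateStudent"),
    ("GraduateStudent1", "memberOf", "Department0"),
]
entities = ["GraduateStudent1", "GraduateStudent", "Department0"]
predicates = ["rdf:type", "memberOf"]

vocab = {}
tensor = encode_rdf_graph(triples, entities, predicates)
print(graph_words(tensor, vocab))  # [0, 1]
```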
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Dagmar Gromann submitted on 25/Apr/2019
Suggestion:
Accept
Review Comment:

Dear Bassem and Jim,

Thank you for your detailed and thorough consideration of all reviewers' comments across the individual stages of the review process.

Kind regards,
Dagmar