Benchmarking Semantic Reasoning on Mobile Platforms: Towards Optimization Using OWL2 RL

Tracking #: 1781-2994

William van Woensel
Syed Sibte Raza Abidi

Responsible editor: 
Thomas Lukasiewicz

Submission type: 
Full Paper
Abstract:
Mobile hardware has advanced to a point where apps may consume the Semantic Web of Data, as exemplified in domains such as mobile context-awareness, m-Health, m-Tourism and augmented reality. However, recent work shows that the performance of ontology-based reasoning, an essential Semantic Web building block, still leaves much to be desired on mobile platforms. This presents a clear need for developers to benchmark mobile reasoning performance, based on their particular application scenarios, i.e., including reasoning tasks, process flows and datasets, to establish the feasibility of mobile deployment. In this regard, we present a mobile benchmark framework called MobiBench to help developers benchmark semantic reasoners on mobile platforms. To realize efficient mobile, ontology-based reasoning, OWL2 RL is a promising solution since (a) it trades expressivity for scalability, which is important on resource-constrained platforms; and (b) due to its rule-based axiomatization, it provides unique opportunities for optimization. In this vein, we propose selections of OWL2 RL rule subsets for optimization purposes, based on several orthogonal dimensions. We extended MobiBench to support OWL2 RL and the proposed ruleset selections, and benchmarked multiple OWL2 RL-enabled rule engines and OWL reasoners on a mobile platform. Our results show significant performance improvements by applying OWL2 RL rule subsets, allowing performant reasoning for small datasets on mobile systems.
Minor Revision

Solicited Reviews:
Review #1
By Nick Bassiliades submitted on 19/Dec/2017
Review Comment:

The authors have done an excellent job in addressing all comments and keeping the quality of the manuscript high.

Review #2
Anonymous submitted on 04/Feb/2018
Minor Revision
Review Comment:

Looking at the updated version of the paper, I find it better organised and more clearly written than before. Apart from a number of minor issues listed below, I do not think it can be improved significantly further without major changes to the results. The contribution of the paper can be summarised as follows:

(1) a system is designed that allows one to run ontology reasoners on mobile platforms and measure their performance, using subsets and modifications of the set of OWL2 RL inference rules;

(2) several such subsets and modifications are designed, some of which are equivalent to the normative set, while others guarantee only incomplete reasoning;

(3) experimental performance comparisons of the sets from (2) are given, using the system from (1), for two existing well-established reasoners.

Contribution (1) is a good engineering effort and can be useful for OWL 2 RL reasoner designers.
Contribution (2) is slightly less convincing to me, because I often could not find any justification for the suggested sets of inference rules, except something like “we believe it may be useful”—that is, I do not see any explanations why these particular sets are better than many others possible.
Contribution (3) is even less convincing to me, because the (qualitative) results are often trivial. For example, I can tell without any experiments that if some inference rules are removed from a ruleset making it incomplete, then the materialisation is done quicker on any (reasonable) reasoner. Same holds if I delegate a part of reasoning to a preprocessing step. On the other hand, the quantitative part of the experimental results can have some value.

Personally, I find these contributions as a whole on the borderline of being publishable in the Semantic Web Journal. However, if the other reviewers find the paper good enough, I suggest listening to them when making the final decision.

List of minor issues:

— page 2: Section 22 should be Section 2 and Section 44 should be Section 4;
— page 4: it is not clear why variables are mentioned in T(?s, ?p, ?o);
— page 10: “subset (L1) of these rules list” -> “… lists”;
— page 12: a whitespace before # is missing;
— page 14: footnote numbers are screwed up;
— page 15: the meaning of “instances” in “instances D” in Theorem 1 is unclear;
— page 15: the border between the formulation of Theorem 1 and its proof is unclear;
— page 15: the notation R_0^- appears just once in the paper, so its role is unclear;
— page 16: is it true that “the final result of sequentially applying …” is just an experimental check that Theorem 1 and the fact that “can be similarly shown” (page 15, end of column 1) hold? If I’m correct, then this result is useless; proved theorems do not need experimental confirmation.

Review #3
Anonymous submitted on 19/Feb/2018
Review Comment:

Most of my previous concerns have been addressed in this new version. However, there still remain some points that could improve the understanding of your work a little more:

* Section 6.6.1, although now organized into three subsubsubsections (too deep!), is still about four pages long. However, subsections 6.1-6.5 are only 1.5 pages long in total. Also notice that there does not exist any section 6.6.2, so the real problem is that section 6.6 is huge due to only one subsubsection (indeed, 6.6 = 6.6.1). So, remove the title 6.6.1, keep and fix its content, and perhaps adjust the title of 6.6 accordingly, to avoid oversectioning. This will transform,, and into 6.6.1, 6.6.2, and 6.6.3. Think about grouping or minimizing 6.1-6.5; for instance, 6.5 could be just a paragraph before 6.1.

* I insist on my previous comment: many times a text line begins or ends with a numeric reference. Please browse the whole text to fix this; just look for lines beginning or ending with a number or numeric reference. For instance, around [11] and (1) on page 2, and (a) by the end of 5.2.1. Please always add non-breaking spaces before and after references to avoid this problem.

* I insist on my previous comment: I don't understand what some hyphens mean, for instance, in "rule- and datasets, ...", "mobile- and server-deployed" (page 1), "rule- and datasets" (page 2). Please browse the whole text to fix this.

Other minor comments:

* The three points of the paragraph "In this regard" on page 2 are very far from each other. Please itemize these three points for a better structure of that long paragraph.