Resource-Constrained Reasoning Using a Reasoner Composition Approach

Tracking #: 446-1620

Wei Tai
John Keeney
Declan O'Sullivan

Responsible editor: 
Guest Editors Semantic Web For All

Submission type: 
Full Paper
The marriage between semantic web and sensor-rich systems can semantically link the data related to the physical world with the existent machined-readable domain knowledge encoded on the web. This allows a better understanding of the inherently heterogeneous sensor data through data access and processing based on semantics. Research in different domains has started to seek a better utilization of data in this manner. Such research often assumes that powerful centralized data processing and reliable network connections are available, which is not realistic in some critical situations. In such situations, a more robust semantic system needs some local autonomy, and hence on-device semantic data processing is required. As a key enabler of this strategy, semantic reasoning needs to be developed for resource-constrained environments. This paper shows how reasoner composition (i.e. automatically adjusting a reasoning approach to preserve only the amount of reasoning needed for the ontology being reasoned over) can achieve resource-efficient semantic reasoning. Two novel reasoner composition algorithms have been designed and implemented. Evaluation indicates that the reasoner composition algorithms greatly reduce the resources required for OWL reasoning, facilitating the deployment of semantic data processing on sensor devices.
Major Revision

Solicited Reviews:
Review #1
By Christophe Guéret submitted on 23/May/2013
Review Comment:

This paper describes a modified RETE algorithm that is able to select the rules to load and apply depending on the data at hand. The direct consequence of this, shown experimentally, is that less memory is consumed while keeping similar execution time. This is a well-written paper that presents very interesting work.

In fact, the main question that comes to my mind when looking at the results is: why doesn't everyone do this by default? The algorithm presented lowers memory usage without impacting computation time. This sounds like an optimisation every reasoner could implement, even those not running in constrained environments. If there are any non-visible drawbacks (untold, or that I missed), the paper could be updated to make them more visible.

The paper is good, but some parts could be cut down into sub-sections, Section 3.3 for instance. Pages full of blocks of text are not very attractive to read and are harder to use when scouting for a particular piece of information.

Review #2
By Aidan Hogan submitted on 05/Jun/2013
Major Revision
Review Comment:

The paper is centred around a RETE-based method for forward-chaining reasoning over pD* rules for OWL. The core problem addressed by the paper involves the overheads of conventional reasoning engines with respect to memory and with respect to time; hence the authors argue that many standard reasoning approaches are not well-suited to resource-constrained devices, such as sensors. Thus they propose a method that they call "reasoner composition", which essentially involves configuring the RETE reasoning process to (i) order the join processing implicit in the rule bodies based on the cardinalities of the patterns using standard techniques, (ii) avoid duplicating data in memory for triple patterns that are equivalent up to variable renaming, (iii) turn off rules that a prior analysis of the ontology deems not to be needed. Whereas (i) clearly optimises for time (and thus, one could imagine, battery consumption, etc.), (ii) and (iii) mainly optimise for space. The authors evaluate their composition techniques for a selection of small ontologies on an embedded microprocessor with limited resources and discuss the benefits that their techniques bring. They also compare their proposals against existing reasoners on conventional hardware, showing that their reasoning engine is also competitive with existing engines (that cannot themselves be run on the microprocessor).

In my opinion, the paper is on-topic for the special issue it is submitted under. The problem that the authors are tackling is interesting, non-trivial, clearly stated and well-justified. The paper is clearly written and self-contained and the methods are sufficiently well evaluated to support the conclusions.

In terms of what I liked most about the paper:

* The methods proposed by the authors are generally explained in a very clear manner, with good use of a running example. This makes it quite easy to understand the technical proposals (although perhaps formal definitions might be useful in certain cases to clarify some details).

* The evaluation was run on a resource-constrained device, showing the benefit of the proposed techniques on hardware that reflects the problem outlined in the introduction of the paper, that is relevant to the special issue, and that outlines the novelty of the authors' work in relation to conventional reasoning algorithms.

* The authors do a very good job throughout highlighting not only the strengths, but the weaknesses of their proposals, giving a balanced discussion.

In terms of what I perceived to be weaknesses of the paper:

* The methods are not hugely novel in the respect that the authors make what I would deem to be "obvious" optimisations to the standard RETE algorithm. For example, join reordering is standard in query optimisation, where, in the most similar case, nested-loop joins are optimised by reordering join patterns based on cardinality estimates for individual patterns as well as estimates for join cardinalities; in particular, the cardinality estimates used by the authors are restricted to those available from the input data, and thus the problem is exactly the same as has been tackled many times for query optimisers (e.g., compare the problem tackled in [1] to ordering the rule-body patterns for RETE in this paper).
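To illustrate the kind of reordering the reviewer is referring to, here is a minimal sketch (my own, not the paper's algorithm): greedily place the cheapest pattern first, then repeatedly pick the cheapest remaining pattern that shares a variable with those already placed. The statistics table and all names are illustrative assumptions.

```python
# Greedy cardinality-based reordering of rule-body triple patterns.
# A sketch only, not the authors' implementation.

def estimate(pattern, stats):
    """Estimated matches for a pattern: a bound predicate uses its
    per-predicate count; otherwise fall back to the total triple count."""
    s, p, o = pattern
    if not p.startswith("?"):
        return stats.get(p, 0)
    return stats["_total"]

def reorder(patterns, stats):
    """Order patterns cheapest-first, preferring patterns connected
    (via a shared variable) to those already placed."""
    remaining = list(patterns)
    ordered = [min(remaining, key=lambda c: estimate(c, stats))]
    remaining.remove(ordered[0])
    bound = {t for t in ordered[0] if t.startswith("?")}
    while remaining:
        connected = [c for c in remaining
                     if any(t in bound for t in c if t.startswith("?"))]
        pick = min(connected or remaining, key=lambda c: estimate(c, stats))
        remaining.remove(pick)
        ordered.append(pick)
        bound |= {t for t in pick if t.startswith("?")}
    return ordered
```

On a toy statistics table where rdfs:subClassOf occurs 20 times and rdf:type 1,000 times, the subClassOf pattern is placed first, so the more selective alpha memory drives the join.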

* The evaluation is over what I would perceive to be very very small ontologies, particularly for a lightweight profile of OWL like pD*. Offsetting this concern is the fact that evaluation is on a resource-constrained device, but still, with the largest ontology being 1,080 triples in the microprocessor setting and 5,924 triples in the conventional setting, the limited scale of data under evaluation is a concern for me.

* The authors do not tackle the issue of completeness of their proposed RETE algorithm when rules are omitted. The authors claim that "Rules not selected must have premises out of the scope of the ontology and therefore are not used in the reasoning ... without any loss of accuracy in the reasoning process". As I currently understand the analysis of OWL premises (which is not rigorously defined), this is not true in the presence of so-called non-standard use, where RDFS/OWL constructs are extended. For example, if I had an ontology:

ex:speciesInOrder rdfs:subPropertyOf rdfs:subClassOf .
ex:HomoSapiens ex:speciesInOrder ex:Primate .
ex:Tim rdf:type ex:HomoSapiens .

... which is valid in RDF(S) or in OWL Full, I understand that sub-class rule rdfs9 would be turned off and that the valid inference "ex:Tim rdf:type ex:Primate" would be missed. (If the algorithm is not complete in these non-standard examples, this is not a fundamental issue, but claims of completeness need to be better justified when rules are omitted, including more details on how the process of omitting rules is decided.)
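The incompleteness can be seen concretely with a toy forward-chaining sketch (my own construction, with just rules rdfs7 and rdfs9 hard-coded). With both rules, the closure contains (ex:Tim, rdf:type, ex:Primate); with rdfs9 pruned, as a naive predicate-occurrence check would do here since rdfs:subClassOf never appears as a *predicate* in the input, the inference is lost.

```python
# Toy semi-naive forward chaining over the reviewer's example.
# rdfs7: (p subPropertyOf q), (s p o)      -> (s q o)
# rdfs9: (c subClassOf d),    (x type c)   -> (x type d)
RDFS7 = "rdfs7"
RDFS9 = "rdfs9"

def closure(triples, rules):
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        if RDFS7 in rules:
            for (p, a, q) in facts:
                if a == "rdfs:subPropertyOf":
                    for (s, p2, o) in facts:
                        if p2 == p:
                            new.add((s, q, o))
        if RDFS9 in rules:
            for (c, a, d) in facts:
                if a == "rdfs:subClassOf":
                    for (x, t, c2) in facts:
                        if t == "rdf:type" and c2 == c:
                            new.add((x, t, d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

data = {
    ("ex:speciesInOrder", "rdfs:subPropertyOf", "rdfs:subClassOf"),
    ("ex:HomoSapiens", "ex:speciesInOrder", "ex:Primate"),
    ("ex:Tim", "rdf:type", "ex:HomoSapiens"),
}
```

Here rdfs7 first *derives* the subClassOf triple (ex:HomoSapiens, rdfs:subClassOf, ex:Primate), which only then makes rdfs9 applicable; any rule-selection analysis restricted to the input triples would not anticipate this.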

* As far as I can tell, the authors look solely at low-scale in-memory reasoners. However, there is also a growing body of research on how to scale to comparatively very large inputs by minimising the amount of data that needs to be indexed in memory (e.g., [2,3,4,5,6]). Although these papers look at centralised reasoning over clusters of powerful hardware, they similarly tackle the issue of efficient reasoning and minimising overheads for similar rulesets to those mentioned in the paper under review. For example, a lot of these papers propose to only store terminological data in memory to minimise resources [4,6].

* At times, I wondered how resource usage could be optimised just by using good software engineering techniques. For example, the use of OIDs and a dictionary in the reasoning process could reduce memory overheads to a fraction of what storing full strings in memory would require; this is not tackled. Similarly, when the authors argue that "by sharing the alpha node, the total amount of memory allocated to these conditions can be cut to 1/n if n conditions share a same alpha node", this is really dependent on the specifics of the programming language and how the triples are stored: since triples are essentially immutable objects, they can be interned, where in Java for example, instead of duplicating the full triple object n times, one triple object can be referenced n times at close to 1/n of the cost (depending on the reference-to-object size ratio).
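The dictionary-encoding technique the reviewer mentions can be sketched in a few lines (an illustrative sketch with invented names, not anything from the paper under review): each term string is stored once and triples become small integer tuples.

```python
# Sketch of dictionary encoding for RDF terms: one string copy per
# term, triples stored as integer-OID tuples. Illustrative only.

class TermDictionary:
    def __init__(self):
        self.term_to_oid = {}   # term string -> integer OID
        self.oid_to_term = []   # OID -> term string (index = OID)

    def encode(self, term):
        """Return the OID for a term, assigning a fresh one if unseen."""
        oid = self.term_to_oid.get(term)
        if oid is None:
            oid = len(self.oid_to_term)
            self.term_to_oid[term] = oid
            self.oid_to_term.append(term)
        return oid

    def encode_triple(self, s, p, o):
        return (self.encode(s), self.encode(p), self.encode(o))

    def decode_triple(self, t):
        return tuple(self.oid_to_term[i] for i in t)
```

Repeated terms (e.g. rdf:type in every type assertion) then cost one machine word per occurrence rather than one string copy, which is exactly the kind of saving the review suggests is possible independently of alpha-node sharing.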

In summary, I see weaknesses in the paper, but since it is relevant for the special issue and is generally well written, I think it should be accepted after the authors make the following revisions:

1. Discuss related work in the area of query optimisation and cardinality estimations for RDF (perhaps in the context of the discussion that ends Section 3), and also work on scalable rule-based reasoning for RDF.
2. Add evaluation of larger ontologies as a stress-test (i.e., to show the size of ontologies at which the reasoner can no longer work on the selected device). One option: use the standard LUBM benchmark at various sizes in the intra-reasoner configuration up to the point where reasoning is no longer possible.
3. Clarify the issue of completeness (if applicable) and clarify how rule exclusion is decided.
4. Clarify issues of what precisely is stored in memory and discuss why, e.g., dictionary techniques are not used.
5. Address the following minor comments.

* There are a number of typos and badly worded sentences in the paper, so please revise again (only some are listed here).
* "The marriage ... can semantically link the data ..." The marriage can?! Be clearer.
* "This increases ..." What does?
* machined-readable -> machine-readable
* "Existing semantic reasoners are too resource-intensive ... This is particularly the case for rule-entailment reasoners". No it's not. Rule-based reasoners are typically analogous to Datalog and thus tractable no matter the rules/inputs and they can use secondary storage. Tableau reasoners can go up, e.g., N2ExpTime for OWL 2 DL inputs and usually require full in-memory computation.
* "checking the satisfiability of a DL concept ..." I'm not clear on how checking if it is possible to have an instance of something that is a Vehicle or not a Car is useful here. I think you want to check if it is possible to have something that is a Car and not a Vehicle?
* "where dedicated non-logical symbols need" ... What does this refer to? Surrogate blank nodes? I'm not sure what is meant: be more specific.
* "Compared to DLP style rules, which are ontology specific, each entailment rules ...". This is confusing. pD* covers as much if not more OWL semantics than DLP. pD* rules do not, however, carry any of the semantics of the ontology. Is this what is meant?
* "it still preserves" ... It doesn't preserve the full semantics and entailments for all of the listed constructs. pD* only has partial support for, e.g., cardinality restrictions, (some) values restrictions, etc.
* "To elaborates both ..." Rephrase.
* "populated. figure 5" Capitalise "figure".
* "Let C_1, C_2 ... C_{n-1} be a connected partial join sequence" Should be better defined. In particular, what is each C_i? What does connected mean? Does C_{i+1} need to share a join variable with C_{i} or C{j} for j<=i? (The meaning can be deduced but should be made explicit.)
* Figure 7 caption floats on wrong page.
* "A limitation of the selective rule loading algorithm ..." This is not only relating to terminological data, but also to, e.g., assertional triples with predicate owl:sameAs.
* "Finally a sample of Jena-compatible pD* rule set was authored." What does this mean? All pD* rules were written in Jena rule syntax? Or some pD* rules were omitted since they wouldn't be Jena-compatible? Why was it a "sample" rule set?
* "(PTIME combined complexity)". What is meant by combined complexity and what reasoning tasks are considered? See
* "mainly from the not constructing" ... rephrase

* "l.h.s."/"r.h.s." For RDF(S)/OWL, I've seen rules written with bodies on the left/heads on the right and heads on the left/bodies on the right. Saying body/head would be less ambiguous.
* "However, there is no evidence showing it will scale well ...". Clichéd as it might be for a reviewer to mention their own work, in [4] we showed methods by which a similar technique does scale well even in the presence of hundreds of thousands of rules, assuming certain optimisations whereby you can quickly find only those rules relevant for a given triple. I still believe that the idea of compiling the terminology of the ontology directly into a set of rules like RDFS/pD*/OWL 2 RL to create a set of DLP-style rules is competitive with your proposals of dropping irrelevant "meta-rules", since both factor out unnecessary clauses. In fact, the DLP-style rules will contain fewer variables and thus would lend themselves to better cardinality estimates.
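To make the contrast concrete, here is a minimal sketch of the DLP-style compilation the reviewer describes (my own toy version, not the SAOR implementation): each rdfs:subClassOf axiom in the terminology is compiled into a one-premise rule, so the two-premise meta-rule rdfs9, and the join it implies, is not needed at materialisation time.

```python
# Sketch: compile subClassOf terminology into specialised one-premise
# rules, then apply them to assertional data. Illustrative only.

def compile_subclass_rules(tbox):
    """For each (c, rdfs:subClassOf, d) axiom, emit the rule
    (?x rdf:type c) -> (?x rdf:type d), represented as (c, d)."""
    return [(c, p, d)[::2] for (c, p, d) in tbox if p == "rdfs:subClassOf"]

def apply_rules(rules, abox):
    """Materialise the closure of the compiled rules over the ABox."""
    facts = set(abox)
    changed = True
    while changed:
        changed = False
        for (c, d) in rules:
            for (x, p, c2) in list(facts):
                if p == "rdf:type" and c2 == c and (x, p, d) not in facts:
                    facts.add((x, p, d))
                    changed = True
    return facts
```

Note that each compiled rule has a single variable (?x) and a single premise, whereas the generic rdfs9 has three variables and a join between a terminological and an assertional pattern, which is the reviewer's point about fewer variables and better cardinality estimates.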
* The ontology in Figure 3 is OWL Full since the some-values restriction is not described in RDF using a blank node. See, e.g., (This is not necessarily a problem but may raise the eyebrow of some readers, so it's worth noting.)
* The implementation details are quite longwinded (e.g., mentioning specific Jena RETE classes) and difficult to understand, though I grant that such detail might be useful for someone wishing to re-implement the system. Maybe move the more elaborate detail to an appendix? Or link to the source instead?
* The Java methods used to measure memory are not particularly reliable. freeMemory() is an approximation and totalMemory() can change at any time if the heap space is not fixed for the JVM. Running gc() 20 times might be helpful but gc() doesn't block on call. (That said, the results are probably sufficiently accurate for the purposes of the paper, particularly when averaged over 10 runs.)

[1] Thomas Neumann, Guido Moerkotte: Characteristic sets: Accurate cardinality estimation for RDF queries with multiple joins. ICDE 2011: 984-994
[2] Barry Bishop, Atanas Kiryakov, Damyan Ognyanoff, Ivan Peikov, Zdravko Tashev, Ruslan Velkov: OWLIM: A family of scalable semantic repositories. Semantic Web 2(1): 33-42 (2011)
[3] Jesse Weaver, James A. Hendler: Parallel Materialization of the Finite RDFS Closure for Hundreds of Millions of Triples. International Semantic Web Conference 2009: 682-697
[4] Aidan Hogan, Jeff Z. Pan, Axel Polleres, Stefan Decker: SAOR: Template Rule Optimisations for Distributed Reasoning over 1 Billion Linked Data Triples. International Semantic Web Conference (1) 2010: 337-353
[5] Vladimir Kolovski, Zhe Wu, George Eadon: Optimizing Enterprise-Scale OWL 2 RL Reasoning in a Relational Database System. International Semantic Web Conference (1) 2010: 436-452
[6] Jacopo Urbani, Spyros Kotoulas, Jason Maassen, Frank van Harmelen, Henri E. Bal: WebPIE: A Web-scale Parallel Inference Engine using MapReduce. J. Web Sem. 10: 59-75 (2012)

Review #3
By Christophe Dupriez submitted on 07/Jun/2013
Minor Revision
Review Comment:

This is a very good article explaining how rule-based systems may be better used in distributed field systems. It brings me to the idea that better management of ontologies and rules by humans (based on better feedback from automated systems) is essential for real-life applications that are required to perform over the long term (disaster planning/management applications, for instance).
My only concern is the ontology behind the article! As it is very broad, it would be very helpful to add a glossary of the essential concepts and acronyms used, with a short indication of how those concepts relate to each other. A one-page schema of this would support the reader wanting to dig efficiently into each aspect covered by the article.
Congratulations for the work done!