Temporal Representation and Reasoning in OWL 2.0

Tracking #: 855-2065

Sotiris Batsakis
Euripides Petrakis

Responsible editor: 
Aldo Gangemi

Submission type: 
Full Paper

Abstract:
The representation of temporal information has been at the center of intensive research activities over the years in the areas of knowledge representation, databases and, more recently, the Semantic Web. The proposed approach extends the existing framework of representing temporal information in ontologies by allowing for the representation of concepts evolving in time (referred to as “dynamic” information) and of their properties in terms of qualitative descriptions in addition to quantitative ones (i.e., dates, time instants and intervals). For this purpose, we advocate the use of natural language expressions, such as “before” or “after”, for temporal entities whose exact durations or starting and ending points in time are unknown. Reasoning over all types of temporal information (such as the above) is also an important research problem. The current work addresses all these issues as follows: the representation of dynamic concepts is achieved using the “4D-fluents” or, alternatively, the “N-ary relations” mechanism. Both mechanisms are thoroughly explored and are expanded for representing qualitative and quantitative temporal information in OWL. In turn, temporal information is expressed using either intervals or time instants. Qualitative temporal information representation in particular is realized using sets of SWRL rules and OWL axioms, leading to a sound, complete and tractable reasoning procedure based on path consistency applied to the existing relation sets. Polynomial time complexity of temporal reasoning is achieved by restricting the supported sets of relations to “tractable” sets. Building upon existing Semantic Web standards (like OWL 2.0, SWRL), as well as integrating temporal reasoning support into the proposed representation, are important design features of our approach.

Major Revision

Solicited Reviews:
Review #1
By Anisa Rula submitted on 10/Nov/2014
Major Revision
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing.

This paper addresses the problem of representing temporal information in ontologies, which cannot represent N-ary relations due to the syntactic restriction to binary relations.

The authors propose an approach that extends previous works. This approach considers representing both explicit and implicit temporal information through mechanisms such as 4D-fluents or N-ary relations. The explicit temporal information, or the quantitative representation of time as indicated in the paper, refers to time points or time intervals that are represented by the OWL-Time ontology. The implicit temporal information, or the qualitative representation of time, refers to the relations between time intervals. The latter is realized using sets of SWRL rules and OWL axioms.
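As a reader's aid, the two mechanisms summarized above can be sketched as sets of triples. This is a minimal sketch: all entity and property names below (worksFor, tsTimeSliceOf, hasEvent, and so on) are invented for illustration and are not taken from the paper under review.

```python
# 4D-fluents: the temporal property holds between two time slices, each
# tied to its enduring entity and to the interval during which it exists.
fluents = {
    ("EmployeeTS1", "rdf:type",       "TimeSlice"),
    ("EmployeeTS1", "tsTimeSliceOf",  "Employee1"),
    ("EmployeeTS1", "tsTimeInterval", "Interval1"),
    ("CompanyTS1",  "rdf:type",       "TimeSlice"),
    ("CompanyTS1",  "tsTimeSliceOf",  "Company1"),
    ("CompanyTS1",  "tsTimeInterval", "Interval1"),
    ("EmployeeTS1", "worksFor",       "CompanyTS1"),
}

# N-ary relations: a single new "event" object carries the links to the
# subject, the object, and the time of the relation.
nary = {
    ("Employment1", "rdf:type", "Event"),
    ("Employee1",   "hasEvent", "Employment1"),
    ("Employment1", "employer", "Company1"),
    ("Employment1", "atTime",   "Interval1"),
}
```

Either way, the plain binary statement (Employee1 worksFor Company1) is recoverable together with the interval during which it held; the two mechanisms differ in how many auxiliary individuals they introduce per temporal fact.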

Although the paper is an extension of a conference paper, I find it difficult to see the extension and thus the additional contributions with respect to the previous work published by the same authors. In addition, the list of references has not been updated recently: the most recent reference is from 2012, while this work was submitted in 2014.

Further, I consider the point-based representation a sub-problem of the interval-based representation, and thus all the rules defined for the interval-based approach can fit this case. On the other hand, as the authors mention in the paper, the interval-based representation can be reduced to the point-based representation, which, as shown by the experiments, performs better.
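The reduction referred to above can be made concrete: each Allen interval relation translates into constraints between the intervals' start and end points. A minimal sketch follows (a partial table only; the relation names follow Allen's calculus, while the "+"/"-" endpoint encoding is an invented convention, not the paper's):

```python
def as_point_constraints(relation, i, j):
    """Translate an Allen basic relation between intervals i and j into
    constraints on their endpoints; interval x has start x+"-", end x+"+"."""
    s_i, e_i, s_j, e_j = i + "-", i + "+", j + "-", j + "+"
    table = {
        "before": [(e_i, "<", s_j)],                      # i ends before j starts
        "meets":  [(e_i, "=", s_j)],                      # i ends where j starts
        "during": [(s_j, "<", s_i), (e_i, "<", e_j)],     # i strictly inside j
        "equals": [(s_i, "=", s_j), (e_i, "=", e_j)],
    }
    return table[relation]
```

For example, `as_point_constraints("before", "i", "j")` yields the single endpoint constraint `("i+", "<", "j-")`, so interval reasoning reduces to point reasoning over a larger set of variables.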

Finally, the experiments in section 5.4 do not show any significant result, since the behavior was already demonstrated theoretically in other works.

Minor Comments
Section 2.4
- object-predicate-subject -> subject-predicate-object
Section 3.1
- p. 6 check the consistency of names in the text with the names in Fig.4
- p. 6 with the obvious interpretation –> A vague statement
Section 4.1
- The acronym DOS of During, Overlaps and Starts should also be consistent in the formula (e.g., see formula 3)
- as a result as follows –> check the sentence
Section 5.3
- p.14 if intervals -> of intervals
- p.14 choose between Figure or Fig.

Review #2
By Chris Welty submitted on 11/Nov/2014
Major Revision
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing.

I have completed an initial reading of the paper. In general it is readable, despite many grammatical errors, and follows, I think, a very natural course from the previous work of the authors. I haven't yet dived into the details of the formalizations, but the intuitions are sound and I'm sure any errors would be easy to fix.

I have two concerns at this point, that I would like to open for discussion:

(1) this thread of work (starting with my own contribution of "4-d fluents for owl") has been going on long enough without a real justification. I don't find the motivation very motivational; indeed, I think it is today safe to say that semantic web services never materialized and probably won't, and that since no one has ever used this style of temporal representation, it is doubly unlikely it is actually of importance.

(2) more significantly, while I'm willing to live with (1) above going unanswered - this is an academic publication after all, academic does in fact mean irrelevant - or at least some simple changes to the motivation section, from a scientific perspective I think we need to acknowledge that the formal analysis of the worst-case reasoning complexity is of no actual use. We need to understand whether for real data, the time slice proliferation is a problem. The empirical experiment in the paper is almost silly - random instances do not tell us anything at all. The original 4d fluents work was analyzed from the perspective of a "proof of concept" system that extracted entities, relations, and times from text. This application put a form of limit on the number of time-slice instances that were created - basically no more than two per sentence. But we did not investigate whether those time slices were related to others, except when the time points were ==. In any usage, whether real or hypothetical, we need to understand how many time points and time slices are created. I would think for a travel application, which is an example mentioned in the motivation, there would be a lot of relevant time points and thus a lot of relevant entities: how many are created from the endurants, what is the relationship? Constant, linear, polynomial, etc?

There is real data out there that could be used. Freebase uses the equivalent of reified temporal relations (they call them CVTs), look at for example the set of political appointments:


Click on the "CVT" links to see the reifications with dates. There are 152k marriages in Freebase:


many of which have durations. I suggest, quite strongly, that you grab this data, do a simple syntactic conversion, show how the 4d-fluents approach looks against the reified one, and then run the experiments on that data.


Review #3
Anonymous submitted on 30/Apr/2015
Major Revision
Review Comment:

Temporal Representation and Reasoning in OWL 2

This paper presents a work on temporal knowledge representation and reasoning on the Semantic Web using OWL 2 and SWRL.
4D-fluents and N-ary relations are presented in the first part, as well as Allen relations.
SWRL inference rules are presented in the second part.
An evaluation is presented at the end of the paper.
This is available as a Protégé plugin.

This paper is interesting and well written, although it contains some errors that must be corrected (see below). In particular, there is a misunderstanding of the W3C SW standards and of the status of other SW languages.
However, I wonder about the novelty and added value of the paper because, in my opinion, similar work has already been published in one way or another.
In addition, the evaluation part is not convincing at all (see below).


The real name is OWL 2 (not OWL 2.0)

"Formal definitions of concepts and of their properties form ontologies, which are defined using the OWL language"
RDFS or OWL languages

"existing Semantic Web standards (e.g., OWL, SWRL)"
OWL is a standard, SWRL is not a standard

"Description Logics (DLs) are a family of Knowledge Representation languages that form the basis for the Semantic Web standards "
No, the basis of SW standards are RDF, RDFS (i.e. labelled graphs) and SPARQL.
OWL is the standard for rich ontologies, on top of RDF/S.

"The OWL language is based on DLs and it is the basic component of the Semantic Web initiative."
No, see above.

"triplets of the form object-predicate-subject"
triples of the form subject-predicate-object

"Classes of the object and the subject of a property are abbreviated as domain and range respectively. "
Classes of the subject and the object of a property are abbreviated as domain and range respectively.

"adoption of OWL 2 as the current Semantic Web standard"
adoption of OWL 2 as the current Semantic Web standard for rich ontologies

"Using an improved form of reification, the N-ary relations approach [19] suggests representing an n-ary relation as two properties each related with a new object (rather than as the object of a property)."
Why two properties? There are (at least) three: subject, object and time.

In Fig 2, using the same properties for inverse properties is puzzling; you must argue for this.

"The N-ary relations approach referred to above is considered to be an alternative to the 4D-fluents approach considered into this work."

"Building upon well established standards of the semantic web (OWL 2.0, SWRL)"
SWRL is not a standard


"CompanyName is static property"
a static property

"each interval interval"

"sameAs OWL keyword"
sameAs OWL property

"In our work, the temporal property remains a property relating the additional object with both the objects (e.g., an Employee and a Company) involved in a temporal relation."

"Enforcing transitive properties is rather involved"
What does that mean ?

"Specifically, when a property is temporal, if the domain of property is ClassA and the range is ClassB (where domains and ranges can be composite class definitions or atomic concepts), then using the N-ary representation the domain becomes ClassA OR Event and the range ClassB OR Event. Compared to 4D- fluents, the disjunction of concepts appearing both in domain and ranges of properties limits specificity of references of the N-ary representation."
You could use two other properties for the inverse and you would not have this problem.
Elaborate on reusing the same properties for the inverse.

"To the best of our knowledge, this is the only known solution to
this problem."
Similar work has already been published, e.g. :
Time-Oriented Question Answering from Clinical Narratives Using Semantic-Web Techniques at ISWC 2010

"The maximal tractable subset of Allen relations containing all basic relations when applying path consistency comprises of 868 relations [18]. Tractable subsets of Allen relations containing 83 or 188 relations [30] can be used instead, "
This notion of a tractable subset containing 868 (or however many) relations is not clear and should be explained.

"Since compositions and intersections are constant-time operations (i.e., a bounded number of table lookup operations at the corresponding composition tables) the running time of closure method is linear to the total number of relations of the identified tractable set. "
It is not fair to talk about linear time here, as, e.g., transitive closure might occur.

Page 11 says that OWL disallows transitivity and disjointness, hence SWRL must be used.
But page 12 says: "The required expressiveness of the proposed representation is within the limits of OWL 2 expressiveness."
In my opinion, there is a contradiction.

The evaluation part uses datasets of 10 to 100 individuals.
It is not realistic to have so few instances!
In addition, Table 3 shows that reasoning time with 100 individuals takes 11 seconds.
What about 1000, 10,000 or 100,000 individuals ?