Ontologies for Observations and Actuations in Buildings: A Survey

Tracking #: 2232-3445

Iker Esnaola-Gonzalez
Jesús Bermúdez
Izaskun Fernandez
Aitor Arnaiz

Responsible editor: 
Guest Editors Sensors Observations 2018

Submission type: 
Survey Article
The IoT allows connecting the physical world with virtual representations in various domains, and its rapid adoption has led to an exponential growth in the number of devices worldwide. The amount of data generated is expected to grow accordingly: it is estimated that in 2019 the IoT will generate more than 500 zettabytes of data. Without connecting all these data with their underlying semantics, users of the Web of Things may end up with information silos, hindering the exploitation of the data for better and smarter decisions. The main goal of this survey is to review ontologies involved in conceptualizations of observations and actuations, whose utility arises when some features of interest need to be observed or acted upon. Spaces and elements in the building environment have emerged as platforms where materializations of such observations and actuations promise to be very profitable. For each of the reviewed ontologies, the fundamentals are described, potential advantages and shortcomings are highlighted, and the use cases where the ontology has been applied are indicated. Additionally, use case examples are annotated with different ontologies in order to illustrate their capabilities and showcase the differences between the reviewed ontologies.
Minor Revision

Solicited Reviews:
Review #1
By Michel Böhms submitted on 21/Jun/2019
Minor Revision
Review Comment:

This manuscript was submitted as 'Survey Article' and should be reviewed along the following dimensions: (1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic. (2) How comprehensive and how balanced is the presentation and coverage. (3) Readability and clarity of the presentation. (4) Importance of the covered material to the broader Semantic Web community.

Really excellent paper, good overview, good analysis!
Small things:
- maybe explain a bit more how DUL regions also cover values for properties, i.e. that they also cover datatypes beyond the spatio-temporal ones
- idem for descriptions being social objects
- Fig. 1: add semantics for the lines, e.g. a dotted line for instantiation
- 3.1 intro: design bases: are they mutually disjoint? (I saw later that only one is selected per option in Table 1...)
- 3.2.3 is named "Units"... but is actually always about quantities and units
- p. 13: footnote format
- p. 18 RDF/OWL: consider also mentioning RDFS
- finally, I'd like to see some more content-based conclusions w.r.t. the options; they are a bit generic now... If we have to choose something now, what's best?
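On the first point, a minimal Turtle sketch of a DUL region carrying a plain datatype value (rather than a spatio-temporal extent) could look like the following; the `ex:` names are hypothetical, while `dul:hasRegion` and `dul:hasRegionDataValue` are the standard DUL properties:

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# A Region that is a plain quantity value, not a spatio-temporal extent
ex:temp21_5 a dul:Region ;
    dul:hasRegionDataValue "21.5"^^xsd:decimal .

# The quality (room temperature) is located in that region
ex:room101Temperature dul:hasRegion ex:temp21_5 .
```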

Review #2
By Eva Blomqvist submitted on 05/Jul/2019
Minor Revision
Review Comment:

This is a revised version of a previous submission. The previous submission was submitted as a full paper but was actually a combined survey and ontology paper, which has now been split into two papers, where the survey is the one under review here. I think that the authors have done a good job at splitting the paper, extending the survey part with examples etc. So overall I would suggest that the paper is accepted. However, there are a few minor things that could be improved for the final version.

First of all, I really appreciate the additional examples in the paper, which really help both to illustrate the differences between the ontologies and to tie the various parts of the survey together. I do think, however, that one final improvement could be made by better explaining how the examples of the various sections can be tied together to solve an overall modelling problem. As it is now, this is not entirely clear. Especially for the building topology example, it is not entirely clear why we need to describe rooms and floors in order to perform/record/analyse observations and actuations. Maybe a short description of the overall "running example" could be put into Section 1, in connection with the discussion of DUL relations, where all the example sentences are introduced together with a short explanation of how they work together to solve a particular problem.

Further, I appreciate the discussion on the survey scope and criteria listed in Section 2. However, the authors there say that not all criteria are mandatory and that few ontologies fulfil all of them, which again makes it a bit unclear how the selection was made. If some criteria were considered mandatory while others were optional, please state that. I assume, for example, that accessibility is a mandatory criterion, while evidence of use seems not to be mandatory, since several ontologies in the survey lack any such evidence. Moreover, the authors state that LOV was the main source for finding ontologies, but research paper databases were also used. Were the same keywords used for searching the databases? How was the selection made there? A very small note: I would suggest changing the tense in this section to past or present, rather than future, since it sounds a bit as if the survey had not been done yet.

In 3.1 an additional set of criteria for ontologies is then presented (ODP-based, modular, model-based, and reengineering-based) and used for the survey of observation and actuation ontologies. These criteria are then used as the value in one of the columns of Table 1. My objection here is that the criteria are not mutually exclusive; rather, several of them are related or even overlapping. For instance, being ODP-based usually also implies being modular (although not necessarily), and reengineering could be true together with any of the other criteria. For instance, SOSA/SSN is definitely both ODP-based AND modular. Hence, I would suggest that each ontology can have several values in this column, or that each criterion gets its own column with a "yes/no" entry, like in the other tables. An additional question is why these criteria are only used for the observation/actuation ontologies; as far as I understand, they could just as well be applied to all the ontologies of the survey.

Regarding sections 3.1.1 and 3.1.2, I think that the differences between the two SSN versions could be described a bit better. There is one very important difference in how they treat observations, which changed from being a dul:Situation into being considered an event (aligned to dul:Event) in the SOSA model, which should definitely be relevant to this paper's discussion (also related to the DUL discussion in section 1) but is not so clearly mentioned in these sections now. This could also be worth mentioning in the summary section, 3.1.10.
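To make this contrast concrete, an observation in the 2017 SOSA/SSN model is the act of observing itself (aligned to dul:Event in the DUL alignment module), whereas the 2011 ssn:Observation was a dul:Situation. A minimal Turtle sketch with hypothetical `ex:` names:

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

# SOSA/SSN (2017): the observation is the event of observing,
# not a situation interpreting it
ex:obs1 a sosa:Observation ;
    sosa:madeBySensor         ex:tempSensor1 ;
    sosa:hasFeatureOfInterest ex:room101 ;
    sosa:observedProperty     ex:room101Temperature ;
    sosa:hasSimpleResult      "21.5"^^xsd:decimal ;
    sosa:resultTime           "2019-07-05T10:00:00Z"^^xsd:dateTime .
```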

It is also the case that not all the criteria of each ontology are mentioned in the text; some things are only presented in the tables. It would be better if each criterion were mentioned both in the text AND in the tables. For instance, there is no mention of SSNx use in Section 3.1.1, but in Table 2 there is a Yes in that column. Also, the license is mostly mentioned only when it is not present.

Throughout the paper the terms ontology and vocabulary are used without further mention of their meaning. While I think that ontology does not require further discussion, the term "vocabulary" is not used in the same way by all communities. While the linked data community usually equates ontology with vocabulary, in this paper I get the impression that maybe vocabulary means something more like a code list, or am I misinterpreting it?

Further, in Section 3.1.2 it is not clear why this ontology suddenly gets a longer example (as does 3.1.5), which includes more than just covering the example sentence at the start of the section. Is there a certain point that needs to be proven by this? Should the longer example then be used for all the ontologies instead of the short one? Also, I am not sure that the Turtle notation in the example is correct: as far as I know you can only use lists in the predicate and object position, not for the subjects of a triple, as in the last two statements.
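On the Turtle point: the `,` and `;` shorthands only abbreviate objects and predicate-object pairs for a single subject; the grammar has no subject-list form. A small illustration with hypothetical `ex:` names:

```turtle
@prefix ex: <http://example.org/> .

# Valid: object list -- one subject and predicate, several objects
ex:sensor1 ex:observes ex:temperature , ex:humidity .

# Valid: predicate-object list -- one subject, several predicate-object pairs
ex:sensor1 a ex:Sensor ;
    ex:locatedIn ex:room101 .

# INVALID: there is no subject list in Turtle
# ex:sensor1 , ex:sensor2 ex:locatedIn ex:room101 .

# Each subject needs its own statement instead:
ex:sensor1 ex:locatedIn ex:room101 .
ex:sensor2 ex:locatedIn ex:room101 .
```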

Is the alternative modelling mentioned at the end of 3.1.2 to model the location of the sensor? Or something else?

Section 3.1.3: isn't it a bit strange to think about a sensor as a process? Or maybe the reader is supposed to consider the sensing process here, rather than the physical sensor? In that case the naming in the example seems a bit misleading.

Regarding section 3.2 I think that it should be made much more clear that this is a different kind of survey than in the other two parts. As I interpret it this is more an overview of the main standards/most used ontologies to represent contextual information, rather than really a survey (using the same selection criteria as listed in section 2). I think that this section is still very useful, and a good addition that completes the paper and makes it even more useful to the reader. However, it needs to be clear that there is a difference between the sections.

In Table 2 it is stated that SmartEnv is missing alignments, while I think that it is in some parts aligned to both SOSA/SSN and DUL.

Section 3.2.1: are there any (popular) alternatives to OWL Time?

The example in section 3.2.2 is missing a period at the end.

The discussion on BIM vs. IoT data in section 3.3 (including footnote 48) is still not so clear to me.

Why is the development tool of ifcOWL relevant (in section 3.3.1 footnote 54)? Development tools are not mentioned for any other ontology.

Example in section 3.3.8 is in the opposite order compared to the other ones, i.e. the room first, why?

When listing the building ontologies I recently came across one that might be worth adding to the list: the RealEstateCore (see [1], [2] and a forthcoming accepted resource paper at ISWC 2019). Since this is quite new I did not expect the authors to consider it; I mention it just as information, and, if possible, for inclusion in the final version of the paper.

The paragraph about vocabularies in the discussion section (section 4) is not clear to me.

There are still a few typos throughout the paper, but they can be fixed with some proofreading.

Finally, summing up and relating to the criteria of survey papers submitted for SWJ: (1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic. (2) How comprehensive and how balanced is the presentation and coverage. (3) Readability and clarity of the presentation. (4) Importance of the covered material to the broader Semantic Web community. I find that this paper does a very good job at (1), (2), and (3), as already mentioned above. Regarding (4) the material is not targeted at a broader Semantic Web community, but rather towards a specific domain, which I think is also fine, especially since the paper was submitted to the special issue of sensor observations, where it fits very well.

[1] https://www.realestatecore.io/download
[2] https://doc.realestatecore.io/2.3/core/index-en.html

Review #3
By Ana Roxin submitted on 18/Jul/2019
Major Revision
Review Comment:

This manuscript was submitted as 'Survey Article' and should be reviewed along the following dimensions: (1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic. (2) How comprehensive and how balanced is the presentation and coverage. (3) Readability and clarity of the presentation. (4) Importance of the covered material to the broader Semantic Web community.

(1) The Introduction needs to be rewritten in order to clearly identify the scope of the survey, the survey methodology and the issues addressed. As they stand, the abstract, the keywords and the Introduction fail to correctly specify the goal of the survey. Moreover, no clear and formal methodology has been defined for this survey. The authors only mention that they reviewed ontologies returned by querying the Linked Open Vocabularies repository with different keywords. The authors do not specify the exact criteria for reviewing. Also, it is intriguing that while issues related to the IoT are discussed in the introduction, keywords related to "sensors" were not included when searching for existing vocabularies. The initial version submitted had several Competency Questions (CQs) listed. As requested in the previous review, such CQs should be taken into account for the review of the vocabularies considered. The questions discussed in the introduction about the DUL ontology are never reused in the article. They could have been part of the framework used for review; as it is, this framework is never specified or discussed. Elements mentioned in the Discussion section should have been part of the Introduction.
(2) The vocabularies considered by the article at hand are relatively well known to the community, and the article brings little novelty about them. All the elements provided by the authors for the considered vocabularies (e.g. about the latest version available, the documentation, the alignments, etc.) are extracted from the LOV repository. Moreover, the LOV repository contains more meaningful elements: for example, on LOV one is able to see the exact relationships among vocabularies along with their nature (e.g. specialization, synonymy). The article at hand only provides "Yes/No" answers about those elements (e.g. "Alignments: Yes" without specifying the ontologies considered for alignment, the type of relationships used, etc.). Finally, the column entitled "Use" comes with almost no meaning attached, as one can hardly interpret what a "No" or "Yes" value means in this context. Vocabularies/ontologies related to observations and actuations are discussed independently from vocabularies/ontologies about the building domain, and the survey methodology seems different. Section 3.2 "Context Ontologies" contains no new information and does not cover all the contributions published on this matter (which usually make a distinction between the levels of context addressed, e.g. primary context and secondary context).
(3) The article is generally well written. Still, it is missing a formal methodology for the survey, explicit benchmarks, and a conclusion related to the issues listed in the introduction.
(4) The article lacks formalization and the vocabularies covered are not novel. Moreover, the LOV repository provides more meaningful insights about those specific vocabularies than the elements provided by the article at hand. The research question to be addressed by this article is not clearly defined.

Additional general remarks: This article is not a revision of the previously submitted one; it is a totally new paper. The comments and remarks made by the reviewer in the previous review are not fully addressed. The article is submitted as a survey, but fails to deliver useful insights.
Example: the SSN vocabulary. The authors point to the SSNO (SSN Ontology https://lov.linkeddata.es/dataset/lov/vocabs/ssno from the W3C Spatial Data on the Web WG) without even mentioning or relating it to the SSN Vocabulary from the W3C Semantic Sensor Network IG (https://lov.linkeddata.es/dataset/lov/vocabs/ssn), while mentioning the QU vocabulary conceived by this Incubator Group.
Most ontologies/vocabularies are relatively old (2016-2017) and are already well known, commented on and reused by the community of practice. The article at hand is also not, properly speaking, a survey: no formal framework for reviewing has been defined and no survey methodology has been specified. No specific links to the ontologies considered are provided, nor are alignments defined or specified. Expressivity levels are not discussed, nor is the relation between the considered vocabularies and the research question aimed at by the authors. As such, nothing in the article allows establishing the elements listed in the conclusion, e.g. the "significance of concise representations" (what a "concise" representation is still has to be defined) or "the importance of an explicit licence" (underlined by researchers since 2011, but not at all discussed in the context of this article).