A Semantic Framework to address the Evolution of Semantic Models for Condition Monitoring in Industry 4.0

Tracking #: 3028-4242

Authors: 
Franco Giustozzi
Julien Saunier
Cecilia Zanni-Merk

Responsible editor: 
Guest Editors SW for Industrial Engineering 2022

Submission type: 
Full Paper
Abstract: 
In Industry 4.0, factory assets and machines are equipped with sensors that collect data for effective condition monitoring. This is a difficult task since it requires the integration and processing of heterogeneous data from different sources, with different temporal resolutions and underlying meanings. Ontologies have emerged as a pertinent method to deal with data integration and to represent manufacturing knowledge in a machine-interpretable way through the construction of semantic models. Moreover, the monitoring of industrial processes depends on the dynamic context of their execution. Under these circumstances, the semantic model must evolve in order to represent which situation(s) a resource is in during the execution of its tasks to support decision making. This paper proposes a semantic framework to address the evolution of semantic models in Industry 4.0. To this end, firstly we propose a semantic model (the COInd4 ontology) for the manufacturing domain that represents the resources and processes that are part of a factory, with special emphasis on the context of these resources and processes. Relevant situations that combine sensor observations with domain knowledge are also represented in the model. Secondly, an approach that uses stream reasoning to detect these situations that lead to potential failures is introduced. This approach enriches data collected from sensors with contextual information using the proposed semantic model. The use of stream reasoning facilitates the integration of data from different data sources, with different temporal resolutions, as well as the processing of these data in real time. This allows high-level situations to be derived from lower-level context and sensor information. Detecting situations can trigger actions to adapt the process behavior, and in turn, this change in behavior can lead to the generation of new contexts leading to new situations.
These situations can have different levels of severity, and can be nested in different ways. Dealing with the rich relations among situations requires an efficient approach to organize them. Therefore, we propose a method to build a lattice, ordering those situations depending on the constraints they rely on. This lattice represents a road-map of all the situations, desirable or undesirable, that can be reached from a given one. This helps in decision support by allowing the identification of the actions that can be taken to correct the abnormality, thus avoiding the interruption of the manufacturing processes. Finally, an industrial application scenario for the proposed approach is described.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Melinda Hodkiewicz submitted on 06/Mar/2022
Suggestion:
Minor Revision
Review Comment:

1 Overview comments

This is an interesting proof of concept paper that tackles an existing problem for industry. It is a practical and pragmatic use of formal industrial ontology.
The framework presented captures the temporal aspects of sensor data and uses stream reasoning coupled with classical reasoning as a mechanism to achieve this. The unification of different semantic technologies to solve a real-world problem is impressive and we expect this to generate interest in the industrial ontology community.
We have a number of suggestions to improve the paper. We hope the authors find them useful.

2 Suggestions on the context and framing

The following suggestions in this section are based on one reviewer's 30+ years in condition monitoring and maintenance. These suggestions are intended to assist the authors to better place this work in the context of where industry is today and to help industry readers appreciate where this work will add value.

Process and condition monitoring of discrete and continuously operated machines have been in use for decades. Most modern machines have a large number of sensors installed by the Original Equipment Manufacturer (OEM), as well as additional sensors added by the manufacturing train operator. These sensors are wired into a DCS/PLC, and the data are available to operators and engineers through SCADA, OSI-Pi and other interfaces. Sophisticated time-based models for early fault detection are available, and quality control is sophisticated. The work proposed here is not moving into a vacuum where nothing exists. Industry 4.0 ideas, as exciting as they are for academics and industry consortia, are proving very slow in adoption, for a number of reasons as documented in [1]. With this in mind, we miss any comparison with current methods of condition and performance monitoring that identify fault states and provide suggestions for action, such as you are proposing.

The key goals you set for the semantic model developed in this paper are (page 3):

• The ability to integrate data from different sensors, including information about sensor values that indicate abnormal behaviour.
• Data is annotated with time of occurrence and validity.
• The streaming data must be able to be processed in a timely manner.
• Relationships between situations must be understood in order to understand the effect of proposed actions.

We suggest that some of the desired capability listed above is already present in modern manufacturing plants. It is already present, to varying degrees, in the plants of a number of international operators that one of the reviewers works with. However, it has taken decades to do this, and every machine was set up one by one. Rules and limits proliferate and are difficult to keep track of. Is it possible that the ontology you are proposing would enable new machines to be set up more quickly, through replication and standardisation? Would this also lead to improved maintenance of the systems? Currently the set-up on each machine is unique, requiring a lot of time and experience to a) set up, b) make changes, and c) troubleshoot. Would your ontology allow similar machines, but from different manufacturers and with different sensor naming systems, to share one set of common decision logic? If this were the case, then we suggest there would be considerable interest from existing plant operators in what you are proposing.

That said, there are a number of things you describe that are an advance on current practice. For example, making the information about the relationship(s) between processes, entities and locations in the factory dynamically updatable is very interesting.

3 Abstract

The abstract is easy to follow and motivates reading the paper.
Having said this, we ask you to consider the suggestions about the context made earlier. While it is fashionable to talk about Industry 4.0 as something completely new, manufacturing machines and process plants have been instrumented with sensors for performance and condition monitoring for decades. Many of these are hard-wired into DCS systems, and the data are available through PLCs and SCADA, as mentioned earlier. It would make your paper relevant to a much wider audience if you could frame it as being useful for the manufacturing industry right now, with their existing sensor and communications set-ups, regardless of whether they have adopted Industry 4.0 practices and protocols.

4 Introduction

Can we suggest the authors incorporate some of the suggestions made about context and framing in this section? In addition we have the following comments.

• In the introduction, the authors outline the core goals for the framework (data integration, time representation, etc). However, the authors should refer to these (perhaps by numbering them) throughout the paper. This will show the reader where the real gaps in the literature are (according to these goals) and how these goals are addressed in the proposed framework.
• For example, “efficiency” is one of the goals of the framework. Upon reading this, my
impression was that reasoning performance / efficiency would be a core feature that would be addressed in the ontology (and your use of stream reasoning appears to suggest this). If this is the intention, we would like to see further evaluation of the reasoning performance of the ontology. For instance, does reasoning complete in the order of milliseconds, seconds, minutes or days (as is the case for some classical reasoning problems) for a large volume of seed data. Upon reading the rest of the paper, it appears the authors were in fact referring to process efficiency of equipment diagnosis and recovery. Referring to your goals throughout the paper should help to eliminate confusion here.

5 Related work

• A quote from [1] might be useful: “Condition-monitoring data alone is often not sufficient for PHM; metadata about the asset, its operating environment, and the external covariates that influence its deterioration would also be required.” Your proposal to include these operating-context features in your ontology is a key contribution of your paper. Can we suggest the authors make clearer in this section that, to do prediction well in industry, the model needs to be sensitive to changes in an asset's environmental variables (as these impact the response in the prediction time window). Few assets operate in an unchanging operating, maintenance and environmental context. Therefore models based only on data history do not generalise well to unseen contexts.

• There are also a number of more recent survey papers than the 2010 one you mention [23] that might be worth including instead.

• Please can we suggest a more comprehensive literature review on the state of the art in ontology processing of streaming data. While some work is ongoing by Siemens and Bosch, for instance, I am sure they and others have moved forward and it would be good to reflect where they are compared to what you propose [2, 3, 4].

• Your approach to identifying ‘situations’ sounds similar to the work being done by ontologists in the autonomous driving world. As an example, Bosch have been developing requirements “Globally, if person [is detected] then in response brake [eventually initiated] within 5 time steps” which are then translated into temporal logic [5]. This is similar to the reasoning you are proposing ”if oil temp exceeds 40 deg for more than 20 seconds then ..”. Their work also mentioned the need to take the external world into consideration and demonstrates how they do so. I appreciate that the paper I am talking about is in the Industry track of ISWC so only two pages. Nevertheless it does indicate that there is work going on in this area that is relevant and I’d suggest other readers would like you to place your work in this context.
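The kind of temporal rule quoted above (“if oil temp exceeds 40 deg for more than 20 seconds then ...”) can be sketched, purely for illustration, as a stateful check over a timestamped stream. The threshold, duration and sample values below are our own invented examples, not taken from the paper under review:

```python
def make_threshold_monitor(threshold, min_duration):
    """Return a checker that flags when readings stay above
    `threshold` for at least `min_duration` seconds.
    Readings arrive as (timestamp_seconds, value) pairs."""
    exceed_since = None  # timestamp at which the current excursion started

    def check(timestamp, value):
        nonlocal exceed_since
        if value > threshold:
            if exceed_since is None:
                exceed_since = timestamp
            return (timestamp - exceed_since) >= min_duration
        exceed_since = None  # excursion ended; reset
        return False

    return check

# Hypothetical oil-temperature stream sampled every 5 s.
monitor = make_threshold_monitor(threshold=40.0, min_duration=20)
stream = [(0, 38.0), (5, 41.2), (10, 42.0), (15, 43.1), (20, 44.0), (25, 44.5)]
alerts = [t for t, v in stream if monitor(t, v)]
print(alerts)  # → [25]
```

A stream reasoner such as the one the authors use expresses the same logic declaratively over a sliding window rather than with hand-written state, which is precisely why a comparison with the temporal-logic work cited in [5] would be informative.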

• While a clear gap in the literature from an industrial setting appears to have been identified, this section is also missing a review of semantic/ontological works for IoT/sensor technologies outside of the industrial domain. For example, there is much work in IoT ontologies for ubiquitous and pervasive computing (used in domains such as aged care).

• Examples of works in this area include:

• ONDAR: an Ontology for Home Automation (Achraf Lyazidi and Salma Mouline)
• SOUPA: Standard Ontology for Ubiquitous and Pervasive Applications (Harry Chen, Filip Perich, Tim Finin, Anupam Joshi)
• A review can be found at: A study of existing Ontologies in the IoT-domain (Garvita Bajaj, Rachit Agarwal, Pushpendra Singh, Nikolaos Georgantas, Valerie Issarny).

• It would also be good to see more of a background on stream reasoning and where it has
been used in the past. This will give the readers a clearer view of the technologies used in the framework.

We have a suggestion to remove the following passage: “Other approaches use data mining and machine learning methods to extract diagnosis knowledge or mine rules from databases in a smart system. These approaches include the works of Martinez et. al. that uses an artificial neural network based expert system for detecting the status of several components in agroindustrial machines using a single vibration signal [28]; the works of Liu et. al. that use support vector machines and rule-based decision trees for fault diagnosis of water quality monitoring devices [29] and the works of Antomarioni et. al. which use association rule mining in maintenance [30] to minimize the probability of breakages in an oil refinery [31]”. The rest of this section stands on its own without it. The reasons are below.

We are somewhat baffled as to why you chose these papers from the 20,000+ research papers on prognostics and predictive maintenance published each year. Why these three examples? Have they been implemented in industry? Were they ever validated in industry? How does their selection (over thousands of others) help your case?

There is a serious issue with labelling (annotating) data for predictive models for use in industry. Most of the research is done in the lab on benchtop rigs or using one of very few public datasets. This issue of labelled data for predictive models is now being more widely talked about and recognised as a key constraint for industry. It is one of the reasons rule-based methods, such as the one you are proposing, are likely to continue to be used [6, 7]. I suggest you include a section on this instead.

Vibration [28], as an example here, is problematic for examples such as you are proposing later on, for a number of reasons. Sample rates for the raw signal are 2000 Hz, whereas the data you use in your example come from temperature, current, speed and power sensors, which all have sampling rates on the order of seconds or minutes. Engineers can use a vibration RMS value (aggregated to give similar one-second intervals), but that is not much used for predictive diagnostics, where we are interested in changes in spectral band energies and peak amplitudes at certain frequencies; hence vibration RMS is mainly used in protection. The vibration features that are useful have to be extracted using FFT and other spectral techniques, and while this is one of the promises of edge computing, deployment of these devices is still in its early stages.
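To illustrate the sampling-rate mismatch described above, here is a minimal sketch of the RMS aggregation engineers use to bring a 2000 Hz raw signal down to one value per second (the 2000 Hz rate follows the text; the signal itself is synthetic):

```python
import math

def rms_per_second(samples, sample_rate_hz):
    """Aggregate a raw vibration signal into one RMS value per second,
    so it aligns with slow-sampled channels like temperature or power."""
    out = []
    for start in range(0, len(samples), sample_rate_hz):
        window = samples[start:start + sample_rate_hz]
        out.append(math.sqrt(sum(x * x for x in window) / len(window)))
    return out

# Two seconds of a synthetic 2000 Hz signal: constant amplitude 1.0, then 3.0.
signal = [1.0] * 2000 + [3.0] * 2000
print(rms_per_second(signal, 2000))  # → [1.0, 3.0]
```

As the text notes, this aggregation discards exactly the spectral information (band energies, peak frequencies) that predictive diagnostics needs, which is why RMS is mainly used in protection.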

If you are interested in the work of Grabot [30] and other similar groups on semantic extraction of the information necessary for predictive model labels, such as failure modes and end-of-life events, you might consider searching for the work of Mike Brundage et al. at NIST on Technical Language Processing of Maintenance Work Orders, and also the work of Rajthapak at General Motors, who has been combining semantics and ontologies for warranty data for some years now.

6 A novel knowledge-based framework for condition monitoring

The system and technologies used are well described and we commend the authors for their re-use of existing ontologies. Some suggestions for improvements follow.

• There is a problem in the definition of Resource. Resource is defined as a Manufacturing Facility or Staff or Product. This is not a good definition because, while it may work for this use case, it cannot be integrated with other ontologies.

Integration of data from different sources is one of the core motivations for an ontology. We encourage the authors to think about how different ontologies use the term “resource”. To retain the reasoning capability enabled by this definition, we suggest a subclass of Resource (i.e. Condition Monitoring Resource) that includes this axiom.

• The same goes for the DL axiom for process. This is problematic because it says that a
process is a subclass of Logistic Operation or Human Operation or Manufacturing Process.
However, in Figure 2, it says that these process types are sub-classes of Process. We agree with this but the DL definition should be rewritten.

• The authors should be careful on the bottom of page 11. The authors say that ”the concept of a situation is formally defined by the following DL axiom” and present a subsumption axiom. A subsumption axiom is not a ”definition”, logically speaking.

• On page 13, the authors say “the modules involved in situation detection are Translation and Temporal Relations”. The “Translation” module has not been introduced until now. Do the authors mean “Location”?

• How does the reasoner perform when the asset is deliberately switched off e.g. for maintenance, or if one of the sensors goes offline? Do you continue to get the “Not detected” status on each situation?

• Is getting the “Not detected” status every x minutes for every situation going to take up a lot of memory in the operating system? Especially as you scale over 100s of situations and machines.

• We are interested in the mechanics of how the ontology evolves and would appreciate some more details. Our understanding is that new rules will need to be added manually if a situation is encountered whereby a cause can’t be found. Is this correct?
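To make the suggestions about the DL axioms above concrete, the rewrites we have in mind could look as follows. The class names are taken from Figure 2 as we read it, so treat the exact formulation as an assumption on our part rather than the authors' own axioms:

```latex
% Subclasses point up to Process, matching Figure 2:
LogisticOperation \sqcup HumanOperation \sqcup ManufacturingProcess \sqsubseteq Process

% Confine the disjunctive axiom to a condition-monitoring subclass of Resource:
CMResource \sqsubseteq Resource
CMResource \equiv ManufacturingFacility \sqcup Staff \sqcup Product
```

Note also that only the \equiv axiom is a definition in the logical sense; a \sqsubseteq axiom, like the one given for Situation on page 11, is a subsumption.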

7 Proof of concept

• It would be good for readers to understand more about how the Hierarchy of Situations in Figure 9 is used. It appears that this is not part of the ontology. Rather, it is a visual model that is inspected by an expert to make a decision. Is this correct? This should be made very clear to the reader. If so, are there plans in future work to create an ontological representation of situations based on lattice theory?

• One of the main challenges with machine-to-machine based work is that industrial rule-based diagnostic systems are (context, asset, and process)-dependent, in the sense that they rely on specific characteristics of individual pieces of equipment in that part of the circuit. This dependence poses significant challenges in rule authoring, reuse, and maintenance by engineers [8]. You seem to have this problem in your example as well. As you show in Tables 4 and 5, the constraints must be developed first by engineers who know what sensors are on the machine and what the sensor values should be. If you have hundreds of non-identical machines, or even identical machines with different ages/behaviours etc., then I don't see how what you are proposing is any more efficient than current approaches. For example, if we have to develop a model like the one shown in Figure 9 for each situation and for every machine, how is this any better than the functional models we currently have built into our SCADA systems? What is the value of putting all the data you have in Table 4 into an ontology when it is already captured in the SCADA? Why would anyone replace what is already working? Please can you address this.

• We are interested in how long it took for the authors to set up the rules for their case study.

• Would the rules, once developed for this machine, be transferable to other similar machines? If so, might this be a benefit of this approach?

8 Evaluation

The authors have presented a framework that solves a real-world problem. However, such a
thoroughly considered framework deserves a more thorough evaluation.

• The authors have presented a verification activity of running the reasoner and evaluating using OOPS guidelines. It would be good to see more validation activities performed as part of your evaluation.

• For example, it would be good to discuss how the ontology held up to the current use case, and what is missing? How does this ontology compare with other similar ontologies? What is different to existing ontologies at a concept level and why?

• Something that could be very interesting to readers is a performance evaluation as mentioned earlier. To see how quickly the reasoning is performed using the stream reasoner will be very interesting. You are dealing with high volume sensor data. Can current reasoners hold up to these requirements? We appreciate that this is a large piece of work. If this is a subject of future work, the authors should say so.

• We suspect that some further ontological decisions will be made on an examination of the ontology’s performance. For example, is the partOf relationship in the ontology transitive? If not, was this a performance-related decision? If so, how does this affect performance?
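For reference, the transitivity axiom we are asking about would be written, in role-composition syntax:

```latex
partOf \circ partOf \sqsubseteq partOf
```

In OWL 2 DL this makes partOf a non-simple role, which rules out its use in cardinality restrictions and typically increases classification cost; it is exactly the kind of performance trade-off we would like the authors to discuss.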

9 GitLab Repository

The authors are to be congratulated for making their work available on a GitLab site (https://gitlab.insa-rouen.fr/fgiustozzi/STEaMINg-SR_SitDet)

• Please give your repository an Open Source license (as the authors claim that the ontology is open source) so that readers can use it freely.

• The GitLab repository requires more comprehensive run instructions (perhaps in the IDE used by the authors). We have tried in both VSCode and Eclipse (both with Maven installed) and have not been able to run without configuration errors. It appears we are missing a csparql dependency in the repository, as we are getting the following error in Eclipse: “The POM for eu.larkc.csparql:csparql-core:jar:0.9.6 is missing.” We could be missing something on our end, but the publication will benefit from some clear instructions to help readers run your code.

Here are the references we have used in our review. We hope you find them useful.

[1] D. Kwon, M. R. Hodkiewicz, J. Fan, T. Shibutani, and M. G. Pecht, “IoT-based prognostics and systems health management for industrial applications,” IEEE Access, vol. 4, pp. 3659–3670, 2016.
[2] E. Kharlamov, T. Mailis, G. Mehdi, C. Neuenstadt, Ö. Özçep, M. Roshchin, N. Solomakhina, A. Soylu, C. Svingos, S. Brandt et al., “Semantic access to streaming and static data at Siemens,” Journal of Web Semantics, vol. 44, pp. 54–74, 2017.

[3] E. Kharlamov, G. Mehdi, O. Savković, G. Xiao, E. G. Kalaycı, and M. Roshchin, “Semantically enhanced rule-based diagnostics for industrial internet of things: The SDRL language and case study for Siemens trains and turbines,” Journal of Web Semantics, vol. 56, pp. 11–29, 2019.

[4] G. Mehdi, E. Kharlamov, O. Savković, G. Xiao, E. G. Kalaycı, S. Brandt, I. Horrocks, M. Roshchin, and T. Runkler, “SemDia: semantic rule-based equipment diagnostics tool,” in Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, pp. 2507–2510.

[5] A. P. Kaleeswaran, A. Nordmann, and A. ul Mehdi, “Towards integrating ontologies into verification for autonomous driving.” in ISWC Satellites, 2019, pp. 319–320.

[6] A. Theissler, J. Pérez-Velázquez, M. Kettelgerdes, and G. Elger, “Predictive maintenance enabled by machine learning: Use cases and challenges in the automotive industry,” Reliability Engineering & System Safety, vol. 215, p. 107864, 2021.

[7] D. Correa, A. Polpo, M. Small, S. Srikanth, K. Hollins, and M. Hodkiewicz, “Data-driven approach for labelling process plant event data,” International Journal of Prognostics and Health Management, vol. 13, no. 1, 2022.

[8] G. Mehdi, E. Kharlamov, O. Savković, G. Xiao, E. G. Kalaycı, S. Brandt, I. Horrocks, M. Roshchin, and T. Runkler, “Semantic rule-based equipment diagnostics,” in International Semantic Web Conference. Springer, 2017, pp. 314–333.

[9] A. Mehdi, E. Kharlamov, D. Stepanova, F. Loesch, and I. Grangel-González, “Towards semantic integration of Bosch manufacturing data,” in ISWC Satellites, 2019, pp. 303–304.

[10] M. Pech, J. Vrchota, and J. Bednář, “Predictive maintenance and intelligent sensors in smart factory,” Sensors, vol. 21, no. 4, p. 1470, 2021.

Reviewers: Melinda Hodkiewicz and Caitlin Woods, University of Western Australia.

Review #2
Anonymous submitted on 10/Apr/2022
Suggestion:
Major Revision
Review Comment:

This manuscript presents a complete framework for monitoring and decision making in the setting of a manufacturing plant. The authors introduce a new ontology, for which they reused previously published ontologies from the industry domain and extended the already existing concepts with situation, resource, and process modules. The framework is tested by applying it to a small use case, which is discussed in detail. However, there are multiple open questions and discussion points, which are detailed further below. The major issues are the concept of ontology evolution combined with the missing related work on Description Logic extended with a time component, and the usage of a lattice instead of OWL/DL. With the missing related work, the originality is hard to judge. Further, the significance of the results cannot be judged either, because a case study does not present results as such. This paper would greatly benefit from a comparison of the same case study applied with other state-of-the-art systems. The quality of writing is good, but some issues/errors have been identified and are detailed further below.

P1: Concept of Time and ontology evolution seems to be misguided:
The concept of ontology evolution is being used in an unusual way. Ontology evolution rarely relates to the changing situation of a resource. The application of monitoring and decision support seems much more appropriate for an automaton than for ontology evolution. Hence, the question arises: why not use an approach similar to that in [a], specifically Chapter 7?
The only part which could be considered evolution is the addition of newly discovered causes to the ontology, not, however, the changes in situations, which are found using the observations of the sensors with the application of stream reasoning.
It is important to add the missing related work and, most importantly, to explain why the presented approach was taken instead of using an automaton.
[a] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.302.513&rep=rep...
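To make the automaton suggestion concrete: situation changes driven by sensor events can be captured by a plain transition table rather than by rewriting the ontology. The states and events below are invented for illustration and do not come from the paper under review:

```python
# Hypothetical situation automaton: (current state, event) -> next state.
TRANSITIONS = {
    ("Normal", "temp_high"): "Overheating",
    ("Overheating", "temp_normal"): "Normal",
    ("Overheating", "pressure_high"): "Failure",
}

def run(events, start="Normal"):
    """Replay a stream of sensor events through the automaton;
    unknown (state, event) pairs leave the state unchanged."""
    state = start
    trace = [state]
    for e in events:
        state = TRANSITIONS.get((state, e), state)
        trace.append(state)
    return trace

print(run(["temp_high", "pressure_high"]))
# → ['Normal', 'Overheating', 'Failure']
```

Under this reading, the ontology would stay fixed while only the automaton's current state changes, which is exactly why we ask the authors to justify framing situation changes as ontology evolution.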

P2: Lattice usage:
Why use a lattice? Why not use OWL to model the constraint-situation behaviour accordingly, since the relation R is already in the ontology? At the same time, why is the relation T not in the ontology?
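As we understand it, the lattice orders situations by inclusion of the constraint sets they rely on. That ordering is expressible directly, as in this sketch (situation and constraint names are hypothetical, not taken from the paper):

```python
# Hypothetical situations mapped to the constraints they rely on.
situations = {
    "S1": frozenset({"c1"}),
    "S2": frozenset({"c1", "c2"}),
    "S3": frozenset({"c1", "c3"}),
    "S4": frozenset({"c1", "c2", "c3"}),
}

# The strict order relation: Sa is below Sb when Sa's constraint set
# is a proper subset of Sb's.
edges = sorted(
    (a, b)
    for a in situations
    for b in situations
    if situations[a] < situations[b]
)
print(edges)
# → [('S1', 'S2'), ('S1', 'S3'), ('S1', 'S4'), ('S2', 'S4'), ('S3', 'S4')]
```

If this subset order is indeed the relation R, each such pair could equally be stated as an axiom between situation classes in OWL, which is the heart of our question.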

P3: Terminology
Page 14, lines 29-33: There seems to be a contradiction. First, situations are said to be either desirable or undesirable, but later the authors talk about situations which lead to failures. Also, in Figure 1, only abnormal situations are ever passed on between modules. Are those all situations, or only those marked as abnormal? And does abnormal equal undesirable and leading to failures? Further, in Figure 2, I do not see a property which would mark a situation as desirable, abnormal or undesirable.

Other detailed comments:

- Page 3, line 49 and page 4, line 5: In section 4 —> In Section 4; Finally, section 5 —> Finally, Section 5. As you capitalize when referring to a specific figure, e.g. in Figure 1, this should also be done for sections.
- Page 4, line 10: interrupts —> interrupt
- Page 4, lines 20-25: Description of the section does not really match the actual structure that follows. Why is there a separate discussion?
- Page 4, line 37: what does ANN stand for?
- Page 6, line 13: area vs. domain
- Page 6, line 22-23: OntoSTEP, the acronym explanation does not match with the acronym
- Page 6, what is an upper ontology?
- Page 8, Table 1: add citations to each of the ontologies and either author name or some other indicator for [50]. Maybe also highlight the ones which are reused later.
- Page 8, line 17-18: “none of the existing ontologies provide a way of representing knowledge that evolve in time.”
- Page 9, line 20: “methodology proposed in [54]”. More information would be beneficial here. Does the methodology have a name, what is the gist of this methodology, and why was it chosen?
- Page 9, line 23: “definiton” —> “definition”
- Page 9-11: Why are the different modules presented in a list, where each item has multiple paragraphs? Because the list ends at the end of page 11, it is not clear that the text that follows no longer belongs to the list.
- Page 12, Table 2: top or bottom of page. This also applies to other tables and figures, especially in the later parts of the paper.
- Page 12, line 46: “It both uses the semantic model and modifies it.” This sentence is grammatically weird, because of the location of “both” —> “It uses the semantic model and modifies it as well.”
- Page 14, lines 14 and 21: “Decision Making” vs “Decision making”. Also, on page 22 lines 10 and 12.
- Page 14, line 47: “It is worth mentioning again two aspects about ...” This sentence is grammatically not correct.
- Page 17, lines 20-22: why is the definition of a lattice not given earlier?
- Page 19, line 47: missing comma: ... or users, however, it is ...
- Page 21, line 15: “add this fact to the ontology”, what is meant by fact? The cause of an abnormal situation?
- Page 21, line 23: “... in section 3.4” —> “... in Section 3.4”
- Page 21, line 24: “The figure 9 shows ...” —> “Figure 9 shows ...”
- Page 22, line 49: What are the “results” of the case study? In the previous section, there is only a discussion of the case study, not any particular results (or at least not presented as such).
- Page 22, line 51: “off-line”. Previously, it was spelled “offline”.

Review #3
Anonymous submitted on 16/May/2022
Suggestion:
Major Revision
Review Comment:

In this paper, the authors propose a condition monitoring system for Industry 4.0 which uses a real-time machine monitoring data model based on an ontology developed as part of the study, classical-reasoning-based causal analysis, and decision support based on a lattice structure. As claimed by the authors, the system is demonstrated with a case study. The uniqueness of the proposed system lies in its addressing of two concerns: 1) how to incorporate the context of execution in the monitoring of industrial processes and 2) how to capture the evolution of knowledge in the semantic model. Furthermore, it appears that the proposed algorithm, based on building a hierarchy of situations and constraints in a lattice structure to find the root cause of abnormalities and then help high-level decision making for future actions, is a crucial contribution of this study. Although there is recent research in the fields of digital twins and IoT that has proposed semantic models for real-time data-based monitoring, the constraint-driven analysis of situations and the causal analysis presented in this paper are novel. The details of the various components and the inner workings of the system are presented in detail, with a professional writing style and suitable diagrams. The state of the art presented in the paper is impressive and covers the relevant fields in sufficient detail. Although the case study is a proof of concept and doesn't use real industrial data, it is sufficient to illustrate the primary functions of the system and provides a robust foundation for further research in predictive maintenance.

However, the following criticisms are for two major claims that the authors make on their development.

Context: The paper frequently mentions context throughout but never explains what context is from the authors' point of view. In Table 1, identity, activity, time, and location appear as context-related concepts. It is not clear whether these concepts together define the context, or whether they are the concepts which are defined contextually. If it is the first case, then it begs the question of how these concepts can define context when they can themselves be described differently based on different contexts. E.g., the neck's location can be above the chest or below the head. If it is the second case, it is not known what the source of the context is, e.g., observer, modeler – I guess this is not the position the authors take. In any case, the authors should include a clear definition of the concept in this paper to make things clear.
“According to the context in which a manufacturing process is executed, the rules that manage the process can change. In order to represent these contexts and the fact that a machine is performing a process in these contexts, the semantic model needs to evolve in time to represent this changing knowledge.” – The authors distinguish between Activity and Process, but it is not explained how they differ. Also, the second statement is ambiguous, as representing the contexts and representing the temporal change are two different concerns and it is not explained how they are related.
“Our COInd4 ontology is a foundation of our approach. It represents the elements of the real factory, such as machines, processes, and sensors, with special emphasis on modelling the context of the operation of these elements. … The goal of this semantic model is to represent the concepts and relations in the industrial domain to enable context representation and reasoning.” – In the COInd4 ontology, the authors address context in three different aspects: parthood, location, and time. The authors describe the parthood of line, cell, machine, and workstation as the source of the context. However, not enough parthood definitions are given for these concepts, so it is difficult to understand which different contexts they can be described from. E.g., why should a machine be described differently based on the line it is part of? Which characteristics of the machine change when it is part of line A versus line B? For Process, the contextual information consists, per the authors, of the time, place, and related resources of its occurrence. But these are essential relata of a Process: every process must occur during some time, at some location, and have some participants (Resources, in this case). If this contextual information changes, i.e., the time, place, or resources change, then the process instance is trivially a different process instance. It is not clear how the same process instance can be described differently in different contexts (again, taking the authors' view).
“The Time module comprises all information related to the current time and allows time-stamping all the context information that may change over time.” – If time is itself contextual, as previously stated, is time information also time-stamped? Furthermore, it is not clear whether time information is embedded in the semantic data or used for timestamping the semantic data (as metadata). E.g., “p1 hasDuration t1” has a different connotation than “p1 hasDuration t1” timestamped with t1: the former is just a fact, whereas the latter may denote that the fact is valid (true) at time t1. This is crucial to clarify, as the case study presented in the paper does not use the detailed space and time information model presented in the ontology.
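To make this distinction concrete, the following is a minimal sketch in plain Python (not from the paper; the triple and property names p1/hasDuration/t1 mirror the example above, and the interval values are hypothetical) contrasting a timeless fact with the same fact carrying a validity interval as metadata.

```python
# Minimal sketch (not the paper's model): plain facts vs. valid-time-annotated facts.
# Triple names (p1, hasDuration, t1) follow the review's example; values are hypothetical.

# Case 1: time embedded in the data -- "p1 hasDuration t1" is simply a fact.
plain_facts = {("p1", "hasDuration", "t1")}

# Case 2: time as metadata -- the same triple carries a validity interval,
# so the fact is asserted to hold only during [start, end].
timestamped_facts = {
    (("p1", "hasDuration", "t1"), (10, 20)),  # valid from t=10 to t=20
}

def holds_at(facts, triple, t):
    """Return True if `triple` is asserted to hold at instant `t`."""
    for fact, (start, end) in facts:
        if fact == triple and start <= t <= end:
            return True
    return False

triple = ("p1", "hasDuration", "t1")
print(triple in plain_facts)                    # the plain fact is timeless: True
print(holds_at(timestamped_facts, triple, 15))  # inside the interval: True
print(holds_at(timestamped_facts, triple, 25))  # outside the interval: False
```

The two readings answer queries differently, which is why the paper needs to state which one is intended.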

Evolution of the semantic model: The study is headlined under the concept of “evolution of semantic model”, which takes the central role in the framework for dealing with the evolution of semantic models for condition monitoring in Industry 4.0. As commonly understood, the evolution of a semantic model is about managing change in the ontology (or ontologies). However, the authors declare: “The first possibility is related to a change in the structure of the model itself, i.e., addition/removal of concepts and relations. This type of change is studied in the field of ontology evolution [7–10] and is not addressed in this paper.” Instead, the authors state: “The second possibility is related to the addition of instances to concepts already defined and to the addition of relations over existing instances. One example can be the addition of a physical resource to the factory that would be reflected as a new instance of the corresponding resource concept in the semantic model.” – It is not clear why the authors consider this second possibility to be evolution of the semantic model. Could they refer to some earlier work supporting this usage? Critically, it is necessary to distinguish between the semantic model and the semantic data. The addition and deletion of instances in the database (here, the knowledge graph) need not change the model of the data. Moreover, this is done ubiquitously (via SPARQL INSERT/DELETE and SWRL) and is trivial in that sense. The term “semantic model” is used in a broader sense than ontology, but it is still a data model, and can be informally described as follows: “Semantic data model is a high-level semantics-based database description and structuring formalism (database model) for databases” (see also Johan ter Bekke (1992). Semantic Data Modeling. Prentice Hall.). The data structure/model is an abstraction of the data; therefore, it is not prudent to call the insertion or deletion of data points (that follow the same model) the evolution of the semantic model.
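This distinction between model and data can be illustrated with a minimal sketch (plain Python, hypothetical concept and instance names, not taken from COInd4): the schema (TBox) and the instance data (ABox) are held separately, and an instance insertion, the analogue of a SPARQL INSERT, grows only the data while leaving the model untouched.

```python
# Minimal sketch (hypothetical names): separating the semantic model (TBox)
# from the semantic data (ABox). Inserting an instance changes only the ABox.

# The model: concepts and relations, analogous to an ontology's structure.
tbox = {("Machine", "subClassOf", "Resource"),
        ("performs", "domain", "Resource")}
tbox_before = set(tbox)  # snapshot, to verify the model is untouched below

# The data: instance assertions under those concepts.
abox = {("machine_1", "type", "Machine")}

def insert_instance(abox, instance, concept):
    """Analogue of a SPARQL INSERT: add one instance assertion to the data."""
    return abox | {(instance, "type", concept)}

# "A physical resource added to the factory" appears as a new ABox instance.
abox2 = insert_instance(abox, "machine_2", "Machine")

print(len(abox2) - len(abox))  # the data grew by one assertion: 1
print(tbox == tbox_before)     # the model itself did not change: True
```

Whether such routine ABox growth deserves the label “evolution of the semantic model” is exactly the point in question.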
The framework and case studies included in the study follow common semantic data management principles and have merit on their own; they do not require embellishments such as the repeated reference to the ‘evolution of the semantic model’.

In addition to these two major criticisms of the study, some other problems can be observed in the methodology:
1) It is not clear why the authors embarked on developing a new ontology even after identifying several related ontologies in the SoTA. For the sake of interoperability, ontologies should be developed by reusing existing ontologies as much as possible. The COInd4 ontology does not reuse any concept from the ontologies mentioned in Table 1, but reuses some other ontologies that are not mentioned in the SoTA. Could the authors not extend the existing ontologies with the extra concepts required for this application? Furthermore, the authors are requested to mention the ROMAIN ontology in their SoTA. ROMAIN (https://content.iospress.com/articles/applied-ontology/ao190208) was developed specifically for condition monitoring in industrial maintenance.
2) The COInd4 ontology does not use any top-level ontology (BFO, DOLCE, EMMO, UFO, etc.) or any other upper-level ontology. This is a major problem, as the new ontology will create yet another knowledge silo, which runs counter to recent trends in ontology-based interoperability. Please see the third iteration of the FAIRsFAIR recommendations and the OntoCommons project.
3) “The ontological model is developed according to the methodology proposed in [54].” – Uschold and Gruninger's methodology proposes Competency Questions (CQs), which are customarily included in ontology-related publications. The authors need to present these CQs and validate the ontology against them. The OOPS! tool only tests structural and logical validity and best practices; it is not sufficient to test whether the model is indeed able to represent the data in a way that satisfies the users' requirements.
4) The covering axioms on Resource, ManufacturingFacility and Process do not entail that the covered entities are subclasses; however, they are shown as subclasses in Figure 2. Moreover, the covering axioms are problematic, as they prevent any other entities from being subclasses. E.g., one may want to add a Factory or a Vehicle as part of a ManufacturingFacility, which this ontology will not allow.
5) Are the subclasses disjoint? Without disjointness axioms, an instance can be both a Staff and a Product (for example), which is not intended.
6) “… the context of a Line and depicting the context of a Workstation that belongs to that Line” – The model does not specify what can be part of what. If Line, Cell, Workstation, and Machine are not constrained with respect to parthood, then one can state that a line is part of a machine, which may not be true.
7) It is not defined how the sibling classes are distinguished from each other. E.g., why is LogisticOperation different from HumanOperation? Is a forklift operated by an operator a LogisticOperation or a HumanOperation?
8) Why is sosa:Observation not a subclass of Process? As per the model, a Process is performed by some Resource, and an Observation is made by some Sensor which is hosted by some Resource. But why is a Sensor not also a Resource, given that it can also be classified as a machine? Otherwise, the authors need to provide a clear characterization of Machine, Resource, and Process, as these are extremely generic concepts.
9) What is the difference between the locatedIn and isInLocation properties? This is related to issue 8. What is the justification for making a separate taxonomy for Sensor and Resource?
10) Similarly, what is the difference between hasTime and hasDuration? This is related to issue 8. What is the justification for making a separate taxonomy for Observation and Process?
11) How geo:Feature and sosa:FeatureofInterest are different or related?
12) What is meant by “… the abstraction of physical spatial place”? Does it include matter, or is it a complete vacuum?
13) geo:Geometry is not defined, but if a Point is a type of geo:Geometry, then what is the implication of applying the RCC8 mereotopological relations to two Points? E.g., what does it mean to say that point A partially overlaps point B? (For point-like regions, only equality and disconnection are meaningful; partial overlap presupposes extended regions.) Moreover, these relations are not used in the example data in the following sections of the paper.
14) “The geo:Feature class represents 3D-objects or 2D-areas” – If every 3D object is a Feature, then everything under the Resource class should also be a Feature, as Line, Cell, Machine, and Staff are all 3D objects.
15) If a ValidTime hasBeginning and hasEnd some ValidInstant, then a ValidInstant (as a subclass) will also hasBeginning and hasEnd some ValidInstant. Surely the beginning and ending instants of an instant are the instant itself, but this leads to an unnecessarily recursive structure.
16) SWRLTO includes Proposition (with subclass ExtendedProposition), but the model uses temporal:Fact. Are they equivalent?
17) “Values of this property belong to the temporal:ValidTime class.” – Do the authors intend to mean Range?
18) “Although this algebra was not originally designed to relate an interval to an instant, nor was it designed to relate two instants to each other, SWRLTO includes specific operators to allow this.” – True, but the implications of doing so need to be explained, or a reference to related work should be provided.
19) “There are no specific spatio-temporal built-ins, however, the combination of spatial and temporal operators allows a spatio-temporal analysis.” – not comprehensible. Please provide more details.
20) “A Situation defines an abstract state of affairs associated with a particular scenario of interest” – Highly ambiguous, as it is not clear what the authors mean by state of affairs and scenario of interest. “a situation is a specific scenario…” – if situations are specific, how can they be abstract?
21) “a situation is a specific scenario in which the system state shows a particular combination of sensed values for its attributes (observations) that are not desirable and could lead to a failure” – This definition is too specific for a generic concept. Why is a scenario in which everything goes as expected (the normal case) not also a situation?
22) “a situation involves a combination of at least, one resource, eventually associated to its location, and at least one sensor measurement, fulfilling the constraints set by the expert. The whole can be linked through spatial, temporal and/or spatio-temporal relationships.” – The first part (“at least, one resource, eventually associated to its location, and at least one sensor measurement”) is captured by the axiom, but the latter clauses are not. This is also related to issue 21: the definition given by the axiom covers both situations in which a machine is acting normally and those in which it is acting abnormally.
23) Are Cause, Action and Constraint also subclasses of temporal:Fact?
24) The integration among modules is done by equivalence and subsumption, but no such mappings are provided. It is important to include these mappings, as they would alleviate many of the issues mentioned above.
25) The actual source found at https://gitlab.insa-rouen.fr/fgiustozzi/STEaMINg-Ontology has many differences from the model proposed in the paper.
26) Minor correction in sentence framing: “In the following we explain both and describe the interaction between them, as well as with the ontological model.” – please rephrase.
27) What is the difference between the functions of the Stream Generator and the Instance Creator? Since “The output streams are RDF streams”, the Stream Generator needs to create instances for the subjects or objects of the triples. Also, the Stream Generator “… stream[s] out semantically enriched data streams that are then consumed by the Stream Reasoner”, while the Instance Creator “… creates instances from the received data and inserts them in the ontology”. Does the Stream Reasoner read the data from the “ontology” or from the output of the Stream Generator?
28) “…populating the ontological model… ” and “…inserts them in the ontology…” may be replaced by “knowledge base” or “A-Box of the ontology” to distinguish the data from the model. This is related to the “Evolution of semantic model” issue.
29) “A set of queries, which combine background knowledge extracted from the ontology and some relevant parts of the streams…” What is ‘background knowledge’? Please provide examples.
30) “Monitoring observes a discrepancy between the expected and detected behaviour…” – How are the expected behaviours known? Do they pre-exist in the knowledge base?
31) “The purpose of the Cause Determination module is to identify the possible causes that generated a situation detected by the Temporal Relation module. For this, two components are used separately: a Stream Reasoner and a Reasoner.” – Please refer to the architecture diagram in Figure 1.
32) “Therefore, in order to determine the possible causes of a situation classical reasoning approaches can be used. ... This last option has some advantages over the previous one.” – This paragraph is quite ambiguous. Please provide an example to clarify, and mention how the latter option is adopted in the methodology.
33) “if we consider variables MC1_Temp and MC2_Temp…” – Are these variables instances of ObservedProperty? If not, please explain how they are modelled in the ontology.
34) “The R relation is therefore built from the relationships between the situations and the constraints that are extracted from expert knowledge.” – Please enlist what is acquired from the domain experts and what is from real-time data.
35) “These implications are inferred from the order relations (<, >, ⩽, ⩾), from observations, or extracted from expert knowledge” – Please enlist what is acquired from the domain experts and what comes from real-time data. Also, what are the meanings of (<, >, ⩽, ⩾)? What is the difference between Clause 1 > Clause 2 and Clause 1 ⩾ Clause 2?
36) “Considering situations s1 and s2 of the example presented above, the constraints related to both situations are ⌈{s1, s2}⌉ = {c1, c2, c4, c5, c6}.” – Why is c3 not included, given s1 R c3?
37) It may be asked whether the lattice formation algorithm needs to be run every time a decision is to be taken. If no new situation is defined, and situations can only be defined by external input (not automatically from real-time data), then it seems that the same lattice structure can continue to be used. In other words, this raises the question of whether the lattice structure is sensitive to real-time data at all. Furthermore, what is the benefit of using the lattice structure when SPARQL with property paths could be used on the fly to detect the common constraints between two situations, the next situations that may be reached, etc.?
38) What is the justification for using C-SPARQL when standard SPARQL could be used for the same purpose if timestamps are included in the RDF graph via the hasTime and situationTime relations? Does it make processing faster? Please provide a reference.

In summary, the framework proposed by the authors automates and facilitates condition monitoring and diagnosis and supports decision-making in the manufacturing domain. In this work, semantic modelling is used to represent real-time data and expert knowledge in a common data format. Apart from the above-mentioned issues, the authors may want to highlight in the discussion how the four requirements of semantic models (mentioned on page 3) are satisfied by the current application. A couple of ideas the authors may consider: 1) the scope of semantic interoperability needs to be global; e.g., background knowledge from one manufacturer may be shared with another, who may benefit in condition monitoring and decision making for the same types of processes and resources. From the perspective of cloud manufacturing, such condition monitoring and decision making may be offered as services for cross-domain, cross-sectoral condition monitoring and predictive maintenance. Furthermore, contextual differences in knowledge may play a significant role at such a level of interoperability; e.g., some organizations may not view the same situation as severely as others do. 2) The proposed system depends heavily on background knowledge collected from domain experts, e.g., situation-constraint, constraint-constraint, and situation-cause relations. Unless such knowledge can be managed automatically (synthesized from real-time data, learned from historic decisions), the system remains limited. Moreover, the evolution of the knowledge base through such automatic learning may require the ontology model to change dynamically too. In that way, the evolution of the semantic model may be applied in its full glory.