On the interplay between validation and inference in SHACL - an investigation on the Time Ontology

Tracking #: 3820-5034

Authors: 
Livio Robaldo
Sotiris Batsakis

Responsible editor: 
Stefano Borgo

Submission type: 
Full Paper
Abstract: 
This paper presents a novel framework for validating the Time Ontology (https://www.w3.org/TR/owl-time). The Time Ontology is currently a W3C candidate recommendation draft and is widely recognized as the “de facto” standard for representing temporal data in the Semantic Web. However, its current axiomatization in OWL is unable to enforce several constraints on temporal data, which are instead captured by the SHACL formalization proposed in this paper. Besides providing a practical tool for processing temporal data in RDF within applications, this paper also offers insights into the combined use of SHACL shapes and SHACL-SPARQL rules to properly capture the interplay between validation and inference in knowledge graphs. Specifically, we demonstrate that SHACL shapes alone are insufficient for validating certain knowledge graphs that can be asserted using the current vocabulary of the Time Ontology. To ensure proper validation, we first compute the inferred knowledge graphs using SHACL-SPARQL rules, then validate the inferred graphs through SHACL shapes. We argue that these findings extend beyond the Time Ontology and apply more broadly, even in the context of more advanced reasoning rules. In light of this, we see our work as a call to action for the Semantic Web community to systematically investigate the representational requirements of different use cases in order to identify the minimal inference rules necessary for validating data in those use cases. The SHACL shapes and SHACL-SPARQL rules that define the proposed framework are freely available on the GitHub repository https://github.com/liviorobaldo/TimeOntologyInSHACL, along with Java programs and clear instructions for processing them.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Francesco Compagno submitted on 21/May/2025
Suggestion:
Major Revision
Review Comment:

The paper is mostly well written, although many sentences could be shortened considerably. The topic, enhancing temporal reasoning with Semantic-Web-related technologies, is very relevant for the Semantic Web Journal.

The main body of the paper details a SHACL-based rule system to infer new temporal facts from sets of temporal facts described using part of the vocabulary of the OWL Time Ontology, and to find inconsistencies in said sets. The rules the authors supply, taken in combination with the implementation details of the paper, are, to my knowledge, novel.

The provided repository is properly organized, and I was able to start using the authors' programs with only minor issues (which, in my experience, are always to be expected and do not strictly depend on the authors' work).

From my point of view, the paper has some issues, but one is especially critical, and I will discuss it first; the less critical ones follow afterwards.

Critical issue: incompleteness of the inference system and general unclarity in the formal aspects.
In Section 4.4 "Evaluation", the authors state "This subsection further investigates, through a case-based evaluation, whether the proposed shapes and rules are able to properly validate all knowledge graphs that can be built by combining these properties in all possible ways". Again, in Section 5.10 "Evaluation" they state "In order to assess whether the SHACL-SPARQL rules [...] adequately cover all knowledge graphs that may be constructed we again conducted a case-based evaluation". There are two other evaluation sections, similar in nature, but let us focus on the first two.

First, on a general level, I understand neither what a "proper" validation is nor what "all possible ways" means.
Regarding "proper" validation, it appears that the authors assume there is some "gold-standard" ontology of time whose vocabulary contains that of the OWL Time Ontology, such that a validation is proper if and only if it coincides with the validation obtained using this gold ontology. However, this gold ontology can be neither the OWL Time Ontology, because the inference system proposed by the authors is much stronger, nor Allen's interval algebra, whose vocabulary is different from (in particular, smaller than) that of the OWL Time Ontology. Is this gold ontology perhaps based on an interpretation of time intervals as intervals of the real number line (and if so, are the intervals closed, open, semi-open, or something else?), or is it based on some common-sense interpretation of time intervals (in which case one may run into difficulties when discussing instants, as discussed e.g. by Allen in his works)?

Regarding "all possible ways", it appears that the authors are attempting a proof of completeness, in the sense that they seem to be proving that, given any input ABox, if a fact can be deduced from the ABox using the gold ontology, then it is also deduced from the ABox using their SHACL-based inference system; in particular, all ABoxes inconsistent for the gold ontology are also inconsistent for the SHACL-based system. However, it is not clear whether this is what they are actually doing, and the proof (if it is intended as such) is difficult to follow: for instance, in Section 5.10, they start by describing, at a general level, how they addressed the issue ("we analyzed [...], examining how [...]"), then show two illustrative figures, and so on.
It is difficult to follow the line of reasoning, all the more so because one does not clearly know what is being proved.

In fact, if Sections 4.4 and 5.10 are indeed intended as proofs of completeness, they are apparently wrong.
Consider the following ABox counterexample (of course, it is only arguably a counterexample because, again, the goal of the proof is unclear):

:i1 time:intervalStarts :i2.
:i2 time:hasBeginning :b2.
:i1 time:hasEnd :b2.

This ABox modifies the picture in Figure 17: I removed the hasEnd object in the bottom-right corner and added a hasEnd assertion between the node in the top-left (called i1) and the node in the bottom-left (called b2).
I ran the compile.bat and run.bat files. (Actually, I had to move to a Linux machine, modify said files, and install additional jar dependencies (some 'logback' library), because on my Windows machine the command prompt was closing immediately, for some reason.)
No SHACL shape was violated. However, the ABox should probably be invalid: i1 is a proper interval, since it is the subject of intervalStarts; but it is also an instant, since it ends at b2 yet also begins at b2, because i2 (the node in the top-right corner) is started by i1 and begins at b2, and intervals connected by the intervalStarts relation share their beginning.

The reason why this inconsistency is not found is that the authors did not add the rule
x intervalStarts y, y hasBeginning z --> x hasBeginning z
(and also x intervalStarts y, x hasBeginning z --> y hasBeginning z)
Instead, they opted for the more convoluted rule 24, which says:
x intervalStarts te2, x hasBeginning b1, te2 hasBeginning b2 -> b1 hasBeginning b2, b2 hasBeginning b1, b1 hasEnd b2, b2 hasEnd b1
But this rule is only activated when there are (at least) two hasBeginning statements. In fact,

:i1 time:intervalStarts :i2.
:i2 time:hasBeginning :b2.
:i1 time:hasEnd :b2.
:i1 time:hasBeginning :b1.

is correctly identified as invalid, although b1 is not directly related with b2.
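To make the point concrete, here is a minimal pure-Python sketch (entirely my own construction, not the authors' SHACL machinery) of the simpler propagation rule applied to the first ABox; predicate names mirror the time: vocabulary:

```python
# The first ABox above, as a set of (subject, predicate, object) triples.
facts = {
    ("i1", "intervalStarts", "i2"),
    ("i2", "hasBeginning", "b2"),
    ("i1", "hasEnd", "b2"),
}

def saturate(facts):
    """Apply the missing rule until fixpoint:
    x intervalStarts y, y hasBeginning z -> x hasBeginning z
    (and symmetrically: x intervalStarts y, x hasBeginning z -> y hasBeginning z)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, p, y) in facts:
            if p != "intervalStarts":
                continue
            for (s, q, z) in facts:
                if q != "hasBeginning":
                    continue
                if s == y:
                    new.add((x, "hasBeginning", z))
                if s == x:
                    new.add((y, "hasBeginning", z))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

inferred = saturate(facts)

# i1 now both begins and ends at b2, i.e. it degenerates to an instant,
# while also being the subject of intervalStarts (a proper-interval relation).
inconsistent = (("i1", "hasBeginning", "b2") in inferred
                and ("i1", "hasEnd", "b2") in inferred)
print(inconsistent)  # prints: True
```

Once i1 both begins and ends at b2 while being the subject of intervalStarts, any shape requiring proper intervals to have distinct endpoints would fire.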

Now, one could just modify rule 24, or add other rules, and the ABox above would then be found invalid. However, this issue is probably a symptom of a deeper problem in the authors' approach.
In fact, I found other ABoxes that escape validation; for instance:

:i2 time:inside :ii2.
:i1 time:inside :ii1.
:i1 time:intervalAfter :i2.
:ii2 time:hasBeginning :ii1.

is reported as valid, and so is

:i2 time:inside :ii2.
:i1 time:inside :ii1.
:i2 time:after :i1.
:ii2 time:hasBeginning :ii1.

while this one is correctly identified as invalid (in passing, how is that possible given the valid ABox above? I thought that any 'x after y' would simply be replaced by 'y before x' during evaluation, so the result should be the same; is this a bug, or something else?):

:i2 time:inside :ii2.
:i1 time:inside :ii1.
:i1 time:before :i2.
:ii2 time:hasBeginning :ii1.

Given this, I assume it is likely that there are infinitely many non-isomorphic ABoxes that are not identified as invalid.
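For what it is worth, the normalisation pass I assumed can be sketched in a few lines (hypothetical: whether the authors' pipeline actually performs such a pass is precisely my question):

```python
# Canonical names for the inverse relations used in the ABoxes above
# (illustrative; the full time: vocabulary has more inverse pairs).
INVERSES = {"after": "before", "intervalAfter": "intervalBefore"}

def normalise(triples):
    """Rewrite every inverse relation to its canonical counterpart, so that
    'x after y' and 'y before x' yield identical graphs before validation."""
    out = set()
    for (s, p, o) in triples:
        if p in INVERSES:
            out.add((o, INVERSES[p], s))
        else:
            out.add((s, p, o))
    return out

abox_after = {("i2", "after", "i1")}
abox_before = {("i1", "before", "i2")}
print(normalise(abox_after) == normalise(abox_before))  # prints: True
```

If such a pass were applied before the rules, the 'after' and 'before' variants of the ABoxes above could not diverge in their validation outcome.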

The latter two evaluation sections appear to be clearer; however, I found, for instance, that the following ABox

:i1 time:intervalDuring :i2.
:i2 time:intervalDuring :i3.
:i1 time:after :i3.

is valid, so perhaps there is some problem there also.
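A minimal sketch of why this ABox should be flagged, assuming intervalDuring is transitive and incompatible with after (pure Python, my own reconstruction, not the authors' rules):

```python
# The ABox above as triples.
facts = {
    ("i1", "intervalDuring", "i2"),
    ("i2", "intervalDuring", "i3"),
    ("i1", "after", "i3"),
}

def close_during(facts):
    """Transitive closure of intervalDuring."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p, b) in list(facts):
            for (c, q, d) in list(facts):
                if (p == q == "intervalDuring" and b == c
                        and (a, "intervalDuring", d) not in facts):
                    facts.add((a, "intervalDuring", d))
                    changed = True
    return facts

closed = close_during(facts)

# 'x during y' and 'x after y' cannot hold together: a clash.
nodes = ("i1", "i2", "i3")
clash = any((x, "intervalDuring", y) in closed and (x, "after", y) in closed
            for x in nodes for y in nodes)
print(clash)  # prints: True (i1 during i3 contradicts i1 after i3)
```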

In summary, the first two evaluation sections are confusing and convoluted. Are they a proof of something? If so, of what (and are we sure it is correct)? Or are they just a discussion by the authors to showcase that their system makes sound inferences in numerous situations? If so, why are these inferences especially interesting in this context?

This ends my biggest criticism. Now a list of minor points starts (although some are still important).

Equality: if I understand correctly, the authors' solution for treating equality between instants is to state that each is reciprocally the beginning and end of the other. Why not discuss other possibilities?

Inference, validation, and consistency. The authors talk about validation. This is indeed the name associated with the classical SHACL task: there is some data, and this data must be validated against a schema. However, what the authors do here is quite different: they develop a whole inference system that deduces new facts and finds inconsistencies. At this point, does it still make sense to talk about validation rather than consistency checking?

No comparison with other rule systems: the authors de facto develop a rule-based inference system. However, they do not reference the literature on rule systems (e.g. logic programming, Prolog, etc.), except to say that SWRL is not expressive enough for some relevant tasks. Given that the paper is 50 pages long and only two are dedicated to the bibliography, I think it would be helpful to engage somewhat more with the relevant literature. In fact, the rule system as developed has many odd aspects. One is that the syntax is obnoxiously painful to read; why not opt for something similar to Prolog, or classical predicate logic? One could then translate everything into SPARQL CONSTRUCT queries if so desired.
Another is that there is a "focus" on some part of the rule body, determined by the SHACL targeting system; is it useful somehow, or could it be removed?
Another is the separation between rules and shapes. In my understanding, both are rules; the difference is that the shapes are exactly the rules that will raise an inconsistency, and they are also precisely either rules with no consequent (e.g. A & B -> false, or A & B & C -> false) or rules whose consequent is a statement about the order or equality of xsd literals (e.g. A & B & ... -> t1 = t2, where t1 and t2 are literals). The reason not to include the former type of shapes as rules would be to avoid dealing with negation, I assume. But the statements between literals in the second type of shapes are implemented using SPARQL FILTER statements. In my opinion this is very convoluted, and it looks like an exercise in pushing SHACL and SPARQL to the limit of things they are probably not especially good at.
In fact, about this latter point, the authors highlight that the (non-normative) W3C suggestion is to first do SHACL validation and then run inference, while the authors' suggestion is to do the opposite. I can see why the suggestion is to do SHACL validation first: you can use SHACL to validate data, e.g. when the data is put into an information system.

Why is SHACL used at all? Continuing from the last point, if one really wishes to do reasoning by triple materialization using SPARQL CONSTRUCT queries, one can dispense with SHACL completely: just replace any SHACL shape of the form "for each result r of the query Q, raise a warning W(r)" with a CONSTRUCT query adding, e.g., the triples "[] a :Inconsistency; rdfs:comment "a contradiction has been found, here is the likely cause: W(r)"" for each match r of the query, and stop the while loop whenever an :Inconsistency is found.
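A sketch of this SHACL-free alternative, with rules and checks both as plain functions over a set of triples (all names illustrative, not taken from the authors' code):

```python
def materialise(facts, rules, checks, max_rounds=1000):
    """Round-based materialisation: apply every rule and check, stop at
    fixpoint or as soon as an :Inconsistency marker triple is produced."""
    facts = set(facts)
    for _ in range(max_rounds):
        new = set()
        for r in rules:
            new |= r(facts)
        for c in checks:
            new |= c(facts)  # a check may add ("report", "a", "Inconsistency")
        if new <= facts:
            break  # fixpoint: nothing new to add
        facts |= new
        if ("report", "a", "Inconsistency") in facts:
            break  # contradiction found: stop the loop
    return facts

# Example instantiation with one propagation rule and one check:
rule = lambda f: {(x, "hasBeginning", z)
                  for (x, p, y) in f if p == "intervalStarts"
                  for (s, q, z) in f if q == "hasBeginning" and s == y}
check = lambda f: ({("report", "a", "Inconsistency")}
                   if any((x, "hasBeginning", z) in f and (x, "hasEnd", z) in f
                          and any(s == x and p == "intervalStarts"
                                  for (s, p, o) in f)
                          for (x, _, z) in f)
                   else set())

g = materialise({("i1", "intervalStarts", "i2"),
                 ("i2", "hasBeginning", "b2"),
                 ("i1", "hasEnd", "b2")}, [rule], [check])
print(("report", "a", "Inconsistency") in g)  # prints: True
```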

The abstract says "This paper presents a novel framework for validating the Time Ontology"; however, the paper presents a framework for doing inference with the Time Ontology; it does not validate the ontology (e.g. it does not say that the Time Ontology is good or bad, correct or incorrect).

"Individuals of Instant refer to exact moments in time while individuals of Interval refer to spans of infinite and contiguous instants between a start and an end instant." Maybe, maybe not; in fact, Allen has argued against exact moments in time. Also, "infinite" and "contiguous" is probably a contradiction (what would be an example of a pair of contiguous real numbers?).

In passing, rule 26 says:
sh:targetSubjectsOf time:intervalStarts;
CONSTRUCT{?b1 time:before ?i2}
WHERE{$this time:intervalStarts/(time:hasEnd|time:inside) ?i2.
$this time:hasBeginning ?b1}
but it is probably meant to say
sh:targetSubjectsOf time:intervalStarts;
CONSTRUCT{?b1 time:before ?i2}
WHERE{$this time:intervalStarts/time:hasBeginning ?i2.
$this (time:hasEnd|time:inside) ?b1}
otherwise it does not match Figure 17.

Review #2
Anonymous submitted on 12/Aug/2025
Suggestion:
Major Revision
Review Comment:

The paper considers the validation of knowledge graphs (KGs) with respect to the time ontology. The authors argue that validating whether a KG is correct with respect to the time ontology must take into account implicit facts, and cannot be suitably done using either OWL or plain SHACL. They propose some SHACL-SPARQL rules that add facts to the KG in a way that errors can be detected.

Unfortunately, the paper critically needs a formal framework. The authors never properly define the task they consider; in fact, they even lack a consistent way to talk about it. In the abstract, they talk about "validating the time ontology" and about "validating knowledge graphs over their vocabulary". They sometimes talk about consistency, sometimes about validation; they apply these terms to KGs, to the whole ontology, to specific terms in the ontology vocabulary, to intervals, ... but do not make precise what each such notion means. They do not define the KGs they consider, and they do not define what it means for such a KG to be consistent or valid with respect to a property in the time ontology or to the whole ontology.

This is particularly problematic when one considers how much effort has been put into having a formal framework for defining inference, consistency and validation in the semantic web. These terms have proper semantics: OWL inference is properly defined on the basis of formal logic; SHACL validation has been formally characterised. This machinery is readily available; why don't the authors use it? Instead of following the formalisation path, the authors provide a sequence of cases with vague explanations and examples, and some SHACL-SPARQL rules that claim to detect these errors. Without an adequate formalism and well-defined semantics, verifying soundness and completeness becomes an excruciating task, resorting to examples, convoluted natural language, and graphs with little or no semantics. This is most unfortunate, and it seriously diminishes the usefulness of the work.

The standardised formalism for doing inference with domain knowledge in KGs is OWL. The authors do discuss the possibility of using OWL, but it is dismissed as insufficient and "intrinsically inadequate". At different points of the paper, different arguments are given to justify this dismissal:
(1) The lack of constructors for comparing and processing temporal entities.
(2) The "purely theoretical" nature and lack of reasoner support for temporal OWL extensions.
(3) The lack of anonymous objects in OWL.

Argument (3) is simply incorrect. Every OWL profile other than OWL RL supports existential restrictions and the introduction of anonymous objects.

Of the other two, (1) is the most repeated argument, and (2) is mentioned somewhat in passing, but as it is closely related, we can consider them together.

Concerning (1), as the authors cite and acknowledge, there are extensions of OWL/DLs that can do temporal reasoning. No specific logic from the literature seems to fully capture what they do, but there are some very closely related formalisms that can definitely be extended and tuned for their purpose. In particular, interval description logics like the ones described by Artale et al in 2015 seem very relevant, possibly extended with some temporal concrete domain. If further expressiveness is needed, the authors could add constructors to the logic. Such a logic-based formalisation would allow for accurate claims and succinct proofs, and would indeed provide an elegant solution to the problem of validating KGs with the Time ontology.

Of course, one cannot count on efficient reasoners for such a logic, as they correctly claim in (2). This is a very good reason to use technologies like SHACL-SPARQL for *implementing* the validation: the formalised tests, once they have been shown to be sound and complete, can be encoded into SHACL-SPARQL. This would result in a much more elegant, general, and impactful contribution. In any case, the lack of efficiently implemented engines is no reason not to use the best tools for proving correctness, let alone to skip over the formalisation altogether.

The paper does make a potentially useful contribution by providing a SHACL-based version of the Time Ontology, although unfortunately, it leaves gaps concerning the way the two versions compare.

The authors claim a second main contribution: novel insights into the interplay between validation and inference in SHACL. I strongly believe that the authors have a point here, and that validation in the presence of implicit facts (implied by ontologies) is a _very_ important topic that deserves much more attention. In that light, this paper provides an interesting and motivating use case.
But unfortunately, there is no further technical contribution that could have a broader impact, as they lack an adequate definition of inference. They advocate for completing graphs with inferred facts before validation, but what does that mean? How many facts? Why would such a completion terminate? If there are several completions, can we pick an arbitrary one for validation, or do we need to test them all? And how computationally costly can that get? There are recent works in the literature considering validation taking into account ontological inference, and the problem is technically much more involved than the authors seem to believe [2].

Another aspect that the authors may want to consider: in the case of temporal data, the problem of prohibitively large or even infinite completions with inferred facts is even more critical. E.g., something that is true in an interval is also true in all of its (possibly even infinitely many, if time is continuous) subintervals. While this can be circumvented for many concrete cases, it is an important topic that requires care; one cannot simply talk about completing a graph with the implied facts. There are whole lines of research dating back to the 1980s that try to understand how to succinctly represent temporal facts in different settings, e.g., by coalescing intervals.
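For illustration, the coalescing technique mentioned above can be sketched in a few lines (my own sketch, assuming closed intervals over comparable endpoints):

```python
def coalesce(intervals):
    """Merge overlapping or adjacent (start, end) intervals, so a fact that
    holds over many small intervals is stored over few maximal ones."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

print(coalesce([(1, 3), (2, 5), (7, 8)]))  # prints: [(1, 5), (7, 8)]
```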

[1] https://aaai.org/papers/9406-tractable-interval-temporal-propositional-a...
[2] Shqiponja Ahmetaj, Magdalena Ortiz, Anouk Oudshoorn, Mantas Simkus: Reconciling SHACL and Ontologies: Semantics and Validation via Rewriting. ECAI 2023: 27-35

The authors use "cycling path" instead of "cyclic paths"; the former is not correct.
A marked PDF with further typos and comments is attached.

Finally, the figures, which aim to graphically represent KGs with inferred triples, are hard to read and sometimes ambiguous. The graphic notation, using several one-of arrows to different labels to represent a choice between those labels, is counterintuitive and does not convey the meaning of possible choices. The figures provide a good illustration of why it is better to use formal logic with well-defined syntax and semantics instead of informal representations.

Review #3
Anonymous submitted on 26/Aug/2025
Suggestion:
Major Revision
Review Comment:

The paper presents a framework for validating the W3C Time Ontology using SHACL-SPARQL rules. While SHACL has been used for validation in other domains, the authors argue that the application to temporal reasoning, especially in the presence of Allen's interval algebra, is a new challenge. Further, the authors also propose an inference-then-validation approach with SHACL-SPARQL rules, which is needed in certain cases. The paper also discusses how to handle metadata for temporal relations. The paper provides a Git repository with code that can be run over a sample dataset.

I personally think that setting new challenges for SHACL in terms of practical application is a good idea, since languages for graph constraints are yet to mature and are still being developed both theoretically (considering recent papers) and in practice (SHACL 2 is an ongoing effort by a W3C working group). Hence, the problem has an interesting theoretical side. The problem is also practical in nature, since the Time Ontology appears to be the current de facto standard for encoding temporal data. However, in my view the current contribution of the proposal is not in a state to properly answer the challenge.

The repository with the examples and code is provided.

First I provide general remarks, and then some more detailed comments (there are just too many to list them all):

# m1. Technical depth and proposed solution.

The paper shows how to encode implicit constraints that hold between timestamps in the Time Ontology, and then the properties of Allen's algebra that are expressed in the same ontology using binary predicates. It is not stated in the paper, but I find the whole design of such an ontology a bit unclear or poorly justified. If every interval necessarily has two time points, then we can derive any of the 13 possible relationships between intervals, so why would we store such information inside an ontology, and moreover, why would such information be incorrect? Perhaps the designers of the ontology did not mean it to be used for reasoning, but for some kind of annotation in cases of incomplete information. I comment on this issue a bit more later.
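To illustrate the remark: under the assumption of closed intervals with comparable endpoints (an interpretation the paper itself never pins down), the Allen relation between two intervals is fully determined by their endpoints:

```python
def allen(i, j):
    """Return the Allen relation of interval i = (s1, e1) to j = (s2, e2),
    assuming closed intervals with s < e and comparable endpoints."""
    (s1, e1), (s2, e2) = i, j
    if e1 < s2:
        return "before"
    if e2 < s1:
        return "after"
    if e1 == s2:
        return "meets"
    if e2 == s1:
        return "metBy"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 == s2:
        return "starts" if e1 < e2 else "startedBy"
    if e1 == e2:
        return "finishes" if s1 > s2 else "finishedBy"
    if s2 < s1 and e1 < e2:
        return "during"
    if s1 < s2 and e2 < e1:
        return "contains"
    return "overlaps" if s1 < s2 else "overlappedBy"

print(allen((1, 2), (3, 4)))  # prints: before
print(allen((1, 4), (2, 3)))  # prints: contains
print(allen((1, 3), (2, 4)))  # prints: overlaps
```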

Coming back to the main task: it takes two timestamps and either derives a new relationship or checks the existing ones.

First, while the paper claims SHACL in the title, there is not much about SHACL itself. The authors effectively use SPARQL that is just wrapped inside the SHACL target-shape mechanism. However, the more serious problem is that the authors claim the need for SHACL-SPARQL even for constraints that can be expressed with SHACL Core (without SPARQL). That already holds for the first few examples (I list my counterarguments in the detailed comments). Also, the application of inference rules seems unnecessary, at least for the first several that I managed to understand in detail. Perhaps it is convenient, but not necessary in theory. The authors seem to present the inference rules as needed because SHACL-SPARQL cannot sufficiently capture the reasoning logic of Allen's algebra.

Coming back to the formalization of Allen's relations, there is already work on formalizing the semantics of these rules in FO logic. In particular, the authors of the Time Ontology (https://www.w3.org/TR/owl-time/) provided FO-logic rules that should hold over the ontology, and I find these much easier to read. This work is not mentioned in the paper:

- "An Ontology of Time for the Semantic Web" by Jerry R. Hobbs and Feng Pan (the authors should mention or cite this work)

Overall, the translations proposed in this paper seem rather straightforward: most of the time, one or two joins over some path queries in SPARQL. I guess that makes the encoding practically relevant, but it is hard to claim any theoretical insights.

## Suggestions
Regardless, I think redoing the same approach (in case something was missed in the above work, or to provide an alternative) does not subtract from the need to solve the problem.
I think the problem should be addressed more fundamentally. In particular, one should take at least two steps:
- first, identify the exact constraints, expressed in some higher-level logical language (such as FOL)
- then, discuss and provide a concrete implementation in a suitable language (such as SHACL-SPARQL)

This is important for two reasons:
- understanding the challenges and the inexpressivity of, say, core SHACL or some other language, and hence the requirements that the language needs to satisfy. For instance, the authors provide descriptions in natural language and then argue that this cannot be done in pure SHACL, so it has to be done in SHACL-SPARQL.
- second, this would make the work much more re-usable. One could take those formal definitions and provide another way of implementing the solution. For instance, temporal SQL has many powerful features for dealing with time manipulation. The current SHACL-SPARQL description is very verbose and, I would argue, much harder to read for someone not trained in SPARQL.

# m2. Correctness of the results

I believe that the authors should be more precise in declaring what exactly the contributions are. Reading the abstract and introduction, I get the impression that the goal is to provide exact boundaries on what can be expressed in SHACL and what requires SHACL-SPARQL. However, the authors use only SHACL-SPARQL, even where I believe it is not needed; already the first example, I think, can be expressed with just SHACL (see below for details). In fact, there is basically nothing SHACL-related in their work: it is just pure SPARQL, which is Turing complete, so it is not surprising that one can do it. Hence, there is no real study on why pure SHACL capabilities are not enough for certain parts of Allen's interval algebra.

There are parts of the paper that talk about the need for inference rules (the inference-then-validation approach), which could be a requirement, but again it is not clear why. For that, one would need a theoretical study to understand the problem, showing in some way (via complexity or expressibility) that certain SHACL fragments are not enough and hence more expressive ones are needed. Nothing of this kind has been done in this work.

It is also worth mentioning that the Time Ontology is itself a bit odd, which explains some of the strange complexity of the solutions provided (though this is not the fault of the authors). I comment here in case I understood correctly: for example, Instant is a TemporalEntity, and so it has hasBeginning of type Instant, etc. This can create potentially unbounded chains of instants and hasBeginnings. It could easily be avoided if hasBeginning and hasEnd were attached to Interval instead of Instant. I believe the paper should comment on this regardless (it is just confusing).

## Suggestions:

* Once we have a formal definition from m1, we can determine (using the known formal limits of core SHACL from papers [1']-[3'], see below) what can be expressed in SHACL Core and what requires SHACL-SPARQL and SHACL inference rules. This can be done via a complexity or expressibility argument.
* An alternative argument could be experiments: perhaps the authors' code is not the most principled approach, but it could show faster times in practice.

# m3 Succinctness and Quality of Writing

Overall, the paper seems well written in terms of English and overall structure. However, the reading is not always smooth, and I sometimes struggled with cumbersome explanations in the later parts; it is not always clear what the authors are trying to explain. That being said, I personally find this paper unnecessarily long and cumbersome for the complexity it deals with (which is in fact very straightforward). I believe the paper can be significantly condensed.

- As an example, it is not clear what exactly is implemented in SHACL-SPARQL, and what the overall plan is. At first I had the impression that only the 13 basic Allen relations were being implemented and checked against times given in inXSDDateTime. But then much of the work is about relationships between relations on intervals, e.g. the relationship between hasBeginning and before/after, or between intervalOverlaps and before. How are those relationships selected? Are we complete for them, and what is the overall strategy? I cannot find these answers in the text. Maybe they are hidden somewhere, but again, there is just too much text, so the main information is hard to find.

## Suggestions

- Provide a clear overview on the technical sections and their subsections

- Then, I would suggest using an abstract syntax for the constraints (e.g., a standard one like Datalog or FOL, or an extension of the one proposed in [1']). All listed constraints are probably one- or two-line formulas. Reading a lot of cumbersome SPARQL with very simple logic yet long names hides the logic while overloading the reader.

- I would start each part with exactly what we are trying to express in simple words, then an example, and then abstract syntax.

- I don’t understand the evaluation subsections (like 4.4). What do you mean by redesigning the shapes? Were they not correct in the first place? Why not write the correct version immediately?

# m4 Experiments
As described above, I personally find the design of the Time Ontology a bit unclear.

In any case, since the authors already have the code and a simple dataset on their Git repository (it seems to be synthetic), I think it would be worth setting up some experiments on real data, showing which kinds of errors appear in practice. This could also explain how people use the Time Ontology. Please note that SHACL rules can also be seen as descriptive, not only prescriptive (that is why they have a somewhat readable syntax; otherwise we could just use SPARQL entirely).

## Suggestions:

Find a use case where the authors exemplify their reasoning, and better explain the Time Ontology and why it can be used in a wrong way.

------

# Other Comments

- p2 L5. I do not immediately see how temporal logics, which talk about relative time, compare to the absolute time that you have in the Time Ontology (or I am missing something there)
- p2 l41. I believe there are more foundational papers on SHACL (also more cited) that explain its syntax and semantics, while those listed are focused on particular applications. E.g.:
- [1']Semantics and validation of recursive SHACL J Corman, JL Reutter, O Savković
- [2']SHACL: A description logic in disguise B Bogaerts, M Jakubowski, Jan Van den Bussche
- [3']Stable Model Semantics for Recursive SHACL. Medina Andresel, Julien Corman, Magdalena Ortiz, Juan L. Reutter, Ognjen Savkovic, Mantas Simkus
- ...
- page 4 l 2 b- I am not sure

- p5 2.2: it is not clear what this section is about. Could you provide an example?

- p5 sec 3. As stated above, I personally find the design of the Time Ontology a bit odd. Please explain how this ontology is meant to be used and why users can create inconsistent data.

- p6 l40: "Therefore, the Time Ontology currently permits any assertion but those that do not comply with these two disjointedness statements." How is this checked? Is there an OWL layer, and if yes, are there more constraints there?

- p7 l6. "It is easy to re-implement the RDFS and OWL axioms in the Time Ontology as SHACL shapes or SHACL-SPARQL rules. However, not all of these axioms from the Time Ontology have been incorporated into the proposed approach." What OWL fragment is used here? First, OWL can be expressive in general, so that translation could be far from simple. Second, OWL is not meant to express constraints in general, so what is the exact purpose in this case?

- p7 l10-16: please rewrite this; why only those axioms, and how does this relate to the existing OWL axioms?

- p10. What exactly is NP? I guess vectors, but I am not sure what kind of problem statement this is.

- p11 l13: I believe this can be expressed using only SHACL (and does not need SHACL-SPARQL). I believe it is a property of inverse functionality.
- TARGET: range of inXDate
- CONDITION: \le_1 path inXDate^-1.inDate.Top (I used the syntax from [1'] for convenience)

- p12: I believe the constraints in (8) can also be expressed in SHACL Core; in particular, for the last one, one can use the lessThan operator from SHACL.

- p18 l24: it is not clear what a "case-based evaluation" is; overall, it is not clear why this section is needed (as commented above), and similarly for 6.1 on page 36.