Modeling and Managing Temporal Obligations in GUCON Using SPARQL-star and RDF-star

Tracking #: 3939-5153

Authors: 
Ines Akaichi
Giorgos Flouris
Irini Fundulaki
Sabrina Kirrane

Responsible editor: 
Cogan Shimizu

Submission type: 
Full Paper
Abstract: 
In the digital age, data frequently crosses organizational and jurisdictional boundaries, making effective governance essential. Usage control policies have emerged as a key paradigm for regulating data usage, safeguarding privacy, protecting intellectual property, and ensuring compliance with regulations. A central mechanism for usage control is the handling of obligations, which arise as a side effect of using and sharing data. Effective monitoring of obligations requires capturing usage traces and accounting for temporal aspects such as start times and deadlines, as obligations may evolve over time into different states, such as fulfilled, violated, or expired. While several solutions have been proposed for obligation monitoring, they often lack formal semantics or provide limited support for reasoning over obligation states. To address these limitations, we extend GUCON, a policy framework grounded in the formal semantics of SPARQL graph patterns, to explicitly model the temporal aspects of an obligation. This extension enables the expression of temporal obligations and supports continuous monitoring of their evolving states based on usage traces stored in temporal knowledge graphs. We demonstrate how this extended model can be represented using RDF-star and SPARQL-star and propose an Obligation State Manager that monitors obligation states and assesses their compliance with respect to usage traces. Finally, we evaluate both the extended model and its prototype implementation.
Full PDF Version: 
Tags: 
Reviewed

Decision/Status: 
Minor Revision

Solicited Reviews:
Review #1
By Ruben Taelman submitted on 20/Oct/2025
Suggestion:
Minor Revision
Review Comment:

This article introduces an approach to express and assess temporal obligations in UCON.
This approach is an extension of the already published GUCON framework, which did not have temporal support yet.
The article is original, significant, and well written.
The article explains the approach formally, together with several use case scenarios that help the reader in understanding the concepts and the approach.
The approach makes use of RDF/SPARQL-star for encoding temporal aspects,
and is implemented using Apache Jena.
Using this implementation, an empirical evaluation is carried out.

## Strengths

S1. A significant and original approach for expressing and assessing temporal obligations, with formal grounding and an implementation.
S2. Well written paper with use case scenarios that make things clearer for the reader.
S3. Implementation and test cases are provided.

## Weaknesses

### W1. Issues with explanation of experiments and results discussion

The scaling of the benchmark in terms of triple and rule count is not fully clear to me.
It is mentioned that 56 unique rules can be generated based on random predicates.
However, only policy sizes of up to 21 are benchmarked; why not up to 56?
Furthermore, it is mentioned that the full KB consists of 2,400,000 triples, but the experiments only go up to 1,000,000 triples. Why not up to 2,400,000?
Also, it is unclear why increments of 108,000 triples are used, and what these increments contain exactly.

I recommend letting the X and Y axes in Figures 6a and 6b start at zero.
A non-zero origin can be a bit misleading.

I could not find the raw experimental results at https://github.com/Ines-Akaichi/Temporal-GUCON/tree/main
It would be useful to add them in the interest of transparency.

### W2. Reliance on RDF/SPARQL-star

While I agree that the concept of quoted triples from RDF-star is a good match for this work,
I worry a bit about the long-term impact of this paper due to its reliance on RDF-star.
I do not see this as an issue with this work, but rather with the timing of this work.
RDF/SPARQL-star are currently being standardized in the upcoming RDF and SPARQL 1.2 specifications.
While RDF and SPARQL 1.2 are not official W3C Recommendations yet (the working group is finalizing them), their concepts could already be used.
Since we're probably about to enter a transition phase soon (RDF-star -> RDF 1.2),
I would recommend adding a brief section/paragraph after section 5.1 to explain what the impact of RDF 1.2 will be,
and how this approach can be represented in RDF 1.2.
This will allow this work to be valid in the future when RDF 1.2 has become a W3C recommendation.
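
To illustrate the kind of mapping such a paragraph could describe, the sketch below contrasts the RDF-star (CG report) encoding with its RDF 1.2 working-draft counterpart. The gucon:startTime and gucon:deadline property names follow the paper's examples; the prefix IRIs and instance IRIs are hypothetical.

```turtle
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix gucon: <http://example.org/gucon#> .
@prefix ex:    <http://example.org/> .

# RDF-star (CG report): a quoted triple used directly as the subject.
<< ex:doctor1 ex:deleteData ex:record1 >>
    gucon:startTime "2025-01-01T00:00:00Z"^^xsd:dateTime ;
    gucon:deadline  "2025-02-01T00:00:00Z"^^xsd:dateTime .

# RDF 1.2 (working draft): triple terms <<( ... )>> may appear only in
# the object position, linked from a reifier via rdf:reifies. The
# annotations are attached to the reifier _:ob, not to the triple term.
_:ob rdf:reifies <<( ex:doctor1 ex:deleteData ex:record1 )>> ;
     gucon:startTime "2025-01-01T00:00:00Z"^^xsd:dateTime ;
     gucon:deadline  "2025-02-01T00:00:00Z"^^xsd:dateTime .
```
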

### W3. Missing related work on RDF Stream Processing and RDF Versioning

While the related work section gives a good overview of usage control and obligation monitoring,
it lacks an overview of how this work relates to the fields of RDF Stream Processing and RDF Versioning.
While the goals are different, there are similarities between how temporal annotations are expressed,
so it would be valuable to briefly touch upon that.

## Minor issues

- Page 4:
In the definition of a graph pattern, the SPARQL 1.0 spec is cited ([38]). I would recommend citing the latest Recommendation, SPARQL 1.1 (this requires no changes to the formalisations).
- Page 8, Definition 23 (Compliant Knowledge Base at time t):
A compliant KB is defined as the intersection of expired and violated obligations.
If I understand the definitions above correctly, all violated obligations form a subset of all expired obligations.
In that understanding, I would expect the definition of a compliant KB to be simplifiable to just checking if the set of violated obligations is empty.
- Page 8:
Related to the comment above, the following representation has a problem:
<< ?n ?a ?r >> gucon:startTime tpstart ; gucon:deadline tpdeadline .
If different temporal obligations exist for the same triple term, start and deadline might not be groupable anymore.
Is this a problem that can occur in practise?
If these triple terms are guaranteed to be unique, then I guess not.
Adding an intermediary reifier is one way of solving this problem.
(in RDF 1.2, you get this for free using the reifying syntax)
- Page 1: "SPAQRL"
- Page 3 (figure): "Hopsital"
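
A minimal sketch of the intermediary-reifier suggestion above, in RDF 1.2 working-draft Turtle (gucon:startTime and gucon:deadline follow the paper's examples; all other IRIs are hypothetical):

```turtle
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix gucon: <http://example.org/gucon#> .
@prefix ex:    <http://example.org/> .

# Two distinct obligations over the SAME triple term: each gets its own
# reifier, so start times and deadlines can no longer be mixed up.
_:ob1 rdf:reifies <<( ex:doctor1 ex:deleteData ex:record1 )>> ;
      gucon:startTime "2025-01-01T00:00:00Z"^^xsd:dateTime ;
      gucon:deadline  "2025-02-01T00:00:00Z"^^xsd:dateTime .

_:ob2 rdf:reifies <<( ex:doctor1 ex:deleteData ex:record1 )>> ;
      gucon:startTime "2025-03-01T00:00:00Z"^^xsd:dateTime ;
      gucon:deadline  "2025-04-01T00:00:00Z"^^xsd:dateTime .
```
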

Review #2
By Sebastián Ferrada submitted on 19/Jan/2026
Suggestion:
Major Revision
Review Comment:

This is an extension paper from a RuleML+RR paper published in 2023 by the same authors. The extension involves leveraging RDF and SPARQL 1.2 to write and validate data governance policies. The topic is relevant, as there is a need for systems to aid organizations in keeping track of their obligations in an effective and efficient manner.

Even if the paper is relevant and the extension from the original paper is considerable, I have three main issues with the manuscript, which are as follows:

1. The introduced definitions are not sufficiently readable or easy to understand. Even if the paper is intended for policy experts, database professionals implementing systems using GUCON should be able to follow it. The authors can check my detailed comments for concrete improvements, but the most important weak points I see in this regard are as follows:
- There is an underutilized running example. The authors spend a lot of time on their health-related example, which is then largely unused. Most definitions in Section 3.2 should be accompanied by a concrete example from the scenarios described.
- Most definitions are either too informal or overloaded with symbols. For instance, a knowledge base is defined as an RDF graph describing the actual set of knowledge. I don't see this as relevant to distinguish the concept of KBs from RDF graphs if they are the same. Then, the difference between an Action Pattern and an Obligation Pattern seems to be the appendage of the bold O prefix. It is not clear what this "O" means or why these concepts must be understood as different. The concept of condition satisfiability should come right after Definition 6, where conditions first appear (and are rather informally introduced there, as the definition is for obligation rules).

2. The authors use a rather outdated version of RDF/SPARQL-star. Even a year ago, the working draft had stopped considering subject nesting of triples. This is tricky, given that RDF 1.2 is still being worked on, but the risks of relying on non-standard notation are that these things are not settled and may need constant updates. In my detailed comments, I outline the updates that would make the paper acceptable.

3. The evaluation using Thörn's criteria is rather qualitative, even if it could be more mathematical/quantitative, in particular with regard to formalness, correctness, and usability. Formalness is said to be achieved because of the expressive power of SPARQL; this is argued rather informally in the paper. As for correctness, the authors link some test cases, but there are no quality metrics associated with them. The authors don't explain what the aim of the tests is (accuracy, coverage, robustness, succinctness, etc.). Finally, regarding usability, the authors simply argue that the model should be easy to use, with no validation from actual users. In data governance, there are several roles for people interacting with data; even a small poll of people in those roles could provide some (more objective) evidence for the authors' claims. The authors mention that Turtle should be intuitive and user-friendly. Is there evidence that people managing these data policies are proficient in Turtle/RDF, let alone RDF 1.2?

Detailed comments:

- The abstract ends with "we evaluate the extended model and its prototype implementation", but there is no mention of what this evaluation found.
- Page 1, line 47: operationalization [of] these
- from page 2 on, there is a typo in the running authors. It says "N. Akaichi et al.", I guess it should be "I. Akaichi et al."
- Page 4, line 1 (and in some other places), it says "graph patterns expressions"; it should be "graph pattern expressions."
- When the I and L sets are introduced, they must be countably infinite.
- To comply with RDF 1.2, blank nodes should be considered in the paper, and thus introduced here too.
- Definition 3 is too informal, and there is no actual difference between an RDF graph and a KB
- Are sets N, C, R pairwise disjoint? Would it matter? Are they countably infinite, too?
- The difference between action and obligation patterns (other than the bold O) escapes me.
- There is an example below Definition 7; however, I argue that it should follow the format of the definition (i.e., be of the form O(n,c,r) and say what n, c, and r correspond to in the situation).
- Definition 8 appears way after condition satisfiability is informally defined, right after Definition 6.
- Page 5, lines 23-24, the authors say that the subset of variables condition is there to ensure that all variables in the obligation pattern are "already bound" by the condition. At this point, there has been no talk about the evaluation of conditions.
- Page 6, line 30. store -> stores. Also, in the example of Dr. Smith being a doctor, I would argue that this fact could, for instance, be revoked and thus change over time.
- Page 7, line 15. I am not sure if \oplus is the standard notation for an XOR.
- Definition 15 says that O\in D. D has not been introduced afaik.
- Definitions 14--23 also need concrete examples based on the scenarios.
- Definition 18 should remind the reader what "the contents" of OR are (i.e., cond, t_start, t_deadline, etc.).
- Figure 2 should be divided into three subfigures, and each of them should be explained in the caption. As it is, the figure is too small, and there is not much text explaining what is going on there.
- Definitions 24 and 25 are outdated. For over a year, RDF 1.2 has not allowed nesting of triples in the subject position, only in the object (see https://www.w3.org/TR/2025/WD-rdf12-concepts-20250109/). These definitions, and the examples in Listings 1--4, should be updated to present nesting only in the object, and ideally use the rdf:reifies predicate with blank nodes.
- Page 9, lines 37 and 45. These examples should be updated to the most recent RDF 1.2 draft.
- I am not sure that Algorithms 1 and 2 are really needed. Algorithm 1 simply evaluates all rules (which is what I assume the get_mappings function does) and labels them according to their status with respect to their timelines. Algorithm 1, in fact, opens the door to questioning the complexity of this process: the larger t, the larger the snapshot and thus the slower the evaluation. This is not discussed by the authors.
- Page 10, lines 39 and 45. These examples should be updated to the most recent RDF 1.2 draft.
- Page 11, line 19. Should the mentioned KB be a temporal KB?
- I think that Section 5.3 should appear before 5.2, as in 5.2 the authors talk about policies applied to KBs, but apart from the definitions, the reader has never seen a policy in RDF 1.2 (these appear in the listings of Section 5.3).
- Listings 1 and 2 are crowded. Just use more vertical space. Also, all these listings should be updated to the latest draft of RDF 1.2.
- I think more concrete evidence should be provided for the completeness, correctness, and usability dimensions of the evaluation, as mentioned in my initial comments.
- I think that the empirical evaluation can be kept as is. I don't believe that updating the system would yield times that are too different. However, for the sake of the paper's impact, I recommend that the authors update the implementation as much as possible to the latest RDF/SPARQL 1.2 draft.
- Table 2 is quite hard to understand. I assume the second column contains the number of rules in the policy, and the third column contains the number of triples in the KB?
- As I mentioned before, since the snapshot gets all triples annotated with a time lower than or equal to t, the snapshots get larger as time passes. Yet, the evaluation does not consider increasing values of t. This limitation should be explicitly addressed and/or evaluated through dedicated experiments.
- The x-axis labels of the charts in Figure 6 have the same issues as Table 2.
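
To make the snapshot-growth concern raised above concrete, a snapshot at time t can be thought of as a filter over all time-annotated usage-trace triples. The sketch below uses SPARQL-star syntax; the gucon:time property and the flat annotation layout are assumptions for illustration, not the paper's exact encoding.

```sparql
PREFIX gucon: <http://example.org/gucon#>
PREFIX xsd:   <http://www.w3.org/2001/XMLSchema#>

# Snapshot at time t: every usage-trace triple whose timestamp is <= t.
# The result set is monotonically non-decreasing in t, so evaluation
# cost grows with the length of the observed history.
SELECT ?s ?p ?o WHERE {
  << ?s ?p ?o >> gucon:time ?ts .
  FILTER(?ts <= "2025-06-01T00:00:00Z"^^xsd:dateTime)
}
```
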

Review #3
By Julián Arenas-Guerrero submitted on 23/Jan/2026
Suggestion:
Minor Revision
Review Comment:

This paper presents an extension of the GUCON framework by incorporating temporal properties into obligation modeling. The work addresses the lack of formal semantics for temporal obligation monitoring. The authors demonstrate how RDF-star and SPARQL-star can be leveraged to represent temporal obligations and propose an obligation state manager prototype that tracks obligation states (active, fulfilled, violated, expired, not satisfied) over time. The paper is well written and all artifacts are openly available on GitHub.

Although my knowledge of usage control policies is limited, I hope the comments below help improve the paper.

# EVALUATION

## THORN'S CRITERIA
Regarding the usability criterion, the authors mention that "the syntax of GUCON rules are deliberately concise". Using SPARQL-star results in a more concise syntax compared to other forms of reification. However, I do not think that the SPARQL-star syntax is concise in an absolute sense. I would explicitly clarify that conciseness here refers to improvements over other reification approaches rather than to conciseness in general. Similarly, the authors describe Turtle as a "human-readable serialization". Again, the fact that Turtle is more readable than other formats such as N-Triples does not make it genuinely human-readable for non-expert users. In fact, the authors themselves acknowledge the need for policy editors to support the definition of policies, implicitly recognizing that SPARQL and Turtle are not very intuitive for non-expert users.

## PERFORMANCE & SCALABILITY
It is not clear to me why only high-selectivity rules were used in the experiments. Also, I wonder whether it would be possible to compare against ODRL implementations. The experiments show linear scaling, which is good. The use of a larger KB (the current one comprises 2.4M triples) could have evidenced scaling to larger real-world use cases.

# RELATED WORK
The use of comparison tables between languages could help readers understand the differences between languages, e.g., comparing formal semantics, temporal reasoning, state of affairs, obligation states, standardization, implementations, adoption, ...

# RDF-STAR / RDF 1.2
As RDF 1.2 is not a standard yet, I do not consider the use of RDF-star questionable. However, given the advanced state of the RDF 1.2 specification, I think the paper could also comment on whether the proposed approach could be implemented in RDF 1.2 once the standard is finalized. This would also be good for RDF 1.2, as it shows a use case for the new RDF standard.

# REFERENCES
SPARQL-star and RDF-star are not referenced: Hartig (AMW 2017) should be cited at the beginning of Section 5.1. Likewise, RML is currently referenced with a URL, but a bibliographic citation (Iglesias-Molina et al., ISWC 2023) would be more appropriate. Also, other languages are referenced in the paper with URL footnotes; if bibliographic references are available, they should be used instead.

# MINOR TYPOS AND FIXES
- Abstract: may evolve over times -> time
- Abstract: This extension enables the expressing of -> expression of
- Abstract: and assess their compliance -> assessES
- Page 1: for operationalization these conditions -> operationalizing
- Definition 5: Let O denotes -> denote
- Page 5: A obligation -> An obligation
- Page 6: Our KB store -> stores
- Page 9: Rule conditions may also contain SPARQL-star triple pattern -> patternS
- Page 15 (Table 1): Sceanrio -> Scenario
- Page 18: W3C group -> W3C Community Group (to differentiate from working groups)
- The captions of several figures, tables, and listings are missing the final dot