Temporal Relevance for Representing Learning over Temporal Knowledge Graphs

Tracking #: 3557-4771

Authors: 
Bowen Song
Kossi Amouzouvi
Chengjin Xu
Maocai Wang
Jens Lehmann
Sahar Vahdati

Responsible editor: 
Armin Haller

Submission type: 
Full Paper
Abstract: 
Representation learning for link prediction is one of the leading approaches to dealing with the incompleteness problem of real-world knowledge graphs. Such methods are often called knowledge graph embedding models, which represent the entities and relationships of knowledge graphs in continuous vector spaces. In this way, semantic relationships and patterns can be captured in the form of compact vectors. In temporal knowledge graphs, the connection between temporal and relational information is crucial for representing facts accurately. Relations provide the semantic context for facts, while timestamps indicate their temporal validity. However, existing embedding models often overlook the intricate interplay between the relational and temporal parts of facts in temporal knowledge graphs. These models tend to focus on effectively representing individual components, consequently capturing only a fraction of the overall knowledge. Additionally, some relations in temporal facts are time-insensitive, while others are highly time-dependent. This complexity reduces the ability of temporal knowledge graph embedding models to accurately capture these characteristics. To address these challenges, we propose a novel embedding model based on temporal relevance. The model operates in a complex space with real and imaginary parts to effectively embed temporal knowledge graphs. Specifically, the real part of the embeddings captures the semantic characteristics of facts by considering the importance of the temporal and relational information associated with each fact. Simultaneously, the imaginary part of the embeddings learns the connections between diverse elements, without predefined weights. Our approach is evaluated through extensive experiments on the link prediction task, where it substantially outperforms state-of-the-art models. The proposed model also demonstrates remarkable effectiveness in capturing the complexities of temporal knowledge graphs.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Luyi Bai submitted on 28/Dec/2023
Suggestion:
Reject
Review Comment:

Temporal knowledge graph embedding is a crucial topic in the field of knowledge graphs. The paper studies an embedding model based on temporal relevance, and some experiments are performed. There are some problems:
1. There are many symbols in the paper, and some of them are not explained. It would be good to list them in a table.
2. The paper lacks a framework overview of the proposed model. In addition, is there any difference between the equations and existing ones? Please add this.
3. How does the model perform on large-scale datasets such as YAGO?
4. The most critical problem is the lack of recent references and baselines. In fact, there have been many efforts on TKGs published in 2023, and the paper does not compare against their performance.
Overall, the paper does not highlight its contributions. The proposed model only employs existing methods.

Review #2
Anonymous submitted on 30/Dec/2023
Suggestion:
Major Revision
Review Comment:

The paper proposes TRKGE, a model that captures temporal relevance for temporal knowledge graph embedding. TRKGE adopts a complex embedding space to separately capture semantic characteristics and complex relationships within the knowledge graph. The paper is well written and easy to follow. Some suggestions for further improvement are listed as follows:
1) Improve the clarity of the Abstract and Introduction sections. Explicitly explain what semantic information and which complex relationships are expected to be learned by the proposed complex embedding.
2) Enrich the Related Work section with the most recent works on TKGE, such as:
Temporal Knowledge Graph Completion: A Survey, IJCAI 2023
Temporal knowledge graph embedding via sparse transfer matrix, Information Sciences, 2023

Review #3
Anonymous submitted on 21/Jan/2024
Suggestion:
Major Revision
Review Comment:

The paper under review introduces a novel model named TRKGE, which integrates temporal relevance into the framework of temporal knowledge graph completion. Leveraging tensor decomposition, TRKGE distinguishes itself by its sensitivity to the temporal attributes of facts, a crucial aspect often overlooked in similar models. The authors effectively illustrate the difference in temporal attributes among various relations, such as the transient nature of "visit" versus the permanence of "daughter of." By constructing the model in the complex space and employing rotation matrices, the paper melds temporal relevance with the embeddings of entities, relations, and timestamps, further enhanced by an attention mechanism.
The paper is commendably structured, presenting an intuitive concept backed by a robust architecture. The proposed model's performance, particularly in link prediction accuracy, is noteworthy and demonstrates advancement over existing state-of-the-art systems. The authors' innovative approach in partitioning the model into real and imaginary components allows for a nuanced understanding of temporal dynamics within knowledge graphs.
However, the paper needs to improve in several areas. Primarily, the discussion on time complexity needs to be more profound. A more thorough exploration, including experimental results on the training and running times, would have been beneficial for comprehending the model's efficiency. The paper's reproducibility is also limited due to the absence of shared code and detailed technical documentation, such as a system card. These elements are crucial for validating the model's performance and facilitating its adoption in further research.
In light of these observations, a major paper revision is recommended.

Review #4
Anonymous submitted on 10/Mar/2024
Suggestion:
Minor Revision
Review Comment:

(1) Originality: this paper presents an original, incremental work that (a) represents subjects and objects as vectors in high-dimensional complex spaces, (b) represents semantic and temporal relations as diagonal block rotation matrices, (c) transforms the real part of subject vectors with the semantic and temporal rotation matrices and computes the attended sum of both transformed vectors, and finally (d) uses the attended vector to compute similarity with the object vector. This framework is an extension of rotation-based knowledge graph embedding methods. It enables modelling the interplay between the semantics of relations and the temporal conditions, i.e., some relations are time-sensitive, and some are more permanent.
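The scoring pipeline in steps (a) through (d) can be sketched roughly as follows. This is an illustrative reconstruction under assumptions, not the authors' implementation (no code was shared with the submission): the `block_rotation` helper, the two-way softmax attention over learned logits, and all names are placeholders, and only the real-part computation of step (c) is shown.

```python
import numpy as np

def block_rotation(angles):
    """Build a block-diagonal rotation matrix from per-block angles.
    Each angle yields one 2x2 rotation block, so dim = 2 * len(angles)."""
    d = 2 * len(angles)
    R = np.zeros((d, d))
    for i, a in enumerate(angles):
        c, s = np.cos(a), np.sin(a)
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return R

def score(s_real, r_angles, t_angles, o_real, att_logits):
    """Score a quadruple (s, r, t, o) on the real part only:
    rotate the subject by the relation and by the timestamp,
    combine the two views with (assumed learned) attention weights,
    then take the dot product with the object vector."""
    s_r = block_rotation(r_angles) @ s_real   # relation-rotated subject
    s_t = block_rotation(t_angles) @ s_real   # time-rotated subject
    w = np.exp(att_logits - np.max(att_logits))  # softmax over the two views
    w = w / w.sum()
    attended = w[0] * s_r + w[1] * s_t        # attended sum, step (c)
    return attended @ o_real                  # similarity, step (d)
```

Because each 2x2 block is an orthogonal rotation, both transformed views preserve the subject's norm; only the attention weights decide how much the relational versus temporal view contributes to the final score.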

(2) The experiment results are comprehensive and show minor to moderate increments compared with previous work, especially with TLT-KGE, which also treats semantic relations and temporal relations as separate rotations in the embedding space. It indicates that the improvement comes from considering the interplay between semantic and time by using the attention mechanism, which is also discussed in ablation studies. In general, the experiments serve well to support the claims made in the paper.

(3) Quality of Writing. The paper in general flows well, but has some typos and less clearly explained parts. For example, on page 4, line 29, "by the two matrix" should be "by the two-dimensional matrix", and the notations of the transformed subject vectors $s_{\tau}$ and $s_r$ are bold in the text but not bold in the equations, which is inconsistent. On page 5, lines 37 to 46, this part does not explain clearly enough why the additional relation vector is introduced, why it is combined by element-wise sum, and how exactly it is learned (a look-up table of embeddings? Linear layers?), whereas this part is shown to be very critical for the final performance in the ablation studies.

Additionally, there is no "Long-term stable URL for resources" in the submission. Please kindly point to the code and datasets needed to reproduce the results, if available.

In conclusion, my opinion is that this paper needs minor revision.

Review #5
Anonymous submitted on 23/Mar/2024
Suggestion:
Major Revision
Review Comment:

This paper presents a method for link prediction on temporal knowledge graphs that aims to better capture the intricate interplay between the relational and temporal parts of facts in a temporal knowledge graph. The method is built in the complex space, with real and imaginary parts, to effectively encode temporal knowledge. Overall, the paper is easy to follow and well written. Experimental results show that the method outperforms or is on a par with other baseline methods in terms of link prediction. Some detailed comments and questions are listed below:

In terms of originality, the method is not super novel; instead, it provides a means of aggregating various useful pieces of information. However:
(1) The means of separating real and imaginary parts is a little straightforward, if complex space is taken into consideration. Why do you stick with complex space instead of real space?
(2) I am not comfortable with Eq. (5). Why do we need to add r_sem from the model design point of view? What is the motivation? How does the model perform differently with/without this term?
(3) As you mentioned in the introduction, temporal knowledge graphs are usually incomplete, which necessitates the task of link prediction for knowledge graph completion. Have you considered how to deal with missing temporal information? Considering the datasets you used in the experiments, it seems that you assumed the temporal information of statements is always there, which does not reflect reality: temporal information of statements could be completely or partially missing. Furthermore, how do you deal with atemporal statements (e.g., facts that never change over time, such as (Earth, rotates around, Sun)), which are very common in a KG? The datasets currently used exclude such statements.
(4) With regard to the experimental results, the results are comprehensive and look promising. However, the authors only considered entity prediction, while comparisons on time prediction (given s, r, o, predict time) and relation prediction (given s, o, t, predict ?r) are missing.
(5) Missing reference: Time in a Box: Advancing Knowledge Graph Completion with Temporal Scopes