ABECTO: Assessing Accuracy and Completeness of RDF Knowledge Graphs

Tracking #: 3443-4657

Authors: 
Jan Martin Keil

Responsible editor: 
Guest Editors Tools Systems 2022

Submission type: 
Tool/System Report
Abstract: 
Accuracy and completeness of RDF knowledge graphs are crucial quality criteria for their fitness for use. However, assessing the accuracy and completeness of knowledge graphs requires a basis for comparison. Unfortunately, in most cases, there is no gold standard to compare against. As an alternative, we propose the comparison with other, overlapping RDF knowledge graphs of arbitrary quality. We present ABECTO, a command line tool that implements a pipeline-based framework for the comparison of multiple RDF knowledge graphs. For these knowledge graphs, ABECTO provides quality annotations, such as value deviations, and quality measurements, such as completeness. This enables knowledge graph curators to monitor quality and can help potential users to select an appropriate knowledge graph for their purpose. We demonstrate the usefulness of ABECTO for the improvement of knowledge graphs with two example use case applications.

Decision/Status: 
Reject (Two Strikes)

Solicited Reviews:
Review #1
By Edgard Marx submitted on 20/May/2023
Suggestion:
Accept
Review Comment:

The author provided answers to all of my questions as well as the information that was missing from the article.
The tool is openly available through GitHub and the results are provided on Zenodo.
Without further ado, my decision follows my previous recommendation, which is to accept.

Review #2
Anonymous submitted on 06/Sep/2023
Suggestion:
Minor Revision
Review Comment:

First of all, I would like to thank the author for taking into account my previous comments and considerably improving the quality of the paper (technically, but also in terms of writing and structure). I will make some additional comments on each section of this current version:

### Introduction
It has been improved from the first version and is now an actual introduction instead of a motivation. However, I think that the contributions of the paper could be made clearer (not only the tool but also the resources, like the vocabulary, the declarative planning, and the evaluation). Additionally, I was expecting something like a real motivating example that demonstrates the status of other tools/systems and why we need ABECTO.

### Related work
This section is still not a related work section, in my opinion. A related work section would cover tools/systems/vocabularies and other approaches that are able to calculate accuracy and completeness. Right now, I have the feeling that the current section is more like a background section (with the definitions of the concepts, etc.). I'm not saying that it isn't good (I like it), but it is not related work. It's also still not clear to me why Section 2.2 is relevant for this paper; maybe a better motivation could be provided.

### Requirements
Please try to limit the citation of under-review or rejected papers (e.g., citation [22] of your paper). There are many relevant papers and theses on knowledge graph construction in the state of the art (see [1,2])! Still, I think that a methodology describing how you extracted these requirements is required.

You say: “During an earlier comparison of unit ontologies”... are we comparing ontologies or knowledge graphs? Can we perform both kinds of comparison? The same doubts arose when I read the first use case, where it's not clear whether you are comparing data or schema.

### ABECTO
Again, the title of the section says ABox Evaluation and Comparison Tool for Ontologies, which seems a bit contradictory. Please clarify this aspect, because it's very important.

It would be good to have some idea of what the built-in reports look like; right now it's difficult to follow.
The description of the vocabulary is considerably better, but claims like “We could not reuse the similar properties owl:sameAs, owl:equivalentClass and owl:equivalentProperty from OWL” need a better justification.
Comparison processors: together with the technical description, can you please add examples of each step? It's complicated to follow. Additionally, please review this part, because there are many values that are not described (e.g., what does the denominator in the estimated completeness measure mean?).

### Workflow
Is this section required? For a tool paper, I would like to have a section (maybe a subsection of Section 4) that describes a bit of the technicalities of the tool: not how to run it, but where it is hosted, the programming language, whether it has unit tests or any CI/CD to ensure its sustainability, its license, and its potential impact.

### Use Case applications
This was the section that most convinced me towards acceptance of the paper. Now it's clear that the tool is used by and useful for others. Please clarify in the first use case whether you mean ontologies or knowledge graphs. In the second, add proper links to resources (e.g., I needed to search by myself to find out what the Wikidata Mismatch Finder is, and only then did I understand how ABECTO works in this case). Is there any reason why Space Travel Data was selected? It needs better motivation or justification. If it's only an example, it would be good to also mention the general procedure that could be used with ABECTO for enhancing Wikidata.

[1] Van Assche, D., Delva, T., Haesendonck, G., Heyvaert, P., De Meester, B., & Dimou, A. (2023). Declarative RDF graph generation from heterogeneous (semi-) structured data: A systematic literature review. Journal of Web Semantics, 75, 100753.
[2] Chaves-Fraga, D. (2021). Knowledge Graph Construction from Heterogeneous Data Sources exploiting Declarative Mapping Rules (Doctoral dissertation). https://doi.org/10.20868/UPM.thesis.67890

Review #3
Anonymous submitted on 01/Oct/2023
Suggestion:
Major Revision
Review Comment:

Overall, I am happy to see a very improved version of the paper. I think that most of the comments were addressed in the paper.

I have concerns regarding the usability and potential adoption of the ABECTO tool within the broader community. While the concept behind ABECTO is promising and addresses an important need in the assessment of knowledge graph quality, there are several factors that may hinder its adoption:
- ABECTO, as described, appears to be a command-line tool with a pipeline-based framework. The tool's complexity may limit its accessibility to a wider audience.
- Establishing a feedback loop with users is essential for continuous improvement. The authors should consider how they plan to collect feedback, address issues, and incorporate user suggestions to enhance the tool's usability and functionality.

>> Regarding my comment on section 8.
But in this case, if we focus on the best fit, this could lead to overspecialization. Knowledge graphs tailored too precisely to one application may struggle to find relevance beyond that context, limiting their potential for broader reuse.
A knowledge graph with a broader scope may serve the interests of a larger community, and it may not be of bad quality at all even if it does not cover one specific domain in depth.

>> Reading the introduction, I think other important things are missing. Regarding the Motivation section, which was renamed Introduction, I appreciate the efforts made to improve it according to my suggestion. However, a few things are still missing. First, providing a definition of knowledge graph is a good idea, but for a clear definition you could use and reference a reputable source like the book “Knowledge Graphs” by Hogan et al., 2021.

Additionally, the authors have rightly mentioned their previous work, but it is crucial to explicitly highlight the differences between their prior research and the current contribution. It is important to clarify what the delta is with respect to your previous work.

>> While it's important to make innovative contributions to knowledge graph quality assessment, it's equally vital to build upon existing knowledge and concepts. This approach ensures efficiency and prevents unnecessary duplication of effort. One notable example is the work by Zaveri, who has already made significant strides in defining accuracy and completeness metrics for RDF data. Moreover, it's essential to emphasize the differences between data models (relational vs. RDF data) and how their structural characteristics influence the way metrics are defined. This analysis has already been done by Zaveri et al., so it is crucial to use it and not reinvent the wheel by starting again from Wang and Strong.

>> Please consider citing the following paper for scalability: “A Scalable Framework for Quality Assessment of RDF Datasets”.

Review #4
Anonymous submitted on 01/Oct/2023
Suggestion:
Major Revision
Review Comment:

Even though the quality of the paper has been improved, the reviewers have several concerns, such as the impact of the tool and the motivation for building it, which should be clear from the beginning. The related work should be improved to situate your work with respect to existing methods. Overall, the paper still needs to be improved significantly. The authors can make a fresh submission to the main System & Tools track after taking into account the comments thoroughly.