Review Comment:
First of all, I would like to thank the authors for taking my previous comments into account and considerably improving the quality of the paper (technically, but also in its writing and structure). I will make some additional comments on each section of the current version:
### Introduction
It has been improved from the first version and is now an actual introduction instead of a motivation. However, I think that the contributions of the paper could be stated more clearly (not only the tool, but also the resources such as the vocabulary, the declarative planning, and the evaluation). Additionally, I was expecting something like a real motivating example that demonstrates the status of other tools/systems and why we need ABECTO.
### Related work
This section is still not a related work section, IMO. A related work section would cover tools/systems/vocabularies and other approaches that are able to calculate accuracy and completeness. Right now, I have the feeling that the current section reads more like a background section (with definitions of the concepts, etc.). I’m not saying that it isn’t good (I like it), but it is not related work. It is also still not clear to me why Section 2.2 is relevant for this paper; perhaps a better motivation would help.
### Requirements
Please try to limit the citation of under-review or rejected papers (e.g., citation [22] of your paper). There are many relevant papers and theses on knowledge graph construction in the state of the art (see [1,2])! Still, I think a methodology describing how you extracted these requirements is needed.
You say: “During an earlier comparison of unit ontologies”... are we comparing ontologies or knowledge graphs? Can we perform both kinds of comparison? The same doubt arose when I read the first use case, where it is not clear whether you are comparing data or schema.
### ABECTO
Again, the title of the section says ABox Evaluation and Comparison Tool for Ontologies, which seems a bit contradictory. Please clarify this aspect, because it is very important.
It would be good to have some idea of what the built-in reports look like; as it stands, this part is difficult to follow.
The description of the vocabulary is considerably better, but claims like “We could not reuse the similar properties owl:sameAs, owl:equivalentClass and owl:equivalentProperty from OWL” need a better justification.
Comparison processors: Together with the technical description, could you please add examples of each step? It is complicated to follow as is. Additionally, please review this part, because several values are not described (e.g., what does the denominator in the estimated completeness measure mean?).
### Workflow
Is this section required? Since this is a tool paper, I would like to have a section (maybe a subsection of Section 4) that describes the technicalities of the tool: not how to run it, but where it is hosted, the programming language, whether it has unit tests or any CI/CD to ensure its sustainability, its license, and its potential impact.
### Use Case applications
This is the section that most convinced me towards acceptance of the paper. It is now clear that the tool is used by, and useful for, others. Please clarify in the first use case whether ontologies or knowledge graphs are compared. In the second, add proper links to the resources (e.g., I needed to search by myself for what the Wikidata Mismatch Finder is, and only then did I understand how ABECTO works in this case). Is there any reason why Space Travel Data was selected? A better motivation or justification would help. If it is only an example, it would also be good to mention the general procedure that could be followed with ABECTO for enhancing Wikidata.
[1] Van Assche, D., Delva, T., Haesendonck, G., Heyvaert, P., De Meester, B., & Dimou, A. (2023). Declarative RDF graph generation from heterogeneous (semi-) structured data: A systematic literature review. Journal of Web Semantics, 75, 100753.
[2] Chaves Fraga, D. (2021). Knowledge Graph Construction from Heterogeneous Data Sources exploiting Declarative Mapping Rules (Doctoral dissertation). https://doi.org/10.20868/UPM.thesis.67890