Foundational Patterns Benchmark

Tracking #: 2544-3758

Authors: 
Jana Ahmad
Petr Křemen

Responsible editor: 
Guest Editors Web of Data 2020

Submission type: 
Full Paper
Abstract: 
Recently, there has been growing interest in using ontologies as a fundamental methodology for representing domain-specific conceptual models, in order to improve the semantics, accuracy, and relevance of domain users' query results. At the same time, the volume of data has grown steadily over the past decade, so managing data, answering users' queries, and retrieving data from multiple sources can pose a significant challenge for any enterprise. In this paper, we show how triple store data compliant with the Unified Foundational Ontology (UFO) can be queried efficiently. We also present a foundational patterns benchmark that helps choose the most efficient triple store and its layout. We evaluate the benchmark on both generated and real-world datasets for state-of-the-art triple stores.
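For illustration, a minimal sketch of the kind of foundational-pattern query the abstract refers to, assuming data typed with the gUFO vocabulary (http://purl.org/nemo/gufo#); the paper's actual patterns and vocabulary may differ:

# Hypothetical foundational-pattern query (not taken from the paper):
# retrieve objects together with the qualities that inhere in them.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX gufo: <http://purl.org/nemo/gufo#>

SELECT ?object ?quality
WHERE {
  ?object  rdf:type       gufo:Object .
  ?quality rdf:type       gufo:Quality .
  ?quality gufo:inheresIn ?object .
}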
Tags: 
Reviewed

Decision/Status: 
Reject (Two Strikes)

Solicited Reviews:
Review #1
Anonymous submitted on 22/Sep/2020
Suggestion:
Major Revision
Review Comment:

This paper describes a query benchmark based on the Unified Foundational Ontology (UFO) and discusses the optimizations it enables during query retrieval. The authors make substantial improvements over the first version of the article, such as extending the foundational patterns with more queries (I would not say complex queries, since they are not compared to SoA features on query selection and complexity), adding the Virtuoso triple store (and we now see from Section 7 that Virtuoso outperforms the other triple stores), and a UFO validator using SHACL rules. They also answered almost all of my previous key concerns correctly, but the paper still lacks an answer to the crucial question for any benchmark paper: "How is this paper not yet another benchmark?"
I appreciate the efforts of the authors towards making the paper convincing and of good quality. However, the following main issues still remain to be solved:

*Motivation of the benchmark*: The authors mention that the novelty is that no existing benchmark is based on a foundational ontology. However, the query-time results suggest that Virtuoso performed best, and other studies have shown the same conclusion. What makes a foundational ontology and its datasets more relevant than, for example, the DBpedia dataset and queries, or the LDBC Social Network Benchmark? The reader needs facts and/or a more convincing motivation. Many of the datasets mentioned in the paper are from the same institution.

*Query selection and complexity*: There is a clear need to align your types of queries with existing SoA studies, as suggested by the editors in the reference. How complex are the new queries? I still cannot assess them. Please provide a detailed review of the queries with respect to those features.

*Evaluation*: Thanks for adding Virtuoso and generating larger datasets (10 million). What about Blazegraph, or any other triple store that is present in SoA benchmarks but not considered in this one? Is there any motivation for these choices?
In future work, the authors mention covering more SPARQL features with OWL entailment regimes. How do those features differ from the ones already present in some SoA papers? Please develop this point or make it clearer to the reader.

As mentioned in the paper, the UFO-indexing approach is probably the most interesting aspect of this work. I would suggest starting from that and seeing how the indexing can be adapted to other existing benchmarks (e.g., would it be possible to convert an existing dataset into a UFO-compliant one and then use the same JAPO library?), so as to compare and position the paper better.
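A minimal sketch of this conversion idea, assuming the gUFO vocabulary and a DBpedia-style source; the mapping is illustrative only and not taken from the paper:

# Hypothetical re-typing of an existing dataset as UFO-compliant data,
# so that the same benchmark queries (and, possibly, the JAPO library)
# could be reused on it. The gUFO terms used here are assumptions.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX gufo: <http://purl.org/nemo/gufo#>

CONSTRUCT {
  ?person rdf:type gufo:FunctionalComplex .
}
WHERE {
  ?person rdf:type dbo:Person .
}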

Review #2
By Enrico Daga submitted on 08/Oct/2020
Suggestion:
Major Revision
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing.

I appreciate the effort made by the authors to fill important gaps in the article, particularly in relation to the size of the data used in the experiments.

However, I am still sceptical about the value of the contribution. The UFO-indexing approach is not a novel contribution, and the benchmark presented does not seem to make sense outside the specific case of testing a number of triple stores with it. In other words, who else would need this benchmark? Future UFO users would just use one of the best triple stores from your results (e.g. Virtuoso), without bothering to perform the benchmark again, right?
Don't get me wrong: it is useful to know which triple store performs better with UFO-shaped data; it's just that the resulting benchmark does not generalise outside UFO and is therefore of limited use.

The claim that existing benchmarks do not pay attention to the shape of data seems inaccurate. The shape of data is taken into account at the RDF level, with complex graph patterns, etc. The presented work has a point in generating an ontology-specific benchmark (and this is the most interesting part of the article), but to be a contribution this aspect should be generalised to other ontologies (shapes) and evaluated for its capacity to generate useful, high-quality queries.

What is the take-away message of the article? The performance of the tested triple stores is comparable to previous research, nothing new on that side. The indexing mechanism makes the queries faster. OK, but this is not surprising (and already published).

I feel it appropriate to leave the decision to the editors.

Review #3
Anonymous submitted on 11/Nov/2020
Suggestion:
Accept
Review Comment:

Thanks to the authors for their detailed reply.
All my concerns have been well addressed, so I recommend accepting this paper.