Exploring Term Networks for Semantic Search over Large RDF Knowledge Graphs

Tracking #: 1717-2929

Authors: 
Edgard Marx
Saeedeh Shekarpour
Konrad Höffner
Axel-Cyrille Ngonga Ngomo
Jens Lehmann
Sören Auer

Responsible editor: 
Thomas Lukasiewicz

Submission type: 
Full Paper
Abstract: 
Information retrieval approaches are currently regarded as a key technology to empower lay users to access the Web of Data. To address this need, a large number of approaches such as Question Answering and Semantic Search have been developed. While Question Answering promises accurate results by returning a specific answer, Semantic Search engines are designed to retrieve the top-k resources according to a given scoring function. In this work, we focus on the latter paradigm. We aim to address one of the major drawbacks of current implementations, i.e., accuracy. We propose *P, a Semantic Search approach that explores term networks to answer keyword queries on large RDF knowledge graphs. The proposed method is based on a novel graph disambiguation model. The adequacy of the approach is demonstrated on the QALD benchmark data set against state-of-the-art Question Answering and Semantic Search systems, as well as in the Triple Scoring Challenge at the International Conference on Web Search and Data Mining (WSDM) 2017. The results suggest that *P is more accurate than the currently best-performing Semantic Search scoring function while achieving a performance comparable to an average Question Answering system.
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
Anonymous submitted on 03/Nov/2017
Suggestion:
Major Revision
Review Comment:

The paper presents an approach, named *P, for retrieving the top-k answers to keyword queries over RDF knowledge graphs. The aim of the proposed approach is to improve the scoring functions used in semantic search. The paper is not well written, the formalization needs improvement, and the experimental evaluation is not convincing.
Detailed comments:
1. It is not clear what the difference is between the submitted paper and [28], the paper wherein *P has been proposed. The paper should clearly state the differences.
2. The whole paper needs to be rewritten; it is full of typos (see below) and contains several unclear paragraphs.
a. “Figure 1 shows an excerpt of a KG where a literal vertice vi ∈ L (respectively a resource vertice vi ∈ R) is illustrated by a rectangle respectively an oval” - needs to be rewritten;
b. Definitions 2,3, and 4 need to be preceded or followed by some comments;
c. Definition 5: “A literal associated with a resource r denoted by label(r), is the literal value respectively the label of the resource” - needs to be reformulated;
d. Definition 6 needs to be preceded or followed by some comments;
e. “As the SCC is a graph, the resources and literal values are connected by paths formed by edges and vertices Fig. 2.” - needs to be rewritten;
f. Definition 9: “This approach avoids the over and the under scoring of frequent or rare tokens”- this is a comment, and it must not be included in the definition!
3. The experimental evaluation is not convincing.
a. The proposed system was able to fully answer only 8 out of 40 queries, which seems odd. Perhaps the choice of the dataset was not appropriate.
b. The results of the proposed approach in Tables 7 and 8 are very poor.
c. Even the results in Table 4 are not impressive.
d. Every table needs more comments (e.g., how many queries were used; this is not clear for Table 6).
e. The running times are not reported, as the authors say that the focus is accuracy. It does not seem fair to report accuracy results while completely disregarding running times.

Some of the typos:
1. Page 4: in the fuction
2. Page 7: annottated and idetified
3. Page 7: the system perform
4. Page 8: QA systems that uses QA pairs derives
5. Page 8: of this systems
6. Page 9: the an entity
7. Page 11: in a object
8. Page 13: The results achieved by these methods represents
9. Page 13: this systems
10. Page 13: Query 29 consist
11. Page 13: Glimmer and Levenshtein was
12. Page 14: avarage performace

Review #2
Anonymous submitted on 28/Feb/2018
Suggestion:
Reject
Review Comment:

The paper addresses a very important problem in the area of semantic search, namely keyword search over knowledge graphs. The paper provides some techniques to index and match resources for a given keyword query. It is not really clear what the novelty of the paper is. The proposed approach relies mainly on the Jaccard coefficient as a similarity metric to retrieve and rank resources. The authors propose some techniques to retrieve the most relevant resources and to rank them. The proposed techniques are nonetheless extremely ad hoc and do not rest on any principled grounds.
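[For readers unfamiliar with the metric the review refers to: a minimal sketch of Jaccard-based resource ranking over token sets. The function names, the resource dictionary, and the tokenization are illustrative assumptions, not the paper's actual implementation.]

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient |a ∩ b| / |a ∪ b|; defined as 0.0 for two empty sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_resources(query: str, resources: dict, k: int = 3) -> list:
    """Rank resource URIs by token-set Jaccard similarity between the
    query and each resource's label; return the top-k with nonzero score."""
    q = set(query.lower().split())
    scored = [(jaccard(q, set(label.lower().split())), uri)
              for uri, label in resources.items()]
    return [uri for score, uri in sorted(scored, reverse=True)[:k] if score > 0]

# Example: "berlin wall" matches the label "Berlin Wall" exactly (score 1.0)
# and overlaps "Berlin" partially (score 0.5); "Paris" scores 0 and is dropped.
resources = {"dbr:Berlin": "Berlin",
             "dbr:Berlin_Wall": "Berlin Wall",
             "dbr:Paris": "Paris"}
print(rank_resources("berlin wall", resources))
```

Such bag-of-tokens scoring ignores term frequency and graph structure entirely, which is part of what the review means by "ad hoc": two resources with the same token overlap are indistinguishable regardless of their importance in the knowledge graph.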

The evaluation set-up is also rather limited. The authors have used QALD-4 to evaluate their system and compare it to state-of-the-art approaches. However, their system was only able to process merely 12% of all queries in the benchmark, which makes the generality of their results and conclusions questionable.

The authors completely ignore a wealth of work on top-k keyword search over RDF knowledge graphs. The list is quite long, so I will give only a couple of examples: "Top-k Exploration of Query Candidates for Efficient Keyword Search on Graph-Shaped (RDF) Data" by Tran et al., "Scalable Keyword Search on Large RDF Data" by Le et al., and "Keyword Search over RDF Graphs" by Elbassuoni et al. Additionally, the authors fail to identify key works in NL Question Answering such as "Natural Language Question Answering over the Web of Data" by Yahia et al.

The paper also needs some proofreading: it contains a few typos and grammatical mistakes. Overall, the paper is a nice systems work, but it lacks any research insight or lessons. I would advise the authors to re-consider submitting the paper as a systems one.
