QALD-10 — The 10th Challenge on Question Answering over Linked Data

Tracking #: 3471-4685

Authors: 
Ricardo Usbeck
Xi Yan
Aleksandr Perevalov
Longquan Jiang
Julius Schulz
Angelie Kraft
Cedric Moeller
Junbo Huang
Jan Reineke
Axel-Cyrille Ngonga Ngomo
Muhammad Saleem
Andreas Both

Responsible editor: 
Guest Editors Wikidata 2022

Submission type: 
Dataset Description

Abstract:
Knowledge Graph Question Answering (KGQA) has gained attention from both industry and academia over the past decade. Researchers have proposed a substantial number of benchmarking datasets with different properties, pushing the development in this field forward. Many of these benchmarks depend on Freebase, DBpedia, or Wikidata. However, KGQA benchmarks that depend on Freebase and DBpedia are studied and used less and less, because Freebase is defunct and DBpedia lacks the structural validity of Wikidata. Therefore, research is gravitating toward Wikidata-based benchmarks: new KGQA benchmarks are created on the basis of Wikidata, and existing ones are migrated to it. We present a new, multilingual, complex KGQA benchmarking dataset as the 10th part of the Question Answering over Linked Data (QALD) benchmark series. This corpus formerly depended on DBpedia. Since QALD serves as a basis for many machine-generated benchmarks, we increased the dataset's size and adapted the benchmark to Wikidata and its property ranking mechanism. These measures foster novel KGQA developments through more demanding benchmarks. Creating a benchmark from scratch or migrating it from DBpedia to Wikidata is non-trivial due to the complexity of the Wikidata knowledge graph, mapping issues between different languages, and the mechanism for ranking properties using qualifiers. We present our creation strategy and the challenges we faced, which will assist other researchers in their future work. Our case study, in the form of a conference challenge, is accompanied by an in-depth analysis of the created benchmark.
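As background for the ranking mechanism the abstract refers to, here is a minimal Python sketch (not part of the paper; the example item, property, and queries are illustrative assumptions) showing why statement ranks matter when migrating SPARQL queries to Wikidata: the "truthy" wdt: path returns only best-ranked statements, while the p:/ps: path exposes all statements together with their ranks and qualifiers.

import json

# Assumes the SPARQLWrapper package; the Wikidata Query Service
# predefines the wd:/wdt:/p:/ps:/wikibase: prefixes.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setReturnFormat(JSON)

# Truthy query: heads of government (P6) of Germany (Q183) via wdt:
# -- returns only the preferred-rank statement (the current officeholder).
truthy = """
SELECT ?head WHERE { wd:Q183 wdt:P6 ?head . }
"""

# Full statement query: all P6 statements with their rank, so
# normal-rank statements (former officeholders) are returned as well.
all_statements = """
SELECT ?head ?rank WHERE {
  wd:Q183 p:P6 ?stmt .
  ?stmt ps:P6 ?head ;
        wikibase:rank ?rank .
}
"""

for label, query in (("truthy", truthy), ("all statements", all_statements)):
    endpoint.setQuery(query)
    results = endpoint.query().convert()
    print(label, "->", len(results["results"]["bindings"]), "binding(s)")

A benchmark answer set therefore depends on which access path the gold SPARQL query uses, which is one reason a DBpedia-to-Wikidata migration cannot be a purely mechanical rewrite.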
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 16/Jul/2023
Suggestion:
Accept
Review Comment:

The authors have addressed all the concerns of this reviewer, and this version of the paper has been improved.

A minor comment: on page 9, line 29, please check whether "in this Section" is more appropriate than "in this Chapter".

Review #2
Anonymous submitted on 29/Jul/2023
Suggestion:
Accept
Review Comment:

The authors have fully addressed my concerns about their work. I thus suggest accepting the paper for publication in the SWJ.