Abstract:
Knowledge Graph Question Answering (KGQA) has gained attention from both industry and academia over the past
decade. Researchers have proposed a substantial number of benchmarking datasets with different properties, pushing the development of this field forward. Many of these benchmarks depend on Freebase, DBpedia, or Wikidata. However, KGQA benchmarks that depend on Freebase and DBpedia are studied and used less and less, because Freebase is defunct and DBpedia lacks the structural validity of Wikidata. Therefore, research is gravitating toward Wikidata-based benchmarks. That is, new KGQA
benchmarks are created on the basis of Wikidata and existing ones are migrated. We present a new, multilingual, complex
KGQA benchmarking dataset as the 10th part of the Question Answering over Linked Data (QALD) benchmark series. This
corpus formerly depended on DBpedia. Since QALD serves as a basis for many machine-generated benchmarks, we increased the size of the dataset and adjusted the benchmark to Wikidata and its ranking mechanism for properties. These measures foster novel KGQA developments through more demanding benchmarks. Creating a benchmark from scratch or migrating it from DBpedia to Wikidata is
non-trivial due to the complexity of the Wikidata knowledge graph, mapping issues between different languages, and the ranking
mechanism of properties using qualifiers. We present our creation strategy and the challenges we faced, which will assist other researchers in their future work. Our case study, in the form of a conference challenge, is accompanied by an in-depth analysis
of the created benchmark.