User Experience Benchmarking and Evaluation to Guide the Development of a Semantic Knowledge Graph Exploration Tool

Tracking #: 3482-4696

This paper is currently under review
Roberto García
Juan-Miguel López-Gil
Jordi Virgili-Gomà
Rosa Gil

Responsible editor: 
Guest Editors Interactive SW 2022

Submission type: 
Full Paper
Abstract: 
Despite the increasing amount of semantic data available, user-facing applications based on semantic technologies have seen little adoption, especially those geared toward the exploration of disparate semantic datasets. Benchmarks have proven useful in driving advances in different domains, yet until recently there was no benchmark for semantic data exploration tools. Using the Benchmark for End-User Structured Data User Interfaces (BESDUI), we now explore how it can guide the development of a new tool for semantic knowledge graph exploration, RhizomerEye. At its current stage of development, the tool scores better on the benchmark than its predecessor. However, there is a risk of overfitting the tool to the benchmark: overloading the user interface to maximize benchmark scores while producing an unusable UI. To avoid this, an evaluation with real users has also been conducted, using the same dataset and tasks provided by the benchmark, but measuring User Experience (UX) with real users instead of deriving the UX metrics analytically. Moreover, the evaluation has been complemented with the user satisfaction dimension, which the benchmark cannot measure. Overall, the results are promising and comparable to those of the benchmark, especially for users with knowledge of semantic technologies. In addition, the evaluation with real users has made it possible to identify potential improvements to RhizomerEye, also taking into account user satisfaction, as well as ways to make BESDUI better suited for evaluations with real users.