A Survey on Interaction Design with Large Language Models for Ontology Requirements Elicitation with Competency Questions

Tracking #: 3866-5080

This paper is currently under review
Authors: 
Yihang Zhao
Xi Hu
Timothy Neate
Albert Meroño-Peñuela
Elena Simperl

Responsible editor: 
Dagmar Gromann

Submission type: 
Survey Article
Abstract: 
Competency questions (CQs) are essential in ontology engineering (OE), as they express an ontology's functional goals and serve as a foundation for its construction, evaluation, and reuse. Large language model (LLM)-based systems for CQ elicitation have recently attracted substantial attention. These systems can generate thousands of candidate CQs from domain experts or knowledge sources to help define the boundaries of a target application domain. However, current interaction paradigms fall short in supporting knowledge engineers in auditing target domain boundaries to ensure that CQs are neither too few, risking critical omissions, nor too many, causing information overload. This gap reflects the absence of support for processes that closely align with the divergent (lateral) and convergent (vertical) thinking observed in the arts and creativity domain. We therefore present a systematic literature review (N = 50) investigating interaction design patterns in LLM-based systems that support divergent–convergent thinking in the arts and creativity domain. We then map the identified patterns to the context of CQ elicitation and propose an interaction model that extends the existing OntoChat system, enabling knowledge engineers to navigate a set of candidate CQs on an interactive canvas. The model supports exploration, evaluation, and informed decisions about what to add, discard, or revise in order to progressively define the scope of the target domain.
Tags: 
Under Review