Investigating Knowledge Elicitation Automation with Large Language Models

Tracking #: 3868-5082

This paper is currently under review
Authors: 
Sherida van den Bent
Romana Pernisch
Stefan Schlobach

Responsible editor: 
Guest Editors 2025 LLM GenAI KGs

Submission type: 
Full Paper
Abstract: 
Knowledge Elicitation, the process of extracting and structuring expert knowledge, is crucial for fields ranging from Artificial Intelligence (AI) to decision support systems. Traditionally, this process has relied on human experts, making it time-consuming and resource-intensive. With the rapid advancement of Large Language Models (LLMs), there is growing interest in their potential role in Knowledge Elicitation and ontology generation. This research investigates the feasibility of using LLMs, specifically ChatGPT (GPT-4), for automated Knowledge Elicitation and compares AI-led approaches to traditional human expert interviews. To evaluate this, a series of interviews was conducted with both human experts and an LLM, and the extracted knowledge was transformed into RDF ontologies using different pipelines, ranging from fully AI-generated to fully human-created ontologies. The research employs OQuaRE metrics and structural analysis to compare the generated ontologies against a ground-truth ontology. The results indicate that AI-led interviews are more time-efficient and structured than human expert interviews. For ontology generation, however, a human approach performs better: AI-generated ontologies are more standardized but omit substantial information, whereas human-created ontologies capture more of the elicited knowledge. These findings suggest that a hybrid approach, using LLMs as stand-ins for experts in the interview phase while relying on human knowledge engineers for ontology generation, offers the best balance between speed and quality in Knowledge Elicitation.
Tags: 
Under Review