Explanation Ontology: A General-Purpose, Semantic Representation for Supporting User-Centered Explanations

Tracking #: 3282-4496

Authors: 
Shruthi Chari
Oshani Seneviratne
Mohamed Ghalwash
Sola Shirai
Daniel M. Gruen
Pablo Meyer
Prithwish Chakraborty
Deborah L. McGuinness

Responsible editor: 
Guest Editors Ontologies in XAI

Submission type: 
Full Paper
Abstract: 
In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable to expert and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important for modeling the larger set of fifteen literature-backed explanation types supported within the expanded EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO's capabilities and provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them with exemplar queries to explore content in the EO-represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and to support explanations that can be composed from their systems' outputs, which may come from a mix of machine learning, logical, and explainer models, together with the different types of data and knowledge available to their systems.
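
For readers who want to explore the released ontology programmatically, the sketch below shows one way to load the EO from its PURL and pose a competency-question-style SPARQL query using rdflib. It is illustrative only: it assumes the PURL resolves to an RDF/XML serialization and that the ontology uses the namespace https://purl.org/heals/eo# with a root Explanation class; consult the resource website for the authoritative IRIs and the paper's own exemplar queries.

from rdflib import Graph

g = Graph()
# Assumes the PURL content-negotiates to RDF/XML; adjust format= if needed.
g.parse("https://purl.org/heals/eo", format="xml")

# Competency-question style: which explanation types does the EO model?
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX eo:   <https://purl.org/heals/eo#>
SELECT ?type ?label WHERE {
  ?type rdfs:subClassOf+ eo:Explanation ;
        rdfs:label ?label .
}
"""

for row in g.query(QUERY):
    print(row.type, row.label)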
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 03/Dec/2022
Suggestion:
Accept
Review Comment:

The proposal is original; to the best of my knowledge, there is no other general-purpose ontology for explanations.

The new version of the manuscript has much improved the reading flow.
The authors have expanded several sections, particularly the one on the ontology's evaluation methods.
I am happy with the changes that addressed my comments. I think this is a very nice engineering tool that helps standardize explanations for systems.
This tool is mainly for system designers who can use tools such as Protégé and can work with KGs and ontologies to recover all the pieces that particular explanations require.
I find it very difficult to translate this to a development and deployment stage, where tools like Protégé are not adequate for performance and interoperability reasons. This calls into question the real impact the proposal can have in real-world development scenarios.

Minor typos and questions:

- Page 7: "EO Modeling Summary and Intended Usage:" -> parenthesis does not close.

- Table 3, Everyday explanations: which knowledge would such an explanation type use?

- Table 4, Scientific explanations: "References the results of rigorous scientific methods, observations, and measurements.
“What studies have backed this recommendation?”
Would an implementation of this link the explanation to object records that are, for instance, scientific papers? (One possible shape is sketched after these comments.)

- Page 12: "EO. Each of uses case has a set of example questions for which different explanation methods and/or ML methods are run" -> Each of the use cases has a set...
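
Regarding the Table 4 question above, one possible shape for such a link, sketched with rdflib, is shown below. The class and property names (eo:ScientificExplanation, eo:isBasedOn) are assumptions made for illustration, not confirmed EO terms; the EO documentation should be consulted for the actual vocabulary.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EO = Namespace("https://purl.org/heals/eo#")  # hypothetical term IRIs
EX = Namespace("http://example.org/")

g = Graph()
g.bind("eo", EO)
g.bind("ex", EX)

expl = EX.scientificExplanation1   # the explanation instance
paper = EX.studyRecord1            # an object record standing in for a study

g.add((expl, RDF.type, EO.ScientificExplanation))  # assumed class name
g.add((expl, EO.isBasedOn, paper))                 # assumed property name
g.add((paper, RDFS.label, Literal("Randomized trial backing the recommendation")))

print(g.serialize(format="turtle"))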

Review #2
By Ilaria Tiddi submitted on 20/Dec/2022
Suggestion:
Accept
Review Comment:

I am happy with the authors' response and the changes they made to their paper. I recommend acceptance.