Engineering User-centered Explanations to Query Answers in Ontology-driven Socio-technical Systems

Tracking #: 3053-4267

Authors: 
Carlos Teze
Jose N. Paredes
Maria Vanina Martinez
Gerardo Simari

Responsible editor: 
Guest Editors Ontologies in XAI

Submission type: 
Full Paper

Abstract: 
The role of explanations in intelligent systems has in the last few years entered the spotlight as AI-based solutions appear in an ever-growing set of applications. Though data-driven (or machine learning) techniques are often used as examples of how opaque (also called black box) approaches can lead to problems such as bias and general lack of explainability and interpretability, in reality these features are difficult to tame in general, even for approaches that are based on tools typically considered to be more amenable, like knowledge-based formalisms. In this paper, we continue a line of research and development towards building tools that facilitate the implementation of intelligent socio-technical systems, focusing on features that users can leverage to build explanations to their queries. In particular, we propose user-centered mechanisms for building explanations based both on the kinds of explanations required (such as counterfactual, contextual, etc.) and the inputs used for building them (such as the ABox, TBox, and lower-level data-driven modules). In order to validate our approach, we develop a use case in the domain of hate speech in social platforms, using a recently-proposed application framework, and make available the source code for this tool.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
Anonymous submitted on 06/Apr/2022
Suggestion:
Major Revision
Review Comment:

(1) originality: This paper is lacking in describing how it compares to other related work, especially that of Chari et al., who proposed an explanation ontology (https://link.springer.com/chapter/10.1007/978-3-030-62466-8_15) that models the user-centered explanation types for which the authors design style templates.
(2) significance of the results: The results are comprehensive in an application suited to the cybersecurity domain. However, beyond the cybersecurity domain, I am not sure how the explanation templates would be able to represent explanations in other domains more generally. I think the authors should provide results in other domains to demonstrate the general-purpose capabilities of their tool.
(3) quality of writing: Overall, the manuscript is easy to understand, but the writing can be improved to reduce redundancy in content and writing style (for example, by eliminating hyphens).

This paper is yet another step toward an important contribution: providing users with the ability to populate user-centered explanation types from different sources, including knowledge base content and database knowledge. While this ability to populate explanations is not new and has already been covered earlier in the explanation ontology, the authors take a different approach in terms of formalism and define logical rules/style templates in Datalog to populate seven user-centered explanation types. I think the authors could include a justification of why an ontology-based approach doesn't work well for their use case and why they chose Datalog instead.
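For concreteness, here is a minimal sketch of the kind of Datalog-style template I understand the authors to mean; all predicate names and the rule itself are my own illustrative assumptions (not taken from the paper), and the snippet should run as-is under a standard Datalog/ASP engine such as clingo or DLV:

% Illustrative facts: output of a lower-level data-driven module plus ABox-style knowledge.
flagged_by_classifier(post1, hate_speech).
author(post1, user42).

% Domain rule: a post flagged by the classifier violates the hate speech policy.
violates_policy(P, hate_speech) :- flagged_by_classifier(P, hate_speech).

% Style template: populate a simple contextual explanation from the facts supporting the conclusion.
explanation(contextual, P, hate_speech, U) :- violates_policy(P, hate_speech), author(P, U).

A short justification along these lines, contrasted with how the same template would be expressed over the explanation ontology, would make the choice of formalism much easier to assess.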

They have also developed methods for combining explanation types based on findings from a user study. However, the results of the user study are not described beyond the content in Fig. 5. It would be helpful if the authors could provide more insight into how they chose three explanation types as the fundamental ones on which the combinations are rooted.

The abstract and contributions mention that an important part of this work is the application of the explanation styles to the cybersecurity domain. However, the cybersecurity section of the paper is quite slim. Also, I cannot tell how the explanation styles are combined with content from the HEIST application framework. Hence, I request that the authors provide more examples and use cases to demonstrate the utility of their explanation tool. The use case section of the paper can be placed right after the technical descriptions so that the readers can apply their understanding of the tool to a working example. 

I also find that related work is peppered throughout the paper. It can often get confusing to understand how this related work applies to the contributions of the paper. Instead, I suggest that the authors reserve most of the related work descriptions for the background and associated sections. On a similar line, I think the introduction contains a lot of related work, which reduces the emphasis on the main takeaways and the need for the work. I suggest that the authors spend some time reorganizing the introduction to convey why explanations are needed in the cybersecurity domain in general, what kinds of applications in the cybersecurity domain demand explanations, briefly state why existing techniques don't work, and then move on to describing the contributions. Also, from my understanding of this paper, the method developed is mainly for the cybersecurity domain. If that is the case, the authors should clarify in the contributions that this is a domain-specific implementation. If not, the authors should spend some time in a discussion section on how they expect readers to apply this framework outside of the cybersecurity domain.

Finally, I glanced through the code in the HEIST GitHub repository: https://github.com/jnparedes/HEIST. All the code files described in the paper seem to be present; however, due to the lack of a guide on how to run them, I am not sure how I can test the implementation. I request that the authors include details on this in their README file.

In addition to my high-level review comments, I also include a few remarks on specific sections of the paper below.

Comments:
- In Fig. 4 (rather, Table 4), the FO column is not referred to in the text before it is mentioned here. Also, it is confusing to have to go back to understand what the columns mean. Consider expanding the abbreviations used in the columns in the caption.
- The findings from the study that yielded the valid combinations of explanation types are not described well enough. It is hard to tell, from the results in Fig. 5 / Tab. 5, which patterns the study participants were interested in.
- The related work section needs reorganization. Ontologies are mentioned in the data and hybrid explanations sections but not in the knowledge-based explanations section, where they belong better. Also, in the same section, the term "network knowledge-based explanations" is used without having been introduced earlier. I think the definition of this term belongs in the introduction.

Review #2
Anonymous submitted on 23/May/2022
Suggestion:
Major Revision
Review Comment:

The paper describes user-centered mechanisms for building explanations based both on the kinds of explanations required and on the inputs used for building them.
The aim of the authors is to realize a transparent-by-design intelligent socio-technical system.
The authors then developed a use case on hate speech to demonstrate the viability of their strategy.
The source code of the tool has been made available.

The content available in this version of the manuscript is in very good shape.
The related work is appropriate and almost complete: I would suggest that the authors check the proceedings of the latest AAAI edition for the latest advances in XAI.
The presentation of the tool is clear, as is the description of how each individual module works.
The use case is useful to demonstrate the suitability of the proposed solution.

I have only one big concern, which is related to the evaluation of what has been presented in the paper.
My feeling is that there are two kinds of evaluation that should be included in order to validate the tool and its effectiveness in a complete way.

First, the tool is supposed to be the result of a methodology that has been implemented. Hence, an evaluation of such a methodology should be provided by experts in the XAI field.
For example, there should be some research questions to be answered, such as: "Is the explanation generation process compliant with trustworthiness principles?"
I would invite the authors to check the following works:
- María Poveda-Villalón, Alba Fernández-Izquierdo, Mariano Fernández-López, Raúl García-Castro: LOT: An industrial oriented ontology engineering framework. Eng. Appl. Artif. Intell. 111: 104755 (2022)
to check how to evaluate the tool from an engineering perspective
- Andreas Holzinger, André M. Carrington, Heimo Müller: Measuring the Quality of Explanations: The System Causability Scale (SCS). Künstliche Intell. 34(2): 193-198 (2020)
to check how to evaluate the quality of the explanations.

Second, it is expected that each explanation should have a certain impact on users' behavior.
It was not completely clear to me whether this is the case in this work, but such an impact should also be evaluated by observing users' behavior in response to the explanations they receive (if any).
The authors may find an example, in a different domain but with the same purpose of validating the impact of explanations, in this work:
- Mauro Dragoni, Ivan Donadello, Claudio Eccher: Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice. Artif. Intell. Medicine 105: 101840 (2020)