On The Role of Knowledge Graphs in Explainable AI

Tracking #: 2198-3411

Authors: 
Freddy Lecue

Responsible editor: 
Guest Editor 10-years SWJ

Submission type: 
Other

Abstract: 
The current hype around Artificial Intelligence (AI) mostly refers to the success of machine learning and its sub-domain of deep learning. However, AI is also about other areas, such as knowledge representation and reasoning, or distributed AI, i.e., areas that need to be combined to reach the level of intelligence initially envisioned in the 1950s. Explainable AI (XAI) now refers to the core backup for industry to apply AI in products at scale, particularly for industries operating with critical systems. This paper reviews XAI not only from a Machine Learning perspective, but also from the other AI research areas, such as AI Planning or Constraint Satisfaction and Search. We expose the XAI challenges of AI fields, their existing approaches, limitations, and opportunities for knowledge graphs and their underlying technologies.
Tags: 
Reviewed

Decision/Status: 
Minor Revision

Solicited Reviews:
Review #1
By Dagmar Gromann submitted on 28/Jun/2019
Suggestion:
Accept
Review Comment:

Thanks for this very interesting paper. The discussion of explainable AI across such a broad variety of subfields, instead of the popular trend of focusing on NN explainability, is a pleasure to read, especially since each field is well presented and discussed, which is quite an achievement given the space constraints.

One potential addition in the field of traditional ML could be the consideration of distant supervision and semi-supervised learning, which are just as important as supervised, unsupervised, and reinforcement learning. In the same field, I would also rather talk about random forests than decision forests, since this is the more conventional naming for the related algorithms (e.g., "decision trees or forests"), unless there is some class of algorithms I am not aware of. Several of the opportunities in this section were the reasons for introducing knowledge graph embeddings and Graph Convolutional Networks. Even though this section explicitly excludes NN approaches, maybe it would be worth adding knowledge graph encodings/representations other than, or alternative to, KG embeddings? One further opportunity for ML could be the injection of knowledge graph representations into the actual training process, as is done increasingly for NN approaches. Maybe this could also be an option for traditional ML, as sketched below.
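To make that last suggestion concrete, here is a minimal sketch of such an injection for a traditional ML model; the entity embeddings, feature shapes, and data are illustrative placeholders (not from the paper), assuming per-instance KG embeddings are available from some pre-trained model such as TransE:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_samples, n_features, kg_dim = 200, 10, 32
X_raw = rng.normal(size=(n_samples, n_features))   # conventional hand-crafted features
kg_emb = rng.normal(size=(n_samples, kg_dim))      # placeholder: KG embedding of the entity linked to each instance
y = rng.integers(0, 2, size=n_samples)             # placeholder binary labels

# "Injection" as simple feature-level fusion: the KG embedding
# becomes part of the model's input space.
X = np.hstack([X_raw, kg_emb])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Feature importances over the KG dimensions give one handle on how much
# the graph signal contributes, i.e., a hook for post-hoc explanation.
kg_share = clf.feature_importances_[n_features:].sum()
print(f"share of importance attributed to KG features: {kg_share:.2f}")

The same fusion would of course work with any classical model, and the importance split between raw and KG features is itself a crude but human-readable explanation signal.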

For NN-based KG approaches, it might be worth including recent neural-symbolic reasoning endeavors, such as Makni, B., & Hendler, J. (forthcoming). Deep Learning for Noise-tolerant RDFS Reasoning. Special Issue of the Semantic Web Journal on Semantic Deep Learning, which represents KGs as "graph words". It might also be an idea to reference Pascal Hitzler's vision paper "Neural-Symbolic Integration and the Semantic Web".

For computer vision, the same special issue also features a paper on combining detection with reasoning in a loop, which may be interesting for "Approaches": Alirezaie, M., Längkvist, M., Sioutis, M., & Loutfi, A. (2019). Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation. Special Issue of the Semantic Web Journal on Semantic Deep Learning. The role of the footnote here is not entirely clear to me: is this where those ideas are taken from?

Robotics: potentially interesting recent approaches (the first actually addresses the named limitation):
Pomarlan, M., Porzel, R., Bateman, J., & Malaka, R. (2018, November). From sensors to sense: Integrated heterogeneous ontologies for Natural Language Generation. In Proceedings of the Workshop on NLG for Human–Robot Interaction (pp. 17-21).
Patki, S., Daniele, A. F., Walter, M. R., & Howard, T. M. (2019). Inferring compact representations for efficient natural language understanding of robot instructions. arXiv preprint arXiv:1903.09243.

For multi-agent system approaches, formalizations of agent interactions by Marco Schorlemmer et al. (e.g. reference below) might be interesting:
Chocron, P., & Schorlemmer, M. (2018, July). Inferring commitment semantics in multi-agent interactions. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1150-1158). International Foundation for Autonomous Agents and Multiagent Systems.

The NN-based NLP opportunities are largely the same as the NN opportunities, since NN architectures are currently the predominant architectures in NLP. A tree-like structure has been introduced to NLP tasks in the form of dependency trees:
Socher, R., Lin, C. C., Manning, C., & Ng, A. Y. (2011). Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11) (pp. 129-136).
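For illustration, a toy sketch of recursive composition over a (binarized) parse tree in the spirit of this line of work; the tree, word vectors, and weights below are made-up placeholders rather than the trained model of that paper:

import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = rng.normal(scale=0.1, size=(dim, 2 * dim))     # shared composition weights

word_vecs = {w: rng.normal(size=dim) for w in ["the", "cat", "sat"]}

def compose(node):
    # Bottom-up: a leaf is a word vector; an inner node is
    # tanh(W [left_child; right_child]).
    if isinstance(node, str):
        return word_vecs[node]
    left, right = node
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]))

# ((the cat) sat) as a nested tuple standing in for a parse tree
sentence_vec = compose((("the", "cat"), "sat"))
print(sentence_vec)

The explanatory appeal is that each intermediate vector corresponds to a linguistically meaningful constituent, unlike the opaque hidden states of a flat sequence model.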

Also less towards explainability and more towards NLP+KG, but potentially interesting:
Petrucci, G., Rospocher, M., & Ghidini, C. (2018). Expressive ontology learning as neural machine translation. Journal of Web Semantics, 52, 66-82.

Some recommendations for cross-references within the special issue:
- Neural-Symbolic Integration and the Semantic Web http://www.semantic-web-journal.net/content/neural-symbolic-integration-...
- Leveraging Knowledge Graphs for Big Data Integration http://www.semantic-web-journal.net/content/leveraging-knowledge-graphs-...
- Machine Learning for the Semantic Web: Lessons Learnt and Next Research Directions http://www.semantic-web-journal.net/content/machine-learning-semantic-we...
- Neural Language Models for the Multilingual, Transcultural, and Multimodal Semantic Web http://www.semantic-web-journal.net/content/neural-language-models-multi...

Minor comments - general:
- Capitalization of research fields and applications should be made consistent; for instance, "Constraint satisfaction and Search", "Constraint Satisfaction and Search", and "constraint satisfaction and search" all appear. Additionally, Semantic Web as a term is generally always camel-cased, as is Artificial Intelligence.
- quotation marks are inconsistent; in LaTeX the easiest is `` and ''

Minor comments - in order of appearance:
1.22 other areas such as => other areas, such as
1.23 distributed AI i.e. => distributed AI, i.e., (no comma before i.e. in several places)
1.24 is now referring => now refers
1.39 is addressing intelligence => and addresses intelligence
1.45 of the AI => of AI
1.48 general artificial intelligence i.e., => general artificial intelligence, i.e.,
1.34 more applicability dimension => more applied dimension
2.2 features attribution => feature attribution
2.32 from which emerged ... systems => from which ... systems emerged
2.35 most successful than others => more
2.40 a few work => works
2.41 semantic Web => Semantic Web (repeatedly)
2.41 linked data => Linked Data
2.12 are you sure you mean arbitral limits? what would that be for a taxonomy?
2.17 Neural Netwok => Networks
2.30 a mathematical models => omit a
2.37 explicit their rational => rationale
2.46 approaches limits => limit
2.50 features importance => feature
3.30 better representation of data => representations; an ML model
3.44 with high number => with a high number
3.46 which fit better => which better fit images and texts
3.48 are strong focus => are a strong focus
3.26 [20] such as => [20], such as
3.37 architectures needs => need
3.39 aims => aim
3.46 Figure ??
3.47 a central roles => role
4.30 is relying => relies
4.32 reconstruction, visual => reconstruction to visual
4.38 referred as => referred to as
4.42 many variant => variants
4.46 salience map => maps
4.51 However integrating => However, integrating
5.6 a NP => an NP
5.10 [26], [27] => [26, 27]
5.2 randomization feature values coalition => ? randomization of?
5.4 As recently explored structured => explored, structured
5.33 [34] Some => ?? omit [34]?
5.44 remains => remain
6.19 rational => rationale ?
6.42 artificial intelligence => Artificial Intelligence
7.12 as rational is usually is => omit is and rationale?
7.17 Knowledge graph => graphs
7.4 Semantics could support for representation purpose => ?
7.26 depending of => on

Review #2
Anonymous submitted on 01/Jul/2019
Suggestion:
Reject
Review Comment:

This paper provides a broad overview of the topic of explainable AI across AI subfields such as machine learning, planning, search, game theory, robotics, distributed AI, computer vision, knowledge representation, uncertainty, and natural language processing (enumerating the AAAI taxonomy). The authors did a good job in identifying challenges, approaches, and limitations. However, the role of knowledge graphs in these subfields, a central topic of this paper, is not convincingly established or explained. In fact, the paper does not even provide a clear description of what kind of knowledge graph the authors are referring to. Another weakness of the paper is that it attempts to cover too broad a set of areas within the suggested page limit: the paper is already 9 pages long (vs. 5 pages) and still cannot cover any specific topic in the subfields mentioned above in a comprehensive manner. For these reasons, even though the framing and structure of the paper are sound and could potentially be very valuable, the paper is better suited to a much longer survey (or monograph), given the broad and comprehensive array of topics it attempts to cover.

Review #3
By Guilin Qi submitted on 02/Jul/2019
Suggestion:
Major Revision
Review Comment:

In this paper, the authors consider a very hot topic, explainable AI (or XAI). They classify XAI methods using the major AI fields specified in the AAAI taxonomy of research fields. This work is interesting and timely, and the paper is easy to read, although there are some typos. However, there are some major issues that should be addressed to ensure publication of the paper.

First, section 2 talks about knowledge graphs (KGs) for XAI methods. However, there is little discussion of how KGs are used to enhance the explainability of existing methods. For example, in section 2.3, "Opportunity", only some questions are asked, with no clues about how KGs can be used to enhance the explainability of CV methods; and in section 2.4, it seems that KGs are only used for narrowing the search space.

Second, the key part of each subsection in section 2 is the part on "Opportunity". However, in some subsections the discussion is very brief; for example, in section 2.6 there is only one sentence. This is clearly not desirable: more insights should be given in this part. Furthermore, important references should be given to illustrate the importance of KGs for XAI in the part on "Opportunity".

Third, the authors only consider a shallow interpretation of a KG, i.e., that it is a graph. However, it is well known that a KG is not merely a graph; it involves many technologies, such as ontology techniques and information extraction techniques. How these techniques can benefit XAI should be discussed as well.

Fourth, some of the discussion is too superficial. For example, section 2.10 is about natural language processing, which contains many tasks, such as relation extraction and event extraction. Recently, KGs have been widely used to enhance these tasks and to improve the explainability of the methods for them, which are mostly based on deep learning models. More discussion should be added for this kind of work.

Some minor issues:

1. page 2, right column, line 46, limits-->limit.
2. page 3, right column, line 39, aims-->aim. Line 46, there is a "?" after "Figure".
3. page 4, left column, line 46, expose-->exposes. Right column, line 29, "semantics" is a very broad word and does not necessarily mean KG.
4. page 5, right column, line 8, what is the relationship between LS-tree and KG?
5. page 6, line 37, to-->too