Review Comment:
The current study makes use of ontologies to ensure the accountability of Machine Learning systems. The authors show the feasibility of the proposed approach in a real-world scenario.
*Section (Abstract)*
The abstract of the article is very generic and does not give a precise overview of the proposed approach or of the paper's contribution.
*Section (Introduction)*
The authors say that “Even though the maturity of the Artificial Intelligence (AI) technologies is rather advanced nowadays, according to McKinsey1, its adoption, deployment and application is not as wide as it could be expected”
I would like a more concrete view of this claim, supported by examples. Space is not an issue here, so more detail could be provided.
The authors write:
“According to [8], during the past decade, there has been an increase in AI systems based on black-box models, that is, models that hide their internal logic to the user.”
The authors could be a bit more precise here. Moreover, the models used in this study are very basic, so how is explainability a problem in the authors' specific case? The authors should first convince the reader of the problem and then state their contribution.
In the introduction, the authors state that the methods proposed so far are post-hoc. Whether this is good or bad is not clarified, especially since the proposed approach itself performs some post-hoc explanation (and even in that case the explanations are not actually generated).
*Section (Related Work)*
The related work is not complete w.r.t. the explainability aspect of Knowledge Graphs. For example, the authors could cite:
Ilaria Tiddi, Freddy Lécué, Pascal Hitzler: Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications, and Challenges. Studies on the Semantic Web 47, IOS Press 2020
Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web, 2020
*Section 3 (On the Role of Ontologies)*
This section covers the basic concepts of ontologies, ontology development, and ontology reuse. It is unclear to me why these concepts are discussed at this length in a journal at the core of the Semantic Web community. The authors should motivate why this material is included here.
*Section 4 (Making Machine Learning Accountable)*
The framework given in Fig. 1 is a general one. Its added value and novelty, as well as the corresponding explanation of the framework, are not very clear to me.
However, the interesting part starts in Section 4.2.1, where the authors concretely introduce the existing ontologies used for the use case described later on.
The authors' actual contribution starts in Section 5 (page 8).
- First of all, the authors should define concretely what accountability means in their scenario.
- How do they plan on achieving a trustworthy system?
- What are the contributions of their approach?
My overall impression of the proposed approach is the following: there is data for a particular scenario; the authors use KNN to make some predictions (explained mostly from an implementation point of view); the output is then fed into the existing EEPS ontology, which is then queryable. I am still wondering how the goal discussed at the beginning of the study is met here, and the contribution and novelty of the approach remain unclear to me.
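To make my reading of the pipeline concrete (and so the authors can correct me if I have misunderstood it), here is a minimal sketch of what I understand is happening, using a toy dataset and placeholder terms: the `eeps:` namespace, the `Prediction` class and the `hasPredictedValue` property below are hypothetical stand-ins, not the actual EEPS vocabulary used in the paper.

```python
# Sketch: KNN prediction -> ontology population -> SPARQL querying.
from sklearn.neighbors import KNeighborsClassifier
from rdflib import Graph, Namespace, Literal, RDF, URIRef

# Toy training data and a KNN prediction (stand-in for the authors' data).
X_train = [[0.1, 1.0], [0.2, 0.9], [0.9, 0.1], [1.0, 0.2]]
y_train = [0, 0, 1, 1]
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
prediction = clf.predict([[0.85, 0.15]])[0]

# Feed the output into an ontology-backed graph so it becomes queryable.
EEPS = Namespace("http://example.org/eeps#")  # placeholder namespace
g = Graph()
pred_node = URIRef("http://example.org/run/1/prediction/1")
g.add((pred_node, RDF.type, EEPS.Prediction))
g.add((pred_node, EEPS.hasPredictedValue, Literal(int(prediction))))

# Query the recorded prediction back via SPARQL.
results = g.query("""
    PREFIX eeps: <http://example.org/eeps#>
    SELECT ?p ?v WHERE { ?p a eeps:Prediction ; eeps:hasPredictedValue ?v . }
""")
for row in results:
    print(row.p, row.v)
```

If this sketch roughly matches the approach, then the accountability claim rests entirely on recording and querying predictions, which the authors should argue for explicitly.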