Call for Papers: Special Issue on The Role of Ontologies and Knowledge in Explainable AI
Explainable AI (XAI) has been identified as a key factor for developing trustworthy AI systems. The reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness and to enable diagnostics for preventing bias, unfairness, and discrimination, as well as to increase all users' trust in why and how decisions are made.
The interpretability of AI systems has been discussed since the mid-1980s, but it has only recently become an active research focus in the computer science community, driven by advances in big data and by data protection regulations governing the development of AI systems, such as the GDPR. For example, under the GDPR, citizens have a legal right to an explanation of algorithmic decisions that may affect them (see, e.g., Article 22). This policy highlights the pressing importance of transparency and interpretability in algorithm design.
XAI focuses on developing new approaches to explaining black-box models, aiming at good explainability without sacrificing system performance. One typical approach is the extraction of local and global post-hoc explanations. Other approaches are based on hybrid or neuro-symbolic systems and advocate a tight integration between symbolic and non-symbolic knowledge, e.g., by combining symbolic and statistical methods of reasoning.
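As an illustration of the post-hoc idea (not part of the call itself), the following is a minimal sketch, assuming scikit-learn is available, of one common global post-hoc technique: a shallow decision tree is trained as a transparent surrogate of a black-box classifier, and its fidelity to the black box is measured. The dataset and model choices are purely illustrative.

```python
# Minimal sketch: a global surrogate explanation of a black-box model.
# Illustrative only; real XAI pipelines also use local methods
# (e.g., LIME or SHAP) alongside global surrogates.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Black-box model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: a transparent model trained on the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate tracks the black box's predictions.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The surrogate tree itself serves as a human-readable global explanation.
print(export_text(surrogate, feature_names=list(load_iris().feature_names)))
```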
The construction of hybrid systems is widely seen as one of the grand challenges facing AI today. However, there is no consensus regarding how to achieve this, with proposed techniques in the literature ranging from knowledge extraction and tensor logic to inductive logic programming and other approaches. Knowledge representation---in its many incarnations---is a key asset to enact hybrid systems, and it can pave the way towards the creation of transparent and human-understandable intelligent systems.
This special issue will feature contributions dedicated to the role played by knowledge bases, ontologies, and knowledge graphs in Explainable Artificial Intelligence (XAI), in particular with regard to building trustworthy and explainable decision support systems. Knowledge representation plays a key role in XAI. Linking explanations to structured knowledge, for instance in the form of ontologies, brings multiple advantages. Not only does it enrich explanations (or the elements therein) with semantic information, thus facilitating evaluation and effective knowledge transmission to users, but it also creates the potential to customise the levels of specificity and generality of explanations to specific user profiles or audiences. However, linking explanations, structured knowledge, and sub-symbolic/statistical approaches raises a multitude of technical challenges from the reasoning perspective, both in terms of scalability and in terms of incorporating non-classical reasoning approaches, such as defeasibility, methods from argumentation, or counterfactuals, to name just a few.
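To make the linking idea concrete, here is a minimal sketch, assuming rdflib and a hypothetical mini-ontology (the example.org namespace and class names are illustrative assumptions, not a standard vocabulary), of how an explanation element can be generalised along rdfs:subClassOf so that its level of specificity matches the audience.

```python
# Minimal sketch: adapting the generality of an explanation element
# (a model feature) to the user by walking up an ontology's class hierarchy.
# The ontology below is a made-up two-level example for illustration.
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()

# Tiny domain ontology: serum creatinine is a kidney-function marker,
# which in turn is a laboratory finding.
g.add((EX.SerumCreatinine, RDFS.subClassOf, EX.KidneyFunctionMarker))
g.add((EX.KidneyFunctionMarker, RDFS.subClassOf, EX.LaboratoryFinding))

def generalise(concept, levels=1):
    """Walk up rdfs:subClassOf to present an explanation more generally."""
    for _ in range(levels):
        parents = list(g.objects(concept, RDFS.subClassOf))
        if not parents:
            break
        concept = parents[0]
    return concept

# Expert users see the specific feature; lay users see a broader concept.
print(generalise(EX.SerumCreatinine, levels=0))  # ...#SerumCreatinine
print(generalise(EX.SerumCreatinine, levels=1))  # ...#KidneyFunctionMarker
```

The same mechanism, scaled up to a real ontology, is one way in which structured knowledge can support the audience-dependent customisation of explanations mentioned above.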
Topics relevant to this special issue include, but are not limited to, the following:
- Representing and storing the Web of Data
- Cognitive computational systems integrating machine learning and automated reasoning
- Knowledge representation and reasoning in machine learning and deep learning
- Knowledge extraction and distillation from neural and statistical learning models
- Representation and refinement of symbolic knowledge by artificial neural networks
- Explanation formats exploiting domain knowledge
- Visual exploratory tools for semantic explanations
- Knowledge representation for human-centric explanations
- Usability and acceptance of knowledge-enhanced semantic explanations
- Evaluation of transparency and interpretability of AI Systems
- Applications of ontologies for explainability and trustworthiness in specific domains
- Factual and counterfactual explanations
- Causal thinking, reasoning and modeling
- Cognitive science and XAI
- Open source software for XAI
- XAI applications in finance, medicine, health sciences, etc.
Deadline
- Submission deadline: 15 February 2022 (extended!). Papers submitted before the deadline will be reviewed upon receipt.
Author Guidelines
Submissions shall be made through the Semantic Web journal website at http://www.semantic-web-journal.net. Prospective authors must take notice of the submission guidelines posted at http://www.semantic-web-journal.net/authors.
We welcome four main types of submissions: (i) full research papers, (ii) reports on tools and systems, (iii) application reports, and (iv) survey articles. The description of the submission types is posted at http://www.semantic-web-journal.net/authors#types. While there is no upper limit, paper length must be justified by content.
Note that you need to request an account on the website for submitting a paper. Please indicate in the cover letter that it is for the "The Role of Ontologies and Knowledge in Explainable AI" special issue. All manuscripts will be reviewed based on the SWJ open and transparent review policy and will be made available online during the review process.
Also note that the Semantic Web journal is open access.
Finally, please note that submissions must comply with the journal’s Open Science Data requirements, which are detailed in the corresponding blog post.
Guest editors
The guest editors can be reached at ontologies-knowledge-in-xai-swj@googlegroups.com.
Roberto Confalonieri, Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
Oliver Kutz, Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
Diego Calvanese, Department of Computing Science, Umeå University, Sweden, and Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
Jose M. Alonso, University of Santiago de Compostela, CiTIUS, Spain
Shang-Ming Zhou, University of Plymouth, Faculty of Health, UK
Guest Editorial Board
Alberto J. Bugarín Diz, University of Santiago de Compostela, CiTIUS, Spain
Franz Baader, Technische Universität Dresden, Fakultät Informatik, Institut für Theoretische Informatik, Germany
Bart Bogaerts, Department of Computer Science, Vrije Universiteit Brussel, Belgium
Loris Bozzato, Data and Knowledge Management Research Unit (DKM), Fondazione Bruno Kessler, Italy
Giovanna Castellano, University of Bari, Italy
Oscar Cordón, University of Granada, Spain
Ivan Donadello, Faculty of Computer Science, Free University of Bozen-Bolzano, Italy
Pietro Ducange, University of Pisa, Italy
Shaker El-Sappagh, University of Santiago de Compostela, CiTIUS, Spain
Janna Hastings, University College London, UK
Andreas Holzinger, Institute for Medical Informatics / Statistics, Medical University Graz, Austria
Uzay Kaymak, Eindhoven University of Technology, The Netherlands
Vladik Kreinovich, University of Texas at El Paso, U.S.A.
Till Mossakowski, Faculty of Computer Science, University of Magdeburg, Germany
Witold Pedrycz, University of Alberta, Canada
Rafael Peñaloza, Dip. di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Italy
Edy Portmann, Human-IST Institute, Switzerland
Marek Reformat, Faculty of Engineering, Electrical & Computer Engineering Dept, University of Alberta, Canada
Tajul Rosli Razak, Universiti Teknologi MARA, Malaysia
Clemente Rubio-Manzano, University of Bio-Bio, Chile
Daniel Sánchez, University of Granada, Spain
Stefan Schlobach, Faculty of Sciences, Vrije Universiteit Amsterdam, The Netherlands
Carles Sierra, Artificial Intelligence Research Institute (IIIA-CSIC), Spain
Jose Manuel Soto-Hidalgo, University of Cordoba, Spain
Luis Terán, Human-IST Institute, Switzerland
Nicolas Troquard, Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
Anna Wilbik, Maastricht University, The Netherlands