Review Comment:
This article (i) introduces an ontology of AI ethical principles (AIPO) to allow the creation of a dynamic knowledge graph for user-defined queries on this subject, and (ii) promotes the integration of Ethics as a body of knowledge into the Web of Linked Data. This is a timely and quite appropriate research subject, and ontologies can certainly play a key role in ordering, managing, and handling ethical principles for AI.
In this review, I will focus on philosophical grounds and the knowledge acquisition process.
Summary: (1) Quality and relevance of the described ontology: (i) the ontology is relevant; (ii) its quality could be improved; (iii) biases should be explicitly addressed; (iv) tests and walkthroughs are insufficient and should be carried out, because a partial proof of concept on the OECD principles is not enough to validate the ontology; (v) metrics should have been described and provided. (2) Illustration, clarity, and readability of the paper: (i) the introduction, first section, methodology, and conclusions should be rewritten; (ii) the theoretical assumptions should be explicitly assessed and explained, including the ontology's intended uses. (3) Other dimensions are set aside in this review. What can be found at https://github.com/AndrewHarrison/AIPO shows promising preliminary work, but it lacks development, evaluation, and testing. I would have liked to discuss the ethical core concepts, but I could not find them, because the ontology is more document-based than content-based. It is somewhat surprising that no ethicist was involved in the ontology-building process. Ethics engineering is an emerging field, and the authors could have benefited from a prior conceptual analysis, which is missing in this paper.
The paper contains good ideas and a promising start, but the knowledge acquisition process should be better explained, and the ontology should be validated more extensively. I encourage the authors to pursue their work and to resubmit once it is complete. Another suggestion would be to submit and present it first at a major Semantic Web conference.
Some observations follow:
1. The authors write: “These principle (ethics) sets serve as non-legislative policy instruments also known as soft-law”. p. 1.
Ethics is not a subpart of soft law; it stands on its own as a separate field of research, embedded into all regulatory systems. Ethical instruments and soft-law instruments (standards, protocols, agreements, recommendations...) are not the same. ‘Soft law’ is a term that originated some thirty years ago in international (customary) and transnational law to refer to agreements, commitments, and relationships deemed horizontal and ‘non-binding’, in contrast to the instruments covered by ‘hard law’, which relate to jurisdictions and the vertical power of the nation state. Ethics may (or may not) be infused into both soft and hard law. When ethics is not taken into account by public powers (parliaments, courts, administrations), i.e. when there is no good governance, legal and socio-legal scholars tend to speak of ‘state law’ or the ‘unrule of law’, since ethics is taken as an essential component of the broad political concept of the rule of law (as opposed to tyranny and dictatorship as political forms). There is a substantial literature on this doctrinal construct, e.g. Shaffer, G.C. and Pollack, M.A., 2009. ‘Hard vs. soft law: Alternatives, complements, and antagonists in international governance’. Minn. L. Rev., 94, p. 706.
2. […] “the principles set need to be re-engineered to make them human consumable in volume and format”. What does it mean to ‘consume’ ethical principles? I understand that this refers to semantics, opposing machine to human processing, i.e. consumable by machines. It is similar to making rules ‘consumable’, i.e. available and ready to be used. However, at a more basic level, making documents accessible does not mean making their content consumable (by humans). More precision is needed here, because there is a long tradition in practical philosophy against identifying Ethics “with one single human concern or with one single set of concepts”. Endorsing Dewey’s perspective on Ethics, Putnam asserted that “the primary aim of the ethicist should not be to produce a ‘system’ but to contribute to the solution of practical problems—as indeed, Aristotle already knew. Although we can often be guided by universal principles (at least they are typically stated as if they were universal and exceptionless) in the solution of practical problems, few real problems can be solved as treating them as mere instances of a universal generalisation, and few practical problems are such that when we have resolved them—and Dewey held that the solution to a problem is always provisional and fallible—we are rarely able to express what we learned in the course of our encounter with a ‘problematic situation’ in the form of a universal generalisation that can be unproblematically applied to other situations.” Ethics without Ontology (2004, p. 5). I could have quoted other practical philosophies from a different tradition that share the same caution against the ‘consumable’ usability and reusability of ethical principles as such, e.g. Leszek Kolakowski’s Ethics without a Moral Code (1971). Strictly speaking, Ethics cannot be completely ‘known’; it is produced and enhanced through collective agency (human or artificial).
3. “The consumability and use of these AI principles is important to ensure public accessibility and accountability, shared understanding between actors to prevent AI arms races, assistance in the design and deployment of AI systems, and finally to help drive and shape the presumably forthcoming hard-law and regulation of AI.”
These are pêle-mêle aims of different natures (functional, technical, political, legal…), which does not help to elucidate what ‘consumability’ of ethical principles means. In this field, it is worth noting that ontology building has not only a technological but also a moral and political dimension, and this should be explicitly acknowledged and clarified from the beginning. Several authors have noted that the intermediate activity of ontology engineering is an inherently moral activity, i.e. one that produces moral effects. Cf. Anticoli, L. and Toppano, E., 2013. ‘Technological mediation of ontologies: the need for tools to help designers in materializing ethics’. International Journal of Philosophy Study, 1(3), pp. 23-31. I would also recommend looking at the two reports on AI ethical and legal governance carried out by AI4People (Atomium Foundation): ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (https://link.springer.com/article/10.1007/s11023-018-9482-5) and the ‘AI4People Report on Good AI Governance’ (https://www.eismd.eu/wp-content/uploads/2019/11/AI4Peoples-Report-on-Goo...).
4. In their first research question, the authors equate conceptual meaning with ethical knowledge. In the second, they link this ‘consumable’ ontology to its use and impact. They assume that extracting ethical concepts from texts and turning them into a readable, ‘consumable’ format will make them more effective, or at least more suitable for regulatory purposes. This position is not that clear: information retrieval should be differentiated from regulatory usage. Ethics has been (and still is) embedded into legal texts in many ways, with the aid of specific vocabularies showing a huge variability of meanings and philosophical roots (e.g. le bon père de famille in the Napoleonic Civil Code, extended to the Italian, French, German… codes in the 19th c.). In the Common law tradition, the implementation of legal schemes rests on constitutional checks-and-balances principles based on proportionality that are not always codified (e.g. the UK). Does the ethical ontology also cover these vocabularies and the ‘fundamental legal concepts’ expressing ethical stances that pervade all regulatory bodies (including policies and ISO standards)? In the article, Ethics is understood not as behaviour or instances of agency but as a set of separate concepts expressing principles and values for the use of AI. This is a limited understanding of the field, and I am afraid this limitation is reflected in AIPO, the ontology for AI principles. Its philosophical grounds and possible usages and impact should be specified more precisely. This goes back to the knowledge acquisition process used to build the ontology, which shows an expert-driven rather than a data-driven methodology.
5. The methodology is divided into three phases: “Using the principle of a life-cycle from software engineering, ontology engineering can be used and broken into the phases of requirements analysis, ontology creation, and ontology assurance”. The authors did not start with competency questions (which would have provided the backbone of the ontology) but went directly to the requirements, using primary sources: “The primary source of AI principle documents is not academic articles, but rather documents produced by a range of different actors, then published on websites directly or as PDFs available for download from websites. Hence, we did not use academic databases as the predominant source to retrieve the documents, but rather search engines and news articles, along with principle sets previously collected by the authors during prior research, as well as the principle sets that were referred to in the secondary systematic studies.” But no further detail, data, or metrics are offered about the performance and iterative cycles of the knowledge acquisition process (which documents, how many, from which sources, which metrics were applied, etc.). They built the ontology from scratch, which means in practice that they were driven by the analysis of the seven articles on AI principles that they had chosen as representative (summarised in the previous section as ‘related work’). These seven academic papers are also quite different from one another in scope, intention, and assumptions. Valuable as they are, I would not assume that they can provide the grounds for an ontology of ethics and AI; they were written with a different purpose in mind.
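To make the point about competency questions concrete: a competency question fixes, before modelling begins, the queries the ontology must be able to answer, and then serves as a test against the populated model. The sketch below illustrates this with plain Python triples; the class instances and the ‘endorses’ relation are hypothetical illustrations, not taken from AIPO.

```python
# Minimal sketch of competency-question-driven validation for a
# principles ontology. The instance data and relation name ("endorses")
# are hypothetical, not drawn from AIPO itself.

# Toy set of (subject, predicate, object) triples linking principle
# documents to the principles they endorse.
triples = {
    ("OECD_Principles", "endorses", "transparency"),
    ("OECD_Principles", "endorses", "accountability"),
    ("EC_HLEG_Guidelines", "endorses", "transparency"),
    ("EC_HLEG_Guidelines", "endorses", "human_oversight"),
}

def documents_endorsing(principle):
    """CQ1: Which documents endorse a given principle?"""
    return sorted(s for (s, p, o) in triples
                  if p == "endorses" and o == principle)

def shared_principles(doc_a, doc_b):
    """CQ2: Which principles do two documents share?"""
    endorsed = lambda d: {o for (s, p, o) in triples
                          if s == d and p == "endorses"}
    return sorted(endorsed(doc_a) & endorsed(doc_b))

print(documents_endorsing("transparency"))
# → ['EC_HLEG_Guidelines', 'OECD_Principles']
print(shared_principles("OECD_Principles", "EC_HLEG_Guidelines"))
# → ['transparency']
```

In a real ontology pipeline each competency question would be a SPARQL query over the RDF graph rather than a Python function, but the validation logic is the same: an ontology passes when every agreed question is answerable from it.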
6. The authors acknowledge that “ontology mapping and integration was done manually by the authors without the use of automated tools.” Why not? A standard way to work out the production and integration of knowledge in these phases is to combine qualitative and quantitative methods (clustering, probabilistic topic models, latent semantic analysis, etc.). If primary sources are used, the absence of quantitative analysis must be justified. Otherwise, the authors rely solely on their own reading and experience, which does not minimise the risk of being biased by their own values (the so-called ‘ideological’ or ‘academic’ bias).
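Even the simplest of the quantitative methods mentioned above would give the manual mapping an independent check. The sketch below, using only the standard library, computes term-frequency cosine similarity between documents, which is the first step of clustering or latent semantic analysis; the three-document corpus is a toy stand-in for the collected principle sets.

```python
import math
from collections import Counter

# Toy corpus standing in for the collected AI principle documents;
# real input would be the full text of each principle set.
docs = {
    "doc_a": "transparency accountability fairness transparency",
    "doc_b": "transparency accountability human oversight",
    "doc_c": "privacy security robustness safety",
}

def tf(text):
    """Term-frequency vector of a whitespace-tokenised document."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

vecs = {name: tf(text) for name, text in docs.items()}
sim_ab = cosine(vecs["doc_a"], vecs["doc_b"])
sim_ac = cosine(vecs["doc_a"], vecs["doc_c"])
print(f"a~b: {sim_ab:.2f}  a~c: {sim_ac:.2f}")
# doc_a clusters with doc_b (shared vocabulary), not with doc_c
```

Pairwise similarities like these feed directly into hierarchical clustering or topic modelling, and the resulting groupings would let the authors corroborate, or correct, the concept mappings they produced by hand.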
7. The last sections of the article, especially the conclusions, do not match what is normally expected from ‘conclusions and further work’. There is no need to convince SWJ readers of the feasibility and benefits of a knowledge graph, nor to add information not previously provided. These sections usually summarise the findings (against the advanced hypotheses) and the results of the validation tests.