Ontological challenges to Cohabitation with Self-taught Robots

Tracking #: 2261-3474

Authors: 
Stefano Borgo

Responsible editor: 
Guest Editor 10-years SWJ

Submission type: 
Other
Abstract: 
When you meet a delivery robot in a narrow street, it stops to let you pass. It was built to give you precedence. What happens if you run into a robot that was not trained by or for humans? The existence in our environment of robots which do not abide by human behavioral rules and social systems might sound odd, but it is a case we may encounter in the near future. In this paper, self-taught robots are artificial embodied agents that, thanks, for instance, to AI learning techniques, manage to survive in the environment without embracing behavioral or judgment rules given and used by humans. The paper argues that our ontological systems are not suitable to cope with artificial agents. The arguments are speculative rather than empirical, and the goal is to draw attention to new ontological challenges.
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Christoph Schlieder submitted on 23/Aug/2019
Suggestion:
Minor Revision
Review Comment:

This is a well-written analysis of some of the ontological challenges that would arise if humans could interact with robots that have not been designed and trained by humans. The scenario of self-taught robots is speculative. Stefano Borgo does not claim that such robots exist but argues that they will exist in the (near) future. The article fits well into the range of topics envisaged for the 10-years SWJ issue. It contributes to the discussion on directions of future research, although the author addresses an anticipated, not an existing problem.

The analysis starts by observing that humans and self-taught robots may have "incompatible views of reality". Stefano Borgo moves on to convince the reader that this is not just a possibility but a likely situation. His argument exploits the fact that robots do not have human embodiments. I am tempted to rephrase this in machine learning terminology. A human and a robot sharing the same set of training data would still differ in the learning mechanism and therefore process the training data with a different inductive bias. This is not an entirely new argument. However, the article explores the consequences for the acquisition of top-level ontologies and, with the notions of scenario and agent-level ontologies, gives hints on how to describe the consequences of embodiment.
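
A minimal sketch of this inductive-bias point, using two off-the-shelf scikit-learn learners as illustrative stand-ins for differently embodied agents (the choice of models and data is an assumption made for the example, not anything taken from the paper or the review):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    # Both learners receive exactly the same training data ...
    X = np.arange(0.0, 10.0, 1.0).reshape(-1, 1)
    y = 2.0 * X.ravel() + 1.0

    linear = LinearRegression().fit(X, y)      # bias: a single global linear trend
    tree = DecisionTreeRegressor().fit(X, y)   # bias: piecewise-constant regions

    # ... yet they generalize differently outside the training range:
    # the linear model extrapolates the trend, the tree repeats its nearest leaf value.
    print(linear.predict([[20.0]]))  # ~41.0
    print(tree.predict([[20.0]]))    # ~19.0

With identical "experience", the two learners disagree exactly where the data gives no direct guidance, which is the kind of divergence the embodiment argument points to.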

While the article is ready for publication in its present form, I see a few places where it could be improved by adding clarifications.

Section 1: Inter-human understanding is depicted rather optimistically, probably a result of the firm realist stance widely adopted in ontological modeling: "it is assumed that humans can understand each other", they can "switch from one ontological system to another as needed". On the other hand, the author acknowledges that humans (applied ontologists included) seem to have difficulties agreeing on a foundational ontology. I miss an explanation of why this is "a different issue".

Section 2: Self-taught robots are defined as robots that are trained "without humans having the capability or even the opportunity to control ... their evolution". It is not entirely clear what this includes. Is the adoption of a particular neural network architecture, say, a specific CNN, a form of control over the evolution? Shouldn't one require that self-robots are not just self-taught but also self-engineered? To move the scenario even further towards science fiction: why couldn't this include the possibility of self-designing types of sensors that have not yet been invented by human engineers?

Section 3: Do the agent-level ontologies imply a change of stance from realism towards some kind of robot conceptualism? Do ALOs differ from the characterization given earlier in the text (section 1) of Semantic Web ontologies that describe "views of circumscribed interest, not general claims about how reality is"?

Review #2
By Boyan Brodaric submitted on 09/Sep/2019
Suggestion:
Minor Revision
Review Comment:

This speculative paper explores an interesting supposition: that humans and some robots – i.e. those self-taught and not trained by humans – will likely have incompatible ontologies. This is because of (1) different hardware (i.e. computational vs biological), (2) different experience of the environment due to different sensors, and (3) different understandings of the experience due to diverse processing capabilities. Altogether, this will make it difficult for humans and such robots to interact effectively. The problem is not one of mapping knowledge structures, which is well-explored, but a question of the nature of the ontology itself as hosted by the robot. Radical differences would then require new framings to enable mappings and fruitful interactions. This further requires exploration of questions not only about the nature of such ontologies, and how to elicit them from such robots, but also about the nature of human-robot interactions and the social systems in which they can co-exist commensurably.

The paper is well-written, and raises timely, interesting, and important questions, especially in light of the rapid evolution of robotic agents and their use of ontologies. It is therefore suitable for publication in the journal and its anniversary special issue.

My main comment about the substance of the paper concerns the initial two examples of ontology difference. The first seems to be a partition of existing top-level ontologies (focussed on endurants/continuants), and the second is a possible extension (as noted in the paper). These examples therefore do not seem incompatible with existing top-level ontologies, and lead me to think it might be useful to cast the differences on a spectrum from minor to major. It might also be worth mentioning that such differences also imply incompatibilities between robot ontologies, which will affect how robots interact with each other.

As a minor comment, the following requires more explanation (p5, l50-51, left): “… we would need to fix a non-influenced behavior to use as a basis for evaluating actual behaviors”.

Apart from this, below are some minor optional language changes to consider, shown in square brackets:

1. p1, title: consider “Ontological challenges [in] Cohabit[ing] with Self-taught Robots” or “Ontological challenges to Cohabit[ation] with Self-taught Robots”
2. p1, l17, abstract: might sound odd[,] but is a case
3. p1, l18, abstract: In this paper[,] self-taught robots
4. p1, l34, left: At that time[,] the information
5. p1, l43, left: has led [who?] to imagine general purpose agents
6. p1, l45, left, please reword: This means to imagine…
7. p1, l31, right, match tense: behaviors [became] clear
8. p2, l12-14, left, suggest rewording to: What pushed ontologists to believe in the possibility of information integration…
9. p2, l27, left: In other areas[,] the
10. p2, l9, right: different understanding[s]
11. p2, l42-43, right: an impact on human social [behavior], and in particular on conventions [7], [that] at the moment…
12. p2, l51, right: and this pushes [one] to imagine a
13. p3, l2-3, left: between humans and robot[s],
14. p3, l8, left: conditions, [and] adapting
15. p3, l26, left: the building to empty[,] etc.
16. p3, l43-45, left: consider [behavior] instead of “behaviors” throughout the paper
17. p3, l10, right, same as above, [hardware] instead of hardwares
18. p3, l42, right: in unique ways[,] but since
19. p4, l4, left: exchange, collaboration[,] or
20. p4, l39-40, left: relevant in [human] TLOs, for instance[,] properties
21. p4, l51, left: the DOLCE ontology [11] [could be extended] to include
22. p4, l26, right: as a TLO in today[’]s terms
23. p4, l35, right: develop TLOs suitable [for] self-robot[s], perhaps
24. p4, l38-39, right: that [enable understanding of] different ALOs
25. p4, l49-50, right: ontological issues we [are likely to] face
26. p5, l14-15, left: humans and self-robot[s]
27. p5, l17-18, left: for instance [as] it does so for roboticists
28. p5, l23-24, left: seems to me that [such] purposefulness and reciprocity
29. p5, l30, right: comprise humans and [self]-robots
30. p5, l38-39, right: we can [imagine] a classification
31. p5, l42-43, right: we can also [imagine] social systems
32. p6, l2, left: and [self]-robots are
33. p6, l16, left: human consent or [not]
34. p6, l19, left: [self-robots]