Review Comment:
This speculative paper explores an interesting supposition: that humans and some robots (i.e., those self-taught and not trained by humans) will likely have incompatible ontologies. This is because of (1) different hardware (i.e., computational vs. biological), (2) different experience of the environment due to different sensors, and (3) different understandings of that experience due to diverse processing capabilities. Altogether, this will make it difficult for humans and such robots to interact effectively. The problem is not one of mapping knowledge structures, which is well explored, but a question of the nature of the ontology itself as hosted by the robot. Radical differences would then require new framings to enable mappings and fruitful interactions. This further requires exploration of questions not only about the nature of such ontologies, and how to elicit them from such robots, but also about the nature of human-robot interactions and the social systems in which humans and robots can co-exist commensurably.
The paper is well-written, and raises timely, interesting, and important questions, especially in light of the rapid evolution of robotic agents and their use of ontologies. It is therefore suitable for publication in the journal and its anniversary special issue.
My main comment about the substance of the paper concerns the initial two examples of ontology difference. The first seems to be a partition of existing top-level ontologies (focused on endurants/continuants), and the second is a possible extension (as noted in the paper). These examples therefore do not seem incompatible with existing top-level ontologies and lead me to think it might be useful to cast the differences on a spectrum from minor to major. It might also be worth mentioning that such differences also imply incompatibilities between robot ontologies, which will affect how robots interact with each other.
As a minor comment, the following requires more explanation (p5, l50-51, left): “… we would need to fix a non-influenced behavior to use as a basis for evaluating actual behaviors”.
Apart from this, below are some minor optional language changes to consider, shown in square brackets:
1. p1, title: consider “Ontological challenges [in] Cohabit[ing] with Self-taught Robots” or “Ontological challenges to Cohabit[ation] with Self-taught Robots”
2. p1, l17, abstract: might sound odd[,] but is a case
3. p1, l18, abstract: In this paper[,] self-taught robots
4. p1, l34, left: At that time[,] the information
5. p1, l43, left: has led [who?] to imagine general purpose agents
6. p1, l45, left, please reword: This means to imagine…
7. p1, l31, right, match tense: behaviors [became] clear
8. p2, l12-14, left, suggest rewording to: What pushed ontologists to believe in the possibility of information integration…
9. p2, l27, left: In other areas[,] the
10. p2, l9, right: different understanding[s]
11. p2, l42-43, right: an impact on human social [behavior], and in particular on conventions [7], [that] at the moment…
12. p2, l51, right: and this pushes [one] to imagine a
13. p3, l2-3, left: between humans and robot[s],
14. p3, l8, left: conditions, [and] adapting
15. p3, l26, left: the building to empty[,] etc.
16. p3, l43-45, left: consider [behavior] instead of “behaviors” throughout the paper
17. p3, l10, right, same as above: [hardware] instead of “hardwares”
18. p3, l42, right: in unique ways[,] but since
19. p4, l4, left: exchange, collaboration[,] or
20. p4, l39-40, left: relevant in [human] TLOs, for instance[,] properties
21. p4, l51, left: the DOLCE ontology [11] [could be extended] to include
22. p4, l26, right: as a TLO in today[’]s terms
23. p4, l35, right: develop TLOs suitable [for] self-robot[s], perhaps
24. p4, l38-39, right: that [enable understanding of] different ALOs
25. p4, l49-50, right: ontological issues we [are likely to] face
26. p5, l14-15, left: humans and self-robot[s]
27. p5, l17-18, left: for instance [as] it does so for roboticists
28. p5, l23-24, left: seems to me that [such] purposefulness and reciprocity
29. p5, l30, right: comprise humans and [self]-robots
30. p5, l38-39, right: we can [imagine] a classification
31. p5, l42-43, right: we can also [imagine] social systems
32. p6, l2, left: and [self]-robots are
33. p6, l16, left: human consent or [not]
34. p6, l19, left: [self-robots]