Review Comment:
Thanks for this very interesting paper. The discussion of explainable AI across such a broad variety of subfields, rather than following the popular trend of focusing on NN explainability, is a pleasure to read, especially since each field is well presented and discussed, which is quite an achievement given the space constraints.
One potential addition in the field of traditional ML could be the consideration of distant supervision and semi-supervised learning, which are as important as supervised, unsupervised, and reinforcement learning. In the same field, I would also rather talk about random forests than decision forests, since this is the more conventional naming for the related algorithms (e.g. "decision trees or random forests"). Or maybe there is some class of algorithms I am not aware of. Several of the opportunities in this section were the reasons for introducing knowledge graph embeddings and Graph Convolutional Networks. Even though this section explicitly excludes NN approaches, maybe it would be worth adding knowledge graph encodings/representations other than, or alternative to, KG embeddings? One further opportunity for ML could be the injection of Knowledge Graph representations into the actual training process, as is done increasingly for NN approaches. Maybe this could also be an option for traditional ML?
For NN-based KG approaches, it might be worth including recent neural-symbolic reasoning endeavors, such as Makni, B., & Hendler, J. (forthcoming). Deep Learning for Noise-tolerant RDFS Reasoning. Special Issue of the Semantic Web Journal on Semantic Deep Learning, which represent KGs as "graph words". It might also be an idea to reference Pascal's vision paper "Neural-Symbolic Integration and the Semantic Web".
For computer vision, the same special issue also features a paper on combining detection with reasoning in a loop - maybe interesting for "Approaches"? Alirezaie, M., Längkvist, M., Sioutis, M., & Loutfi, A. (2019). Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation. Special Issue of the Semantic Web Journal on Semantic Deep Learning. The role of the footnote here is not entirely clear to me - is it the source of these ideas?
Robotics: potentially interesting recent approaches (the first actually addresses the named limitation):
Pomarlan, M., Porzel, R., Bateman, J., & Malaka, R. (2018, November). From sensors to sense: Integrated heterogeneous ontologies for Natural Language Generation. In Proceedings of the Workshop on NLG for Human–Robot Interaction (pp. 17-21).
Patki, S., Daniele, A. F., Walter, M. R., & Howard, T. M. (2019). Inferring compact representations for efficient natural language understanding of robot instructions. arXiv preprint arXiv:1903.09243.
For multi-agent system approaches, formalizations of agent interactions by Marco Schorlemmer et al. (e.g. reference below) might be interesting:
Chocron, P., & Schorlemmer, M. (2018, July). Inferring commitment semantics in multi-agent interactions. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1150-1158). International Foundation for Autonomous Agents and Multiagent Systems.
The NN-based NLP opportunities are largely the same as the NN opportunities, since NN architectures are currently the predominant architectures in NLP. A tree-like structure has been introduced to NLP tasks in the form of dependency trees:
Socher, R., Lin, C. C., Manning, C., & Ng, A. Y. (2011). Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11) (pp. 129-136).
Also less towards explainability and more towards NLP+KG, but potentially interesting:
Petrucci, G., Rospocher, M., & Ghidini, C. (2018). Expressive ontology learning as neural machine translation. Journal of Web Semantics, 52, 66-82.
Some recommendations for cross-references within the special issue:
- Neural-Symbolic Integration and the Semantic Web http://www.semantic-web-journal.net/content/neural-symbolic-integration-...
- Leveraging Knowledge Graphs for Big Data Integration http://www.semantic-web-journal.net/content/leveraging-knowledge-graphs-...
- Machine Learning for the Semantic Web: Lessons Learnt and Next Research Directions http://www.semantic-web-journal.net/content/machine-learning-semantic-we...
- Neural Language Models for the Multilingual, Transcultural, and Multimodal Semantic Web http://www.semantic-web-journal.net/content/neural-language-models-multi...
Minor comments - general:
- Capitalization of research fields and applications should be made consistent. For instance: Constraint satisfaction and Search, Constraint Satisfaction and Search, constraint satisfaction and search. Additionally, Semantic Web as a term is generally always camel-cased, as is Artificial Intelligence.
- quotation marks are inconsistent; in LaTeX the easiest is to use `` and ''
Minor comments - in order of appearance:
1.22 other areas such as => other areas, such as
1.23 distributed AI i.e. => distributed AI, i.e., (no comma before i.e. in several places)
1.24 is now referring => now refers
1.39 is addressing intelligence => and addresses intelligence
1.45 of the AI => of AI
1.48 general artificial intelligence i.e., => general artificial intelligence, i.e.,
1.34 more applicability dimension => more applied dimension
2.2 features attribution => feature attribution
2.32 from which emerged ... systems => from which ... systems emerged
2.35 most successful than others => more
2.40 a few work => works
2.41 semantic Web => Semantic Web (repeatedly)
2.41 linked data => Linked Data
2.12 are you sure you mean arbitral limits? what would that be for a taxonomy?
2.17 Neural Netwok => Networks
2.30 a mathematical models => omit a
2.37 explicit their rational => rationale
2.46 approaches limits => limit
2.50 features importance => feature
3.30 better representation of data => representations; an ML model
3.44 with high number => with a high number
3.46 which fit better => which better fit images and texts
3.48 are strong focus => are a strong focus
3.26 [20] such as => [20], such as
3.37 architectures needs => need
3.39 aims => aim
3.46 Figure ??
3.47 a central roles => role
4.30 is relying => relies
4.32 reconstruction, visual => reconstruction to visual
4.38 referred as => referred to as
4.42 many variant => variants
4.46 salience map => maps
4.51 However integrating => However, integrating
5.6 a NP => an NP
5.10 [26], [27] => [26, 27]
5.2 randomization feature values coalition => ? randomization of?
5.4 As recently explored structured => explored, structured
5.33 [34] Some => ?? omit [34]?
5.44 remains => remain
6.19 rational => rationale ?
6.42 artificial intelligence => Artificial Intelligence
7.12 as rational is usually is => omit the second "is"; rational => rationale?
7.17 Knowledge graph => graphs
7.4 Semantics could support for representation purpose => ?
7.26 depending of => on