Preventing Interoperability Problems Instead of Solving Them
Review 1 by Martin Raubal:
The paper tackles an important problem and suggests collaborative development of open source vocabularies. I'm just not sure how feasible this approach is. On the other hand, this is also meant to be a vision paper, so that's perfectly fine. The whole approach reminds me of developing standards that everyone needs to follow, but as we all know, that is not easy. How will you ensure that everyone complies? The same holds for ontology development: it seems everyone wants to develop their own spatial ontology (for example).
I would also like the authors to address the following problem: what happens if standards change? How can you take care of that? Will you have to change the vocabulary accordingly? How will you keep up to date on these issues? Also: what happens when new concepts are defined?
The whole approach fits perfectly with the idea of a data infrastructure (see all the recent efforts, and problems, of establishing spatial data infrastructures (SDIs)).
I really like the view that vocabulary work is as much a social process as a technical one. It would be great if you could expand on that a bit; it is really important and often forgotten!
Review 2 by Giancarlo Guizzardi:
The main claim of the paper, namely, that "instead of solving interoperability problems we should make a big effort to avoid them", is certainly a very important research goal to be pursued. For this reason, I am happy to see a discussion addressing this topic in this inaugural issue. Precisely because of the importance of the topic, there are some issues in this particular presentation of the article that could be improved for clarity.
Firstly, I feel that the scope of the proposed solution should be better characterized. The major source of interoperability problems is people committing to different conceptualizations that are not completely manifested in a representation (leading to the so-called "False Agreement Problem"). The point is that this problem can also appear in collective modeling if people falsely believe that they share the same conceptualization represented by a shared model (artifact). In other words, if the representation mechanism used to build this shared model is not expressive enough to make the difference between ontological commitments explicit, one can easily run into the false agreement problem even if the model is collectively constructed.
I understand that the page limit does not allow for an in-depth discussion of the details of the proposal. However, in the current presentation it is not easy to understand how the proposed solution differs in practice from existing centralized solutions. Moreover, as previously mentioned, there are many serious semantic interoperability problems that can hardly be solved using only lightweight ontologies and fully automatically derived mappings between them. In the end, the paper gives the impression that the discussed solution focuses more on terminological interoperability than on semantic interoperability.
On page 3, when discussing the Ten Commandments, the author writes [commandment 1]: "Add machine semantics. Start transforming thesauri into machine interpretable (lightweight) ontologies in order to boost their usage on the Semantic Web." In my opinion, the importance given to this commandment is exaggerated. For the sake of interoperability, it is much more important to have fully expressive reference models (perhaps lacking tractable machine-processable semantics) than shallow lightweight ontologies that are computationally interesting.
In the same paragraph, commandments 4 and 5 are prescribed: "4) Reuse the others' work. 5) Maintain interoperability with the past and other ontologies. Otherwise benefits of collaboration are lost." We now have a situation in which many existing domain ontologies lack both expressivity and truthfulness with regard to the underlying domain. Moreover, many of these ontologies tend to be strongly biased by computational and/or application-specific concerns. Building an interoperability model that aims at aggregating the "maximum common denominator" of all envisaged models can cause all the unwanted biases present in these specific ontologies to be imported into the shared model. This bottom-up approach to reference model construction is very common in metamodeling, and circumventing its pitfalls requires very meticulous human intervention with the proper methodological tools (something which is, in a sense, the opposite of complete automation). For this reason, I would hesitate to prescribe these as rules to be generally applied (as the term "commandment" suggests).
Regarding your list: why does 6) keep funding agencies happy? Should this be what drives research? Regarding 7): the question is how to integrate them!
Overall this is a good paper; perhaps you could be a bit more generic in your conclusions and the overall vision.
There are several language problems that should be fixed:
p1: "link their own content"; "as suggested by the CIDOC CRM or FRBR..."; "on the web scale in the Linked Data..."??; "more semantic confusion"; "to study." missing space afterwards;
p2: "hearth" should be "heart"; "vocabularies can be aligned";
p3: "4) Reuse others'..."; "idea is to provide"; "systems through REST"; "raising from"; "is production use"??;
p4: "from tens of memory"??; "only over time";