Review Comment:
This submission introduces the notion of categorization power of an ontology, discusses how it can be computed, and performs an empirical evaluation that involves both automated computation in ontology repositories and cognitive experiments with humans.
To compute the focused categorization power (FCP) of an ontology O for a concept FC, one, roughly speaking, counts the number of “interesting” subconcepts of FC that one can build. The count is weighted by how “interesting” the individual subconcepts are. The authors provide some hints on what “interesting” could mean, e.g., that the subconcepts should not be equivalent to FC.
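For concreteness, the kind of weighted count the authors seem to have in mind could be sketched as follows (this is my own illustration with made-up names and weights, not the authors' formula):

```python
# Illustrative sketch (mine, not the authors' formula): FCP as a
# weighted count of "interesting" subconcepts of a focus class FC.

def fcp(focus_class, subconcepts, weight):
    """Sum the weights of candidate subconcepts of the focus class.

    `weight` maps each candidate to a score in [0, 1]; candidates that
    are uninteresting (e.g., equivalent to FC itself) should get 0.
    Here equivalence is crudely approximated by name identity.
    """
    return sum(weight(c) for c in subconcepts if c != focus_class)

# Toy example: atomic subconcepts of "Person" with hand-picked weights.
weights = {"Student": 1.0, "Employee": 1.0, "Person": 0.0, "Human": 0.2}
score = fcp("Person", weights.keys(), lambda c: weights[c])
```

The sketch only makes the counting scheme explicit; the substantive open question, as argued below, is where the weight function should come from.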
Intuitively, FCP measures how much knowledge about the concept FC is contained in the ontology O. This can be used to select, from a collection of ontologies, the one most suitable for a given context (specified via FC).
The submission touches on an interesting problem, but unfortunately I believe the paper and its results are not of sufficient quality and depth for acceptance in a journal.
Let me just point out some of the problems.
1. The paper would be inaccessible to a general audience. This is meant to be a journal publication, but I do not think an ordinary PhD student or young postdoc would be able to learn much from it. Even as a more senior researcher, after reading the introduction I could only form a vague impression of the motivation and, especially, of the results and insights of the paper. In fact, the authors make no serious attempt to give an overview of the paper's scientific contributions: some bits appear in the comparison with the previous conference paper, others in the discussion of the paper's structure. Please provide a clear, substantial, and complete discussion of the contributions of the paper.
2. The presentation of the technical details is too vague. The paper deals with a technical problem related to the automated generation of concept expressions, but the authors do not provide sufficient background to make the discussion precise. In the end, the proposal for computing FCP is made informally, by presenting some design suggestions in Sections 2.3–2.5. But this is the key part: without something concrete regarding the weights, I do not see much value in the proposal, because at the current level of abstraction it is simply trivial. The authors should at least come up with a concrete proposal for weight computation, i.e., a concrete instantiation of what is currently just an idea/framework. I also do not find the paper's examples helpful, because they too are very vague. Perhaps they could be made more precise by taking a concrete pair of ontologies and comparing them according to FCP in a concrete setup.
3. Definitions 1–3 introduce some machinery for constructing and manipulating DL concept expressions. As someone familiar with the DL literature, I simply cannot understand why these definitions deviate so much from the standard notions in that literature (why “restrictions”, why “placeholder variables”, why “concept expression types”, why “substitutions”?). The DL literature offers well-established notions, notation, and nomenclature; I think one can and should employ them directly in this paper.
4. As mentioned, the paper deals with the task of generating DL concept expressions. There is a vast literature on this in the DL area, often under the heading of “non-standard reasoning tasks”, among which computing “most specific concepts” and “least common subsumers” are probably the best known. Another such task is “learning concept expressions”, which has also received significant attention. The challenges faced there are similar to those of this paper (e.g., the in general infinite search space). A notion related to L-categories is that of “downward refinement operators” (e.g., in the works of Lehmann & Hitzler). As motivation, the authors write: “A large part of the use cases of ontologies on the web consists in assigning data objects to certain categories (…). Furthermore, prior to the assignment, the objects are already known to be instances of some (more general) class, to which we will refer as the focus class (FC).” The above-mentioned task of computing most specific concepts is specifically geared towards supporting such an assignment of objects to categories.
It seems that the authors are unaware of these works in the DL literature. I am not saying that the notion of FCP itself has already been considered, but tasks with similar underlying technical challenges have surely been studied, resulting in what I believe are more sophisticated approaches than the one described in the submission.
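To illustrate the kind of machinery this literature provides, here is a drastically simplified analogue of a downward refinement operator over a toy subsumption hierarchy (my own sketch, not the actual operators of Lehmann & Hitzler, which handle full class expressions):

```python
# Illustrative sketch (mine, simplified): a downward refinement operator
# that specializes an atomic concept by stepping to its direct
# subclasses, in the spirit of concept-learning systems.

# Toy subsumption hierarchy: concept name -> its direct subclasses.
HIERARCHY = {
    "Person": ["Student", "Employee"],
    "Student": ["PhDStudent"],
}

def rho(concept):
    """Return the direct downward refinements of an atomic concept."""
    return HIERARCHY.get(concept, [])

# Refining "Person" yields its direct specializations; a search for
# candidate subconcepts of a focus class iterates this operator.
refinements = rho("Person")
```

The point is that traversing a (potentially infinite) space of specializations of a focus class is exactly the problem such operators were designed for.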
5. The current shape of Section 4.3 is simply unacceptable for a journal publication. The algorithms need to be made precise.
6. In Section 5.3 the authors write: “From the point of view of focused categorization, logical conjunctions are actually not very interesting, since the conjunction can be simply achieved by applying multiple categories on the categorized individual.” To me this is a strong indication that the proposal has fundamental problems. It is precisely by using conjunctions that one usually constructs the complex concepts that best describe a given collection of objects. If conjunctions are not interesting, I do not see how the proposed framework can be interesting. This also goes against the basic ideas in the area of learning concept expressions from data.
7. In Section 5.3 the authors write: “In all, the analysis suggested that the L_SE types play a significant role in the family of all anonymous expressions commonly used in OWL [A], and that the design of an FCP formula restricted to this simple CEL is thus meaningful [B].” I do not understand how the authors can conclude [B] from [A]. Intuitively, the shape of the concept expressions used for measuring FCP should be closely related to the kinds of queries that users can pose. Users will not be interested only in simple atomic queries; they can pose more complex ones, e.g., conjunctive queries or full-fledged SPARQL queries. This suggests that it is imperative to also consider CELs whose expressions can be significantly more complex than those commonly found in ontologies. This is related to Point 6 above; I do not see how one can have a meaningful approach without integrating conjunction.