Review Comment:
Overall evaluation
Select your choice from the options below and write its number below.
== 3 strong accept
== 2 accept
== 1 weak accept
XX 0 borderline paper
== -1 weak reject
== -2 reject
== -3 strong reject
Reviewer's confidence
Select your choice from the options below and write its number below.
== 5 (expert)
== 4 (high)
XX 3 (medium)
== 2 (low)
== 1 (none)
Interest to the Knowledge Engineering and Knowledge Management Community
Select your choice from the options below and write its number below.
== 5 excellent
XX 4 good
== 3 fair
== 2 poor
== 1 very poor
Novelty
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
XX 3 fair
== 2 poor
== 1 very poor
Technical quality
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
XX 3 fair
== 2 poor
== 1 very poor
Evaluation
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
XX 2 poor
== 1 not present
Clarity and presentation
Select your choice from the options below and write its number below.
== 5 excellent
XX 4 good
== 3 fair
== 2 poor
== 1 very poor
Review
This paper describes an approach to guiding users through the process of editing the concepts in an FCA (Formal Concept Analysis) lattice based on some quality metrics. The intent is then to convert the lattice into an ontology in a later stage.
Despite some recurring grammatical issues, the paper is very clearly written, well organized, and engaging. The problems that the paper attempts to address are stated clearly in the introduction, and the description of FCA is intuitive enough that the paper stands alone even for non-experts. For each metric used in the system, both a formal and intuitive definition is given, along with how it relates to the goal of evaluation and refinement of the FCA lattice, an example value based on a simple lattice, and references to any similar ontology-based metrics.
The paper is lacking in two main areas: there is no discussion of related work, and the evaluation is limited. For the related work, I expected a discussion of at least interactive ontology building and/or validation, even if there is no directly related work on FCA lattices. The analysis given in Section 6.2 draws conclusions based on only a single dataset. Most of the results presented in this section are negative, in that the human users did not behave in accordance with the metrics in many cases. The authors speculate on the causes and revised their algorithm to compensate, but the paper then states that these changes produced better results, with no further detail or supporting data.
Some small questions:
What does "detailed and prototypical of their abstractions" mean with respect to the discussion of middle level concepts on page 5?
Why was DM normalized when none of the other metrics were?
What drove the decision to use r-b for the attributes rather than the alternative mentioned of r and r^-1?
A couple of minor grammatical things:
Instead of "the tasks evaluation and refinement" it would be better to say "the evaluation and refinement tasks" or just "the evaluation and refinement"
Instead of "loose" it should be either "loss" or "lose" for most of your purposes, e.g. "which will lead to loose the smallest number" should be "which will lead to the loss of the smallest number" or "which will lose the smallest number"
I think that "The greater DIT value is, the deeper the abstraction level of the concept is and the more concepts are inherited from this concept." should possibly be "the more concepts are inherited *by* this concept", since concepts further from the top concept have more superclasses rather than more subclasses.