Review Comment:
Overall evaluation
Select your choice from the options below and write its number below.
== 3 strong accept
== 2 accept
== 1 weak accept
== 0 borderline paper
== -1 weak reject
== -2 reject
== -3 strong reject
1
Reviewer's confidence
Select your choice from the options below and write its number below.
== 5 (expert)
== 4 (high)
== 3 (medium)
== 2 (low)
== 1 (none)
4
Interest to the Knowledge Engineering and Knowledge Management Community
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
== 2 poor
== 1 very poor
4
Novelty
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
== 2 poor
== 1 very poor
3
Technical quality
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
== 2 poor
== 1 very poor
4
Evaluation
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
== 2 poor
== 1 not present
3
Clarity and presentation
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
== 3 fair
== 2 poor
== 1 very poor
4
Review
Please provide your textual review here.
The paper describes VOWL 2, developed on the basis of a user study carried out with the first version. VOWL aims to support exploration of ontologies using a dedicated visual notation for graph visualisation. Its two implementations are targeted especially at casual users of ontologies. The authors carry out a good review of relevant ontology visualisation tools and, by identifying gaps in support for the kinds of tasks users would carry out, describe how their implementation compares with existing tools.
The paper concludes with a discussion of areas in which the implementation could be improved.
The paper is, overall, well written and easy to follow. However, the discussion/conclusions in section 5 are a bit empty. In fact, they simply continue the previous section on results; I really cannot see what new insight is added here. Maybe clearly stating in the introduction, or at the start of the evaluation section, what key areas were to be addressed in this study, esp. wrt EKAW, and then returning to those in the discussion would help? As it stands, the evaluation seems to simply look at general usability. It could also be that I am missing what was key because the evaluation tasks are not fully described (see also comments below).
**********
I disagree with the second half of this statement: "Many approaches visualize ontologies as graphs, which is a natural way to depict the concepts and relationships in a domain of knowledge." - this is natural ONLY if that actually maps to the structure of these relationships. This IS the case here, but the statement is general. And it appears to be the justification for the visualisation approach selected. (Later in the paper the authors make a similar statement but then specify that it is with reference to this particular type of data structure.)
I'm not completely clear, apart from implementation tools used and the latter being a plug-in, what the differences are between WebVOWL and ProtégéVOWL. If the latter had been available for evaluation would it have been worth comparing them against each other? OR would the results have been the same with either (everything else being equal)?
Also not clear if scaling IS available or not for the circles, or if this is saying that there are multiple options for the user to choose from.
I am conscious of space restrictions; otherwise it would be useful to have a VOWL 1 rendering of at least one of the graphs in Fig. 2 to illustrate clearly where the changes are, especially as there are a few differences between what is shown in Fig. 2a and the MUTO ontology (Fig. 3) in the previous paper.
Evaluation
The impression is given that previous versions of SOVA do display datatype properties. If so, why not use the last version that did, since the missing feature does represent a confounding factor? Unless, of course, the newest version includes improvements with greater influence on the tasks evaluated?
The evaluation is described as within-subjects. However, this then follows: "In combination with the counterbalancing of the visualizations, this resulted in a setting where each task had to be solved for each of the visualizations by some of the participants." - if only "some", then the design must have been between-subjects, not within-subjects.
What were the actual tasks? The number and distribution are given, but not the tasks themselves, apart from a few examples of the TYPES of questions in the first set. It is a bit difficult to interpret the results otherwise. Were the same questions/tasks repeated for each ontology? Were they modified based on size or complexity?
"Overall, VOWL 2 was assessed to be well readable due to the comparably low number of edges and, in particular, edge crossings. This effect was achieved both by avoiding several edges present in other visualizations, such as between equivalent classes, and by applying the splitting rules for node multiplication."
Would this not have an effect on tasks that require a user to identify equivalent classes and/or their properties, for example? What alternatives exist to handle such cases?
"Two more participants asked for a clarification, as they wondered whether there would be multiple copies of the double-ringed equivalent class circles, one for each equivalent class, or only one in all." - do they or do they not - I can't tell either, not from the text.
"Looking at the multitude of approaches ..." - a fair number of approaches are discussed, but "multitude" is a lot more than would fit in a paper.
"were able to solve the vast majority of the tasks ..." - there were only 18 tasks; I wouldn't call that vast...
"most of the graph would align itself automatically according to the users' input. " - align itself to what? What is the users' input here?
"Nonetheless, the force-directed layout led to highly connected nodes that could be easily identified, as intended; in this case, the significant, deeply integrated classes of an ontology would clearly stand out. " - don't really understand what this is saying - what exactly is a "deeply integrated class"?
*** OTHER POINTS ***
"Both implementations support interactive highlighting and display additional ontology-related information on the selected elements on demand to complement the visualization." - what exactly is this information? Maybe highlight it in one of the diagrams?
" ...even though both SOVA and GrOWL include some formal symbols." - would be useful to provide a few examples and compare to how VOWL represents these in a more intuitive manner.
*** Minor grammatical errors (examples only, not all) ***
"various of our enhancements" either "various enhancements" or "various [elements/aspects] of our enhancements" or replace "various" with "some"
"A smaller number of works provide ..." -> "A smaller number of works provideS ..." - the verb should agree with the subject, "a smaller number"
"overview over " -> "overview OF"
"... relevant to the study comprising of classes, properties" - "comprising classes..." (no 'of')
"without getting in touch with their formal representations ..." - you get in touch with a person (or your inner self), not an inanimate object; it should say something like "without making use of formal representations ..."
"come in contact with ..." -> "come into contact with ... three visualizations ..." - but, as above, this phrase is really used with people or, say, a disease; something simpler would work, like "had used/tried any of the ... three visualizations ..."
"the introduction into ontologies." -> introduction to
"of any doubts" -> "of any doubt" - not countable
"Gaph visualizations" -> gRaph
automatical -> automatic