Review Comment:
This paper aims to provide better ranking of ontologies given a query concept. The authors present methods to calculate the centrality of concepts and the authority of ontologies, and incorporate these two features, in addition to traditional textual similarity, into a ranking algorithm called DWRank. A ground-truth dataset is used to train DWRank and tune the feature weights. The authors evaluate DWRank by comparing it against existing benchmark approaches; they also compare the performance of DWRank with fixed weights versus learned weights. The evaluation results show that DWRank outperforms existing approaches and produces a more meaningful ranking of the important concepts. The paper is well organized and gives a detailed, step-by-step description of the method and the experimental design. The evaluation is concrete and comprehensive. I would recommend this paper for publication if the authors address the minor issues listed below:
Page 2, paragraph 2: "In this paper we propose a new ontology concept retrieval framework..." In this paragraph, the authors give an overview of the framework, which is fine. However, its content largely overlaps with Section 2.2. I would therefore suggest shortening this paragraph to a brief introduction.
Page 8, Section 4.1: a query string Q = {q1, q2, q3, ...}. I assume each q represents a single word of the input query. What if the query concept consists of multiple words? For example, a user looking for an ontology covering the concept "natural disaster": would this search term be split into "natural" and "disaster"? What would be the consequences for matching, given that ontologies typically concatenate two words into a single concept name such as "NaturalDisaster"?
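To make the potential mismatch concrete, here is a minimal sketch (in Python; the splitting scheme is my assumption, since the paper does not specify its tokenizer) of the CamelCase normalization the matching step would seem to need:

```python
import re

def split_camel_case(term: str) -> list[str]:
    # Split a concatenated concept name such as "NaturalDisaster" into its
    # constituent words, so that multi-word queries can still match it.
    return re.findall(r"[A-Z][a-z]*|[a-z]+|\d+", term)

# Without such a step, the query tokens {"natural", "disaster"} would not
# match the single concept label "NaturalDisaster" under exact term matching.
print(split_camel_case("NaturalDisaster"))  # ['Natural', 'Disaster']
```

Clarifying whether, and at which stage, such a normalization is applied would resolve this question.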
Figure 1: it is difficult to visually differentiate the rectangles (representing the offline learning components) from the index boxes with rounded corners. Perhaps the authors could use ellipses to represent the indexes.
The index of Equation 3 (immediately below Equation 2) is missing, and the following equation numbers are consequently incorrect.
Algorithm 1: line numbers are missing.
The very short sentence immediately below the Section 5.2 heading is unnecessary and can be removed.
Page 10, Section 5.2.1, Experiment 1, point 2: there is a typo in "a LTR algorithm, a ranking model is learnt *form* the hub score, the authority score and the text relevancy ..."; "form" should be "from".
Table 4: the value "51" is duplicated for the SSN ontology.
Page 12, right column, top: "This can be seen in Table 7 that presents the top 5 concepts of the FOAF ontology ranked by HubScore and CARRank." "Table 7" should be "Table 6". All cross-references, including those to figures, equations, and tables, should be double-checked.
Page 15, future work paragraph: the authors so far mention only the efficiency issue; some further discussion would be welcome. For example, would it be possible to incorporate an online learning process into the ranking model? Such a model could continuously improve its performance based on user feedback, instead of relying solely on a pre-trained offline model.
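As an illustration only (the features follow the paper, but the pointwise logistic update rule and all parameter values below are my assumptions, not part of the authors' method), such feedback-driven weight updates could look like:

```python
import numpy as np

# Weights over the three DWRank features: hub score, authority score,
# and text relevancy. Initialized from the offline-trained model.
weights = np.array([0.4, 0.3, 0.3])
learning_rate = 0.01

def update_from_feedback(features: np.ndarray, clicked: bool) -> None:
    """Pointwise logistic update: nudge the weights toward results the
    user clicks and away from results the user skips."""
    global weights
    score = 1.0 / (1.0 + np.exp(-weights @ features))
    gradient = (float(clicked) - score) * features
    weights += learning_rate * gradient

# Example: the user clicks a result with the normalized feature values below.
update_from_feedback(np.array([0.8, 0.6, 0.9]), clicked=True)
```

Even a brief discussion of the feasibility of such an extension would strengthen the future work section.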