Review Comment:
This manuscript was submitted as 'Tools and Systems Report' and should be reviewed along the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.
The paper presents an infrastructure for probabilistic reasoning with first-order logic, built on the Markov logic engine ROCKIT. It introduces the system and describes the technical details of how logical formulas are incorporated into the Markov model. Several applications of the proposed framework, including ontology matching and knowledge-base verification, are discussed, and the results validate the effectiveness of the presented framework.
Personally, I enjoyed reading this paper. It is well written and well organized. Although the paper has the flavor of an application layered on top of an existing engine (ROCKIT), the authors still give a clear description of how to formalize the problem as an optimization problem with logic-based constraints. In addition, the paper evaluates the presented framework on several applications, such as ontology matching.
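To make the "optimization problem with logic-based constraints" concrete for readers less familiar with Markov logic: in an MLN, weighted first-order rules are grounded over a domain, and MAP inference seeks the possible world maximizing the sum of weights of satisfied ground formulas. The following minimal sketch (not the authors' ROCKIT code; all names, the example rule, and the weight are illustrative) shows this by brute force:

```python
from itertools import product

# A tiny domain; a "world" assigns True/False to each ground atom.
people = ["a", "b"]
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

# One weighted (soft) rule: Smokes(x) => Cancer(x), weight 1.5.
def rule_satisfied(world, x):
    return (not world[("Smokes", x)]) or world[("Cancer", x)]

def score(world, weight=1.5):
    # MLN score of a world: sum of weights of satisfied ground formulas.
    return sum(weight for x in people if rule_satisfied(world, x))

# MAP inference by exhaustive search over all 2^|atoms| worlds.
best = max(
    (dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))),
    key=score,
)
```

ROCKIT and similar engines solve this maximization at scale via integer linear programming rather than enumeration; the sketch only illustrates the objective being optimized.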
Some suggestions:
- First, regarding Markov logic networks, the authors may want to cite Pedro Domingos's work on Markov logic networks; he received the SIGKDD Innovation Award in 2014 for his contributions in this area. It would also help to add more discussion of how the presented Markov logic approach differs from, or improves upon, that work.
- Second, what about the efficiency of the proposed infrastructure? The first application that comes to my mind for the presented framework is semantic search, and for search, efficiency is a key issue. It would be helpful to discuss efficiency in more depth.
- Third, still on the topic of evaluation: I agree that it is useful to use two applications to demonstrate the generality of the presented framework. However, each individual evaluation is a bit thin. My suggestion is to extend the ontology matching evaluation, either by comparing against more methods or by covering other tasks in OAEI 2013. This would make the evaluation more convincing.