Review Comment:
The authors have improved the paper based on the reviewers’ comments. However, I remain unsatisfied with the related work section. In particular, regarding the ‘replacement test’, there is already existing work comparing different LLMs on the same KE tasks, including knowledge graph completion and reasoning [1], ontology learning [2], relation extraction [3], ontology generation [4], and ontology matching [5], among others.
[1] Li, Qian, et al. "LLM-based multi-level knowledge generation for few-shot knowledge graph completion." Proceedings of the 33rd International Joint Conference on Artificial Intelligence. Vol. 3. 2024.
[2] Mai, Huu Tan, Cuong Xuan Chu, and Heiko Paulheim. "Do LLMs really adapt to domains? an ontology learning perspective." International Semantic Web Conference. Cham: Springer Nature Switzerland, 2024.
[3] Zhang, Bohui, et al. "Using large language models for knowledge engineering (LLMKE): a case study on Wikidata." arXiv preprint arXiv:2309.08491 (2023).
[4] Llugiqi, Majlinda, Fajar J. Ekaputra, and Marta Sabou. "From Experts to LLMs: Evaluating the Quality of Automatically Generated Ontologies." 2nd Workshop on Evaluation of Language Models in Knowledge Engineering (ELMKE), co-located with ESWC-25, to appear. 2025.
[5] Qiang, Zhangcheng, Weiqing Wang, and Kerry Taylor. "Agent-OM: Leveraging LLM agents for ontology matching." arXiv preprint arXiv:2312.00326 (2023).