Triple Confidence-aware Encoder-Decoder Model for Commonsense Knowledge Graph Completion

Tracking #: 3091-4305

Authors: 
Fu Zhang
Hongzhi Chen
Xiang Li
Jingwei Cheng

Responsible editor: 
Guest Editors Commonsense 2021

Submission type: 
Full Paper
Abstract: 
Commonsense knowledge graphs have recently gained attention, particularly in AI applications, where structured commonsense provides an interpretable and scrutable resource that can be used to avoid misfortunes like the recent Microsoft Tay chatbot PR disaster, which stemmed from reliance on learning-only approaches [1]. However, a large amount of valuable commonsense knowledge exists only implicitly or is missing altogether. Commonsense knowledge graph completion (CKGC) addresses this incompleteness by inferring the missing parts of commonsense triples, e.g., (?, HasPrerequisite, turn computer on) or (get onto web, HasPrerequisite, ?). Some existing methods attempt to learn as much entity semantic information as possible by exploiting the structural and semantic context of entities to improve CKGC performance. However, we found that existing models attend only to the entities and relations of commonsense triples and ignore the important confidence (weight) information attached to each triple. In this paper we introduce commonsense triple confidence into CKGC and propose a confidence-aware encoder-decoder CKGC model. In the encoding stage, we propose a method to incorporate triple confidence into RGCN (relational graph convolutional network), so that the encoder learns more accurate semantic representations by respecting the confidence constraints on triples. Moreover, as is well known, commonsense knowledge graphs are usually sparse, because a large number of entities in commonsense triples have an in-degree of 1. We therefore propose adding a new relation (called a similar edge) between pairs of similar entities to compensate for this sparsity. In the decoding stage, considering that the entities in commonsense triples are sentence-level entities, we propose a joint decoding model that combines InteractE and ConvTransE.
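The core encoding idea described above — scaling each neighbor message in an RGCN-style update by the confidence of the triple it comes from — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the per-relation transform matrices, and the tanh/degree-normalized update rule are all illustrative assumptions:

```python
import numpy as np

def confidence_weighted_aggregate(entity_emb, triples, confidences, rel_mats):
    """Aggregate neighbor messages, scaling each by its triple confidence.

    entity_emb : (n_entities, d) array of current entity embeddings
    triples    : list of (head, relation, tail) index tuples
    confidences: dict mapping a triple to its confidence score in (0, 1]
    rel_mats   : (n_relations, d, d) array of per-relation transforms
    """
    n, d = entity_emb.shape
    out = np.zeros((n, d))
    degree = np.zeros(n)
    for (h, r, t) in triples:
        c = confidences[(h, r, t)]           # triple confidence acts as an edge weight
        out[t] += c * rel_mats[r] @ entity_emb[h]
        degree[t] += 1
    degree[degree == 0] = 1                  # avoid division by zero for isolated nodes
    return np.tanh(out / degree[:, None])    # normalized, non-linear update
```

A low-confidence triple thus contributes proportionally less to its tail entity's representation than a high-confidence one, which is the constraint the encoder is meant to respect. The proposed similar edges would simply appear as additional (head, similar, tail) entries in `triples`.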
Experiments show that our new model achieves better performance than previous competitive models. In particular, incorporating the confidence scores of triples brings significant improvements to CKGC.
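For intuition, the joint decoding idea — blending the scores of two decoders over the same candidate entities — can be sketched as follows. InteractE and ConvTransE are convolutional scorers; the stand-ins below (DistMult- and TransE-style) and the mixing weight `alpha` are purely illustrative assumptions, not the authors' decoders:

```python
import numpy as np

def distmult_score(h, r, E):
    # stand-in for ConvTransE: bilinear (DistMult-style) score against all entities
    return E @ (h * r)

def transe_score(h, r, E):
    # stand-in for InteractE: negative translation distance to all entities
    return -np.linalg.norm(E - (h + r), axis=1)

def joint_score(h, r, E, alpha=0.5):
    """Blend two decoder scores; alpha is a hypothetical mixing weight."""
    return alpha * distmult_score(h, r, E) + (1 - alpha) * transe_score(h, r, E)
```

The combined score ranks candidate tail entities, so a candidate favored by both decoders rises in the ranking even when either decoder alone is uncertain.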
Tags: 
Reviewed

Decision/Status: 
Reject (Two Strikes)

Solicited Reviews:
Review #1
Anonymous submitted on 27/Apr/2022
Suggestion:
Minor Revision
Review Comment:

This paper presents an approach for commonsense knowledge graph
completion using an encoder/decoder model.

This reviewer's major criticism was that the details of the
approach could be strengthened. The authors added a new set of
experiments on ATOMIC. They also made the captions more descriptive:
the descriptions of Figure 2, Table 1, and Table 6 were revised to be
more informative, and an explanation of Table 7 was added.

One of the contributions of the paper is an appropriate importance
weight. The authors revised the language to make their contributions
clearer. They also updated the data in Tables 4 and 8, replacing both
tables to normalize the weights from ConceptNet.

The authors fixed the typos, grammar errors, and spelling errors
pointed out by both reviewers, and most of the technical concerns were
addressed. Although the paper has greatly improved, this reviewer is
still concerned about the contributions over Malaviya et al., which
were fully detailed in the reviewer comments but are not yet reflected
in the revised paper. I believe that with this addition the paper
would be much stronger and ready for publication.

Review #2
Anonymous submitted on 20/May/2022
Suggestion:
Major Revision
Review Comment:

In this work, the authors propose a triple confidence-aware encoder-decoder model for commonsense knowledge graph completion, intended to facilitate the prediction of missing triples. As described in the article, triple confidence is incorporated primarily in the encoding stage, where a confidence-aware relational graph encoder is proposed for large-scale data. The decoding stage differs from the encoding stage: during encoding, the entity embeddings are optimized, yielding low-dimensional entity representations that combine semantic and structural information.
With these parameters, the results derived from implementing the triple confidence-aware encoder-decoder model on the Google Knowledge Graph show that the authors' new model achieved higher performance. The authors state that theirs is the first work to introduce commonsense triple confidence into CKGC; this helps the model integrate recognized neighbor entity information in order to learn a more accurate semantic representation. Their model was thus able to achieve better results.
--------------------------------------------
The following comments are provided as per originality, significance of results and quality of writing.
A. Originality: This paper builds on a culmination of work done over the years, drawing on several sources as well as a well-known brand name like Google. Much has been covered on this topic, especially in information science and neurotechnology, chronicling considerable development and the use of the ConvTransE model to translate the results.
B. Significance of the Results: The significance stems from the authors' use of a decoding structure, which yields higher results. Moreover, the in-depth research and results uncovered several instances of incompleteness in the training samples, which has an impact on the confidence scores.
C. Quality of Writing: The writing is fairly detailed in its explanations and reasoning, using many different sources to back up the paper's arguments. Starting with an explanation of the necessity of the research, and using multiple examples, figures, and charts, the authors explain their work well. In all, the quality of writing is good and should only need minor spellchecks.
------------------------------------------------------
The main comments for improvement are as follows.
--- The Introduction section needs a motivating example to provide a solid foundation for the research problem. It should also include a layout of the paper towards the end.
--- In the Experiments and Results section, a discussion on results should be included as a separate subsection.
--- Some perspective on applications needs to be offered, e.g., as another subsection of the Experiments and Results section, or as an opening paragraph in the Conclusions.
--- Conclusions need to be further elaborated in order to emphasize the authors' contributions, and highlight future work. Include a list of bullets for each of these aspects to enhance reader appeal.
--- The Related Work section needs more discussion; it seems rather terse for a journal article.
Include the following additional references, which will further enhance the article. Please add the full author lists, page numbers, etc.; only et al. is included here.
1) Tandon et al. Commonsense Knowledge for Machine Intelligence, ACM SIGMOD Record, 2017, https://dl.acm.org/doi/abs/10.1145/3186549.3186562
2) Tao et al. A Confidence-Aware Cascade Network for Multi-Scale Stereo Matching of Very-High-Resolution Remote Sensing Images, Remote Sensing 2022, doi: 10.3390/rs14071667
3) Puri et al. Commonsense Based Text Mining on Urban Policy, LREV journal, 2022, https://link.springer.com/article/10.1007/s10579-022-09584-6
4) Xie et al. Detect Incorrect Triples in Knowledge Base Based on Triple Confidence Evaluation, ICIBE 2017 (International Conference on Industrial and Business Engineering), doi: 10.1145/3133811.3133829
5) Onyeka et al. Using commonsense knowledge and text mining for implicit requirements localization, IEEE ICTAI 2020 (Intl. Conf. on Tools with Artificial Intelligence), https://ieeexplore.ieee.org/abstract/document/9288192
6) Antifakos et al. Towards improving trust in context-aware systems by displaying system confidence, MobileHCI Conference 2005, doi: 10.1145/1085777.1085780.
--------------------------------
Making revisions based on the above comments will improve the article, and the revised version can then be reconsidered for publication after major revisions. The authors have done a very good job and are highly encouraged to submit a revision, which will further enhance the article's appeal.

Review #3
Anonymous submitted on 31/May/2022
Suggestion:
Major Revision
Review Comment:

This paper has gone through a second round of review, but the reviewer is not convinced that the presentation is good enough for publication in this journal.
- The motivation, problem description, main contributions, and methodology are not clearly described. Furthermore, the reading is still frequently disrupted by language errors.
- The discussions of the experiments should be expanded substantially in order to understand the strengths and weaknesses of the proposed method.
- It is hard for the reviewer to verify the claimed results, as the authors provide no code repository.