Matcha-DL: a tool for supervised ontology alignment

Tracking #: 3648-4862

Authors: 
Pedro Giesteira Cotovio
Lucas Ferraz
Daniel Faria
Laura Balbi
Marta Contreiras Silva
Catia Pesquita

Responsible editor: 
Guest Editors OM-ML 2024

Submission type: 
Tool/System Report
Abstract: 
Ontology Matching is a critical task to establish semantic interoperability given the proliferation of ontologies and knowledge graphs with overlapping domains. While traditional Ontology Matching relied on heuristics and rule-based approaches to find corresponding entities between knowledge resources, recent advances in machine-learning have prompted the community to contemplate matching approaches that exploit machine-learning algorithms. We present Matcha-DL, an extension of the matching system Matcha to tackle semi-supervised tasks using machine-learning algorithms. Matcha builds upon the algorithms of the established system AgreementMakerLight with a novel broader core architecture designed to tackle long-standing challenges such as complex and holistic ontology matching. Matcha-DL uses a linear neural network that learns to rank candidate mappings proposed by Matcha by using a partial reference alignment as a training set, and using the confidence scores produced by Matcha's matching algorithms as features. Matcha-DL was evaluated in the 2022 and 2023 editions of the Bio-ML track of the Ontology Alignment Evaluation Initiative, achieving the highest F1 score in 4 of the 5 semi-supervised tasks. Furthermore, it was shown to benefit more than other competitors from the contextual information of ontologies.
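As an illustration of the ranking approach described in the abstract, a minimal sketch (a plain PyTorch linear layer trained with a binary cross-entropy objective on synthetic data; this is not Matcha-DL's actual code or API) could look like:

    import torch
    import torch.nn as nn

    # Illustrative data: one row per candidate mapping, one column per
    # matcher confidence score (6 matchers here, chosen arbitrarily).
    features = torch.rand(1000, 6)
    # 1 if the candidate is in the partial reference alignment, else 0.
    labels = (torch.rand(1000) > 0.9).float()

    model = nn.Linear(6, 1)  # linear scoring layer
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(features).squeeze(-1), labels)
        loss.backward()
        optimizer.step()

    # Candidates are then ranked by the learned score; for each source
    # entity, the top-ranked target(s) above a threshold would be kept.
    ranking = torch.argsort(model(features).squeeze(-1), descending=True)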
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Jérôme Euzenat submitted on 29/Apr/2024
Suggestion:
Major Revision
Review Comment:

The paper presents the Matcha-DL ontology matching system, which uses a neural network to learn how to rank correspondences between ontologies from a partial alignment.

I am favourable to the publication of this paper. However, there are some points mentioned below that deserve, and in some cases require, to be addressed before that.

The system performs well in the tasks on which it is evaluated alongside other systems, so it is natural to publish its description.
The use of some machine learning techniques in ontology matching is relatively novel, especially as concerns systems based on language models, such as large language models, or on embeddings of such otherwise generated structures. The impact of such a system can be large, and all the larger the more widely and openly available it is.

The text in general is clear and readable (some suggestions to improve it are given below).
However, the description itself is not very deep. More information should be provided to give the reader the opportunity to understand how the system works.
More generally, the paper could be improved by making it more precise and self-contained (suggestions below).
I would also have liked to see a discussion of the limitations of the system or of the evaluation setting. In particular, it seems very reliant on lexical comparison. Could Bio-ML be extended in order to match ontologies with different lexicons? Is Matcha-DL really a general-purpose system if it assumes such lexical similarity? Is there evidence that the matcher selection is sensitive to this, i.e., would it give lower weight to lexicon-based matchers if the lexicons are different?

Suggested improvements follow in the order of the paper:
- page 2, lines 1-3: it is unclear what this sentence conveys for someone who does not already know: explain that, for learning, examples may be necessary, hence it is required to have specific tasks for that.
- line 28: 'unsupervised': this term is usually reserved for machine learning operation; since this is not what the majority of systems are, it would be better not to use it in this context, or to clarify that this would correspond to an unsupervised setting for a learning system.
- line 19-20: explain what is new in Bio-ML that allows it to evaluate learning systems.
(all the above comments are about the same thing: the paper takes for granted that the reader knows).

- page 3, line 48: does the search space escalate exponentially? Actually, given that correspondences are between pairs of entities with a limited number of relations, when one entity is added, this increases the search space only by the size of the other ontology. This is more than geometric growth, because that size is constantly growing, but it is subexponential (see the note after this list of comments).
- page 4: a picture illustrating the process involved in Matcha-DL would help the reader's understanding.
- page 4, line 6: 'optimal combination': it would be worth describing the optimisation criterion.
- page 4, line 14: 'describe' is a bit strange; 'extract' or 'generate' seems better.
- page 4: it would be worth describing what is called the 'lexicon' of an ontology in the context of Table 1.
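To make the counting argument in the page-3 comment concrete, a rough estimate (assuming, for illustration, only pairwise correspondences between named entities and a fixed set of r relation types) is:

\[
|\mathcal{C}| = r \cdot |O_s| \cdot |O_t|,
\qquad
|\mathcal{C}'| - |\mathcal{C}| = r \cdot |O_t| \quad \text{when one entity is added to } O_s,
\]

i.e., the number of candidate correspondences grows quadratically in the ontology sizes for fixed r, not exponentially.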

- page 5: it does not seem worth describing the formulas for the classical measures again; rather, give references instead of repeating them.
- page 5, line 31: it is unclear what 'null reference mappings' are. It seems to refer to the 'semi-supervised setting', but this does not seem to have been described before.

- page 5, line 6: larger then -> larger than
- page 5, line 46: 'friendly': the term does not feel very accurate. 'ML-oriented' would definitely be better (since non-learning systems would not use the partial reference, these tasks are indeed ML-oriented).

- pages 5-6: it would be better to describe the data sets in just a single enumeration, instead of first listing them and then describing them.

- page 7, line 42: no hyperparameter tuning, as is customary?

Concerning the URL provided as a 'long-term stable link', this is a GitHub link, so fully appropriate for code. However, as far as I can judge (commit 8e365b6):
- the README.md file is reduced to its simplest expression and does not provide any instructions about how to use the system.
- the source code is not available in the main branch... this is a disturbing observation, as the repository seems empty (the code, at least some of it, is in the dev branch).
- the repository contains automatically generated files, which is, in principle, bad practice: the code should work and thus be able to regenerate them.

These issues should be definitely fixed before publishing a 'Tools and Systems Report'.

Review #2
Anonymous submitted on 03/Jun/2024
Suggestion:
Major Revision
Review Comment:

This paper introduces a new ontology matching system, called Matcha-DL. The proposed system has been built on top of a well-known infrastructure (other matching systems, such as AML and Matcha) to address semi-supervised matching tasks using machine learning techniques. The proposed system has participated in OAEI 2022 and 2023, in particular in the Bio-ML track, and it achieved good results compared to the other matching systems participating in the same track. However, the paper is NOT ready to be accepted in its current version, as a lot of technical details are missing, as well as an in-depth discussion of the results.

In general, the paper is simple and easy to follow, and it has four main sections besides the abstract and conclusion.

- Introduction.
-- In general, this section needs to be re-organized and structured by adding a clear motivation for developing a new matching tool: what are the main challenges faced during the development of the new tool, and what are the main contributions of the paper?
-- As this is not a new topic, having been described 20 years ago in "Ontology Matching: A Machine Learning Approach" by AnHai Doan et al., it would be good to focus more on the differences between the current exploitation of machine learning in OM and this previous work.
-- It starts with a strange definition of ontologies, "Ontologies are digital resources that formalize domain knowledge in a manner that is both machine-readable and human interpretable..": what is meant by digital resources?

- Related work: I like this part, but some minor issues need to be handled, such as: what are Os and Ot? More explanation of global matching and local ranking is needed.

- Matcha-DL: This is the weakest part of the paper, as it misses a lot of technical details about the matching tool. A figure illustrating the tool's components would be useful, followed by an in-depth description of each component.

- Evaluation
-- This section starts with the evaluation criteria; it would be good to follow that with some information about the ontologies used in the evaluation. I know they come from Bio-ML, but it would be good to make the paper self-contained.
-- An in-depth discussion of the results is also missing. To give one example: in Table 2, there is a difference between the evaluation criteria (P, R, F1) in 2022 and 2023, without an explanation of why this difference occurs.

Review #3
Anonymous submitted on 30/Jun/2024
Suggestion:
Minor Revision
Review Comment:

This paper presents the results of the Matcha-DL system on the Bio-ML track of OAEI.
The system participated in OAEI 2022 and 2023 in the mentioned track and scored best in 4 out of 5 test cases.
The paper is well-written and easy to follow.
Very detailed system descriptions are not included, but the corresponding papers are referenced.
It would be good to at least mention the features that are finally used, with a short description (like in Table 1).

The evaluation is very good because in OAEI all systems are compared using the exact same evaluation metric. Thus, one can be sure that the results are comparable.

A few things could be still improved and more precisely described.
In Section 4.1.1 (Hyper-parameters):
- Is the Feature Cardinality really a hyper-parameter? Is it not fixed by the system's pipeline?
- Can the feature and filter thresholds not be computed from the training alignment (including positives and negatives)? A possible approach is sketched below.
With that, the number of hyper-parameters would be reduced.
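As an illustration of this suggestion, a minimal sketch of deriving the filter threshold from the training alignment (using NumPy and scikit-learn on synthetic data; the variable names are hypothetical and this is not Matcha-DL's actual interface) could be:

    import numpy as np
    from sklearn.metrics import f1_score

    # Illustrative scores and labels for the training candidates:
    # label 1 = the candidate is in the partial reference alignment.
    scores = np.random.rand(500)
    labels = (np.random.rand(500) > 0.8).astype(int)

    # Sweep every observed score as a candidate threshold and keep
    # the one that maximises F1 on the training alignment.
    thresholds = np.unique(scores)
    best = max(thresholds, key=lambda t: f1_score(labels, (scores >= t).astype(int)))
    print("selected filter threshold:", best)

Whether this removes the hyper-parameter entirely depends on how well the training alignment reflects the full matching task, but it would at least tie the threshold to the available supervision.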

It would also be interesting to see which of the features actually play a role.
This could be achieved by using any ML model that can provide feature importances.
For a closer look, a decision tree would also be interesting (depending on how good its predictions are); a possible sketch is given below.
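As an illustration, a minimal sketch of inspecting feature importances with a decision tree (scikit-learn on synthetic data; the feature names are hypothetical, standing in for Matcha's matcher confidence scores) could be:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Illustrative data: 6 matcher confidence scores per candidate mapping,
    # label 1 = the candidate is in the (partial) reference alignment.
    X = np.random.rand(1000, 6)
    y = (np.random.rand(1000) > 0.9).astype(int)

    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    for name, importance in zip([f"matcher_{i}" for i in range(6)], tree.feature_importances_):
        print(name, round(importance, 3))

Any model exposing feature importances (or a simple permutation-importance analysis) would serve the same purpose.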

I think those updates are rather small, and thus, a minor revision of the manuscript should be fine.

Spelling corrections:
- in complex domains, such as the biomedical [domain].
- Section 2.1: that creates the final alignment by combining this [these/those] preliminary matches
- Section 2.1: either by projecting then [them]