MantisTable UI: A Web Interface for Comprehensive Semantic Table Interpretation Management

Tracking #: 3748-4962

Authors: 
Marco Cremaschi
Fabio D'Adda
Sara Nocco

Responsible editor: 
Guest Editors KG Construction 2024

Submission type: 
Tool/System Report
Abstract: 
Semantic Table Interpretation (STI) is crucial in various fields, including data analysis, knowledge representation, and information retrieval. Developing an efficient User Interface (UI) for managing and interacting with STI approaches would significantly contribute to the field of semantic data management by providing a practical and accessible tool, filling the current void in state-of-the-art solutions. To address these challenges, we introduce MantisTable UI, a comprehensive web-based interface designed to simplify the management of STI processes. This tool streamlines the user experience with its intuitive design and offers a powerful plugin system, allowing easy extension and customisation of its capabilities. The paper details the architecture and features of MantisTable UI, emphasising its modular design and user-friendly interface, and demonstrating how it can significantly enhance productivity and flexibility for researchers and developers working with tables. To assess the efficiency and usability of the platform, task-based testing and a questionnaire were administered to a group of participants. Results demonstrated that MantisTable UI was positively received for its functionalities and effectiveness in facilitating sophisticated table interpretation tasks, ultimately advancing semantic data management.
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
Anonymous submitted on 02/Mar/2025
Suggestion:
Major Revision
Review Comment:

Summary:
The paper presents a Web UI tool called 'MantisTable UI' for graphically managing semantic table interpretations. The tool builds on 'MantisTable' (ESWC'19), which is offered by default as a plugin, and primarily provides an API for managing how the input data is processed. The tool also offers visual feedback to the user on the processing of their input. General use of a UI makes sense from a user perspective and is well motivated within the paper itself. The paper focuses a lot on the technical aspect of the tool but also provides a user study on the use of the tool.

In general, while the paper is well written, it sometimes reads as if the tool were designed as 'yet another' alternative to other STI tools rather than as an improvement over them. The scientific merit of some of the design decisions is often missing, mainly due to requirements that are never formalised. It is not immediately clear whether the tool is a 'better' alternative in comparison to others, or merely a UI & API wrapper around MantisTable. These uncertainties are further highlighted by the supplemental material, which provides examples but lacks details on how to create plugins.
(NOTE: there is the online documentation https://unimib-datai.github.io/mantistable-ui-docs/docs/plug-in/export, which goes into more detail, but the limitations and the API are not well documented)

While the focus of the paper is the graphical user interface tool, it sometimes reads as a technical overview of all the visual features and the clean UI it offers rather than of its functional features. From a user perspective, the UI tool looks interesting, and I understand why the authors want to highlight and showcase it in the paper. From a researcher perspective, I want to see more elaborate design choices and more reasoning about the user evaluations. And finally, from a developer point of view, the tool seems to help in the rapid prototyping of new STI tools by saving developers from spending time on the UI aspect. However, the strengths and limitations of the modular plugin system are not clear, despite this being one of the main selling points of MantisTable UI.

My general recommendation is to have a major revision where:
1. requirements are formalised,
2. the modularity is further detailed (both in the paper and in the online repos), and
3. the evaluation is compared to other SOTA.
The tool itself stands out from the existing state of the art, but the parts that stand out are not detailed. More information and feedback on each section are provided below:

Section introduction:
---------------------
The introduction provides an overview of the sections. The three main features/functionalities of MantisTable UI are highlighted. The improvements or changes compared to MantisTable V, MantisTable SE and MantisTable are briefly mentioned but not detailed. The introduction is clear and states the problem.

Section state of the art:
-------------------------
Major comments:
- The requirements are never formalised. For example, "table manipulation" is one of the non-satisfied functionalities. However, this functionality is never detailed, and the reader can only infer its scope from its use in OpenRefine, Trifacta and other state of the art.
- It is not immediately clear how 'MantisTable UI' differs from 'MantisTable'. Especially since MantisTable is still offered as a plugin, it is not clear that MantisTable UI focuses on the modularity and visualisation. Quoted from the paper: "Note that MantisTable is an STI approach provided with a GUI and, therefore, it differs from MantisTable UI. The latter is indeed a new proposal stemming from the experience gained over the past years, and that also derives from previous MantisTable developments". From this phrase alone it is not clear how it differs.

Minor comments:
- In the example that is used (Fig. 3), the georss:point is not valid (https://docs.ogc.org/cs/17-002r1/17-002r1.html#22). It would make more sense if the example also showed the transformation of the data to a correct value, and used this to compare with the SOTA. Related work such as OpenRefine supports this data transformation (as does MantisTable: https://unimib-datai.github.io/mantistable-ui-docs/docs/plug-in/transformat); a sketch of such a normalisation follows this list.
- Some of the SOTA discussion focuses a lot on the technical limitations of existing tools. From the modularity point of view this makes sense, but since the modularity of MantisTable UI is only introduced later, the focus of this comparison is not clear.
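To make the first point concrete: a valid GeoRSS Simple point is a pair of decimal degrees in "lat lon" order, separated by a space. The snippet below is purely an illustration of the kind of normalisation a transformation plugin could perform; the function name and the DMS input format are assumptions, not MantisTable UI's actual plugin API.

    import re

    def dms_to_georss_point(dms: str) -> str:
        """Convert a DMS pair such as "45 deg 59' 13'' N, 7 deg 48' 10'' E"
        into a valid GeoRSS Simple point: decimal degrees, "lat lon"."""
        pattern = re.compile(r"(\d+)\s*deg\s*(\d+)'\s*(\d+)''\s*([NSEW])")
        values = []
        for deg, minute, sec, hemi in pattern.findall(dms):
            decimal = int(deg) + int(minute) / 60 + int(sec) / 3600
            if hemi in "SW":  # southern and western hemispheres are negative
                decimal = -decimal
            values.append(round(decimal, 4))
        lat, lon = values  # assumes latitude is given first
        return f"{lat} {lon}"

    # dms_to_georss_point("45 deg 59' 13'' N, 7 deg 48' 10'' E")
    # -> "45.9869 7.8028"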

Suggested improvement:
1. Table 1 should include MantisTable UI to provide a quick visual overview of how it compares to the SOTA tools.
2. The requirements should be formally introduced instead of being combined from all features in the SOTA. This will also help in understanding the focus on the limitations of existing SOTA.
3. The differences between MantisTable UI and MantisTable should either be detailed earlier, or not detailed at all until the 'implementation' section, to avoid confusion.
4. Fig. 3 can still be used to illustrate the annotation process, but it should be a complete reflection of annotation and transformation.

Section approach and implementation:
------------------------------------
Major comments:
- The requirements of MantisTable UI are not clear. The section reads as the 'final outcome' of the requirements rather than as requirements that are/should have been defined at the start of the tool's design. The authors clarify that the requirements are derived from the state of the art, but these are never formally introduced.
- Since modularity is one of the main features of the MantisTable UI tool, more details should be given on the possible implementations. It is a very interesting aspect of the tool, but not highlighted enough.

Minor comments:
- Table 1 in the state of the art: I would add MantisTable UI to this table to showcase what it eventually implements (maybe in another colour). In Table 2 the authors outline which requirements are satisfied, but it is not visible at a glance how this compares to the other state of the art.
- Table manipulation
- Some parts such as the 'landing page' read as a non-functional overview of the tool rather than an overview of its functional features.

Suggested improvement:
- MantisTable UI shines in its modularity. However, too much focus is given to less important aspects such as 'the landing page' and the individual pages of the UI. Highlight the strengths of the tool so that it attracts not only users of your UI, but also developers and researchers.
- Make it clear to the reader how the modularity of MantisTable UI helps to alleviate developer tasks.
- Related to the modularity: can the modularity help with the missing requirement of 'table manipulation'? If yes, this should definitely be mentioned.

Section efficiency and usability testing:
-----------------------------------------
Users were given six tasks. 24 participants (12 expert, 12 non-expert) carried out these tasks and rated them on a 1-5 SUS scale. The conclusion was that most tasks were executed successfully.
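For reference, a SUS score is derived from the ten 1-5 Likert items using Brooke's standard scoring rule; the snippet below is a generic sketch of that computation, not the authors' evaluation code.

    def sus_score(responses: list[int]) -> float:
        """Compute a 0-100 System Usability Scale score from ten 1-5
        Likert responses: odd-numbered items contribute (r - 1),
        even-numbered items contribute (5 - r), and the sum is
        multiplied by 2.5 (Brooke's standard scoring)."""
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1
                    for i, r in enumerate(responses))
        return total * 2.5

    # sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) -> 100.0

A score above the conventional benchmark of 68 is read as above-average usability, which is relevant to the threshold discussion in Review #2.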

Major comments:
- How does the usability testing stack up against other tools with similar fulfilled requirements? In other words, is this UI better than others, or is it simply usable?
- Task 6 is very platform-specific. I would only consider tasks 1-5 to be statistically useful when comparing with other state of the art.
- It is not clear how much information was provided to users: did they have access to the online documentation, and what was the experimental setup?
- The choice of participants is strange. Expert users make sense, but the non-expert users (i.e., with no data integration experience) are also users that would (potentially) never make use of the tool. Using them in the evaluation helps to assess the learnability of the tasks but does not provide insights into the usability of the tool.

Minor comments:
- Are the non-aggregated pseudonymized results available somewhere?

Suggested improvement:
- Make it clear how the usability of this UI differs from existing SOTA, instead of a single-point-of-view evaluation that assesses the usability of the current tool without comparisons.

Supplemental material (GitHub repos & documentation)
----------------------------------------------------
Supplemental material includes the tool itself, its documentation, and the GitHub repo of the tool, as well as repos with example implementations (plugins).

Comments:
- README documentation in the main repo could be more elaborate. Documentation for users is complete, but developers need more information on the GitHub repo itself. Provide more information on how to contribute new plugins and how these plugins can be discovered by users of the tool.
- Question: The project is licensed as AGPL, but with Apache-2.0 dependencies (drizzle-orm). Since the ORM is woven into the code, this is a potential license conflict for AGPL (?).
- I like that there are examples of export plugins (e.g., https://github.com/unimib-datAI/mtu-plugin-export/tree/main), but the documentation is very limited.

General comments (layout/typos):
--------------------------------
- Fig 1. (pg3. line 6): The first and last coordinates are padded with a 0, but the 7 deg 48' 10'' E is not padded with a zero.
NOTE: if it is supported, this typo could perhaps be used to let a plugin fix the incorrectly padded data?
- Fig 3. (pg3. line 17): similar issue with "Hohtalli" and its georss:point
- Fig 7.: I do not see much use of the landing page as a figure. I would rather focus on having a larger picture showcasing the imported tables.
- In general, some important figures such as Fig 11. are small compared to the (over)use of UI screens that are not very relevant to the core of the tool.
- There is an occasional mix of UK English and US English in the text. Examples of US EN:
-- Pg. 2 line 17: "lexicalization"
-- Pg. 6 line 12: "neighboring"
-- Pg. 7 line 40: "customize"

Review #2
Anonymous submitted on 20/Mar/2025
Suggestion:
Major Revision
Review Comment:

The manuscript presents MantisTable UI, a novel web-based interface designed to streamline Semantic Table Interpretation (STI) processes through an intuitive and modular graphical user interface (GUI). The tool addresses existing gaps in the field by enabling the efficient annotation and interpretation of tabular data, ultimately supporting Knowledge Graph (KG) construction and completion. Building on the authors' previous work on similar tools, MantisTable UI introduces a flexible plugin system, facilitating extensibility and customization to meet diverse user needs. The system architecture, practical use cases, and evaluation results are thoroughly presented, highlighting the usability and efficiency of the proposed approach.

**Quality**
MantisTable UI is a significant advancement in the field of semantic table interpretation, offering robust functionality and flexibility. However, improvements are needed in the detailed comparison with previous versions, performance evaluation, and addressing usability concerns.

*Strengths*
Comprehensive and Modular Design:
The manuscript presents a well-organized system architecture leveraging modern technologies such as the T3 Stack (Next.js, TypeScript, Tailwind CSS) and FastAPI for backend operations. The modularity and extensibility through plugins are well explained, and the use of Docker containers for plugin isolation is technically sound and practical.
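To illustrate the isolation pattern this paragraph refers to: a plugin host can launch each plugin in its own container and talk to it only over HTTP. The sketch below uses the Docker SDK for Python; the image name, port, and resource limits are hypothetical, and MantisTable UI's real mechanism may differ.

    import docker  # Docker SDK for Python (pip install docker)

    def run_plugin(image: str, host_port: int):
        """Start a plugin in an isolated container, exposing only its
        HTTP API to the host. All parameters here are illustrative."""
        client = docker.from_env()
        return client.containers.run(
            image,
            detach=True,                    # run in the background
            ports={"8000/tcp": host_port},  # map the plugin's API port
            network_mode="bridge",          # keep it off the host network
            mem_limit="512m",               # bound resource usage
        )

    # run_plugin("example/mtu-export-plugin:latest", 9001)  # hypothetical image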

Innovative Plugin System:
The plugin-based approach allows users to extend the platform's capabilities with minimal disruption, supporting export, transformation, and data visualization plugins. This flexibility makes MantisTable UI a versatile tool for a wide range of STI tasks.
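As a concrete (and purely hypothetical) picture of what an export plugin could look like given the FastAPI backend: a small HTTP service that receives annotated table data and returns a serialisation. The endpoint path, payload shape, and Turtle output below are assumptions for illustration, not the documented MantisTable UI plugin contract.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class AnnotatedTable(BaseModel):
        # Hypothetical payload: cell values plus the entity IRI
        # assigned to each cell by the STI process (None if unannotated).
        rows: list[list[str]]
        annotations: list[list[str | None]]

    @app.post("/export")
    def export_turtle(table: AnnotatedTable) -> dict:
        """Serialise cell annotations as simple Turtle triples
        (prefix declarations omitted; the predicate is illustrative)."""
        triples = [
            f'<{iri}> rdfs:label "{value}" .'
            for row, anns in zip(table.rows, table.annotations)
            for value, iri in zip(row, anns)
            if iri
        ]
        return {"format": "text/turtle", "data": "\n".join(triples)}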

Thorough Evaluation and Usability Testing:
The authors conducted a comprehensive evaluation, employing quantitative measures (Task Success Rate) and qualitative feedback (System Usability Scale (SUS) and Think-aloud method). The inclusion of both expert and non-expert users provides depth to the usability analysis.

Open-Source Availability:
The tool is open source and publicly available, fostering collaboration and community-driven enhancements.

*Areas of improvement*
Comparison and Novelty Emphasis:
While the SOTA is well-documented, the manuscript does not clearly compare MantisTable UI against the listed tools in terms of functional differences and improvements. Table 1 lacks an explanation of the criteria or "lenses" used for comparison. Providing a more structured comparison that highlights the unique features and improvements would strengthen the manuscript. Additionally, explicitly addressing how the proposed system outperforms or complements existing solutions is necessary.

Usability and Evaluation Transparency:
The usability evaluation section lacks details about the experimental setup and participant preparation. Key questions to address include:
- Did the participants receive training before the experiment?
- Were they already familiar with the MantisTable UI interface?
- Did they work individually or collaboratively?
- How were participants classified as expert or non-expert? Was this distinction predefined, or did it emerge during the sample analysis?
Furthermore, the reference value for the SUS score is inaccurately cited as 70. According to the literature, a SUS score of 68 is generally considered the threshold for good usability. The manuscript should accurately report this value and discuss its implications.

Performance and Quantitative Assessment:
While the usability assessment is well-documented, the manuscript lacks a quantitative evaluation of system performance, such as processing speed, resource consumption, and scalability when handling large datasets. Including performance metrics would reinforce the system’s practicality and robustness.

Addressing Limitations:
The manuscript does not discuss the limitations of MantisTable UI. Addressing current challenges, such as the limited support for API-based table import or the need for more sophisticated annotation suggestion mechanisms, would enhance transparency and set realistic expectations.

**Clarity**
The manuscript is generally well-structured and clearly outlines the motivation, architecture, and capabilities of MantisTable UI. However, readability could be improved by addressing some minor language issues and inconsistencies.

*Strengths*
Clear State of the Art (SOTA) Analysis:
The manuscript provides a well-structured overview of existing tools and clearly articulates the gap addressed by MantisTable UI. The comparison with other STI tools is detailed and comprehensive, highlighting the strengths of the proposed system.

Illustrative Visual Aids:
Screenshots and visual aids are provided to demonstrate the interface and functionalities, aiding comprehension.

*Areas for Improvement*
Clarification of Novelty and Positioning (State of the Art):
The manuscript states that MantisTable UI is a "new" tool (page 2, line 9), but it lacks a clear explanation of how it differs from the authors’ previous versions or similar tools developed by the same team. To enhance clarity, the authors should explicitly state how MantisTable UI advances beyond prior contributions like MantisTable V and MantisTable SE.
Moreover, the introduction states that no other tools are equipped with GUIs that support table annotation, yet the SOTA review lists several GUI-based tools. The manuscript should reconcile this inconsistency by emphasizing the specific novelties and features that make MantisTable UI stand out.

Terminological Clarification (STI vs. Annotation):
The manuscript extensively discusses Semantic Table Interpretation but lacks clarity in distinguishing between "interpretation" and "annotation". Providing a clear and concise definition of both terms early in the manuscript would help avoid confusion, particularly for readers less familiar with the domain.

Proofreading:
The manuscript contains several repetitions and minor language issues that should be addressed:
- The KG acronym is defined multiple times.
- The acquisition of Trifacta by Alteryx is mentioned twice in consecutive paragraphs, which appears redundant.
- The term "usable" is used where "functional" would be more appropriate (SOTA, page 3, line 40).
- Figures 5 to 9 (showcasing the GUI) suffer from low readability and could be enhanced to improve clarity and visualization quality.

**Assessment of the Data File (Long-term Stable URL for Resources)**
The authors have chosen GitHub as the primary repository for MantisTable UI and its plugin system. However, the provided data file is not well-organized and lacks a comprehensive README file. The existing README only contains license information without adequate instructions on how to use the data or replicate experiments. Including a detailed README that covers setup instructions, usage guidelines, and example outputs would greatly enhance usability.

Review #3
By Oscar Corcho submitted on 22/Mar/2025
Suggestion:
Reject
Review Comment:

Note: As this manuscript has been submitted as 'Tools and Systems Report' the review also contains the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

This paper describes the main characteristics and design decisions of the MantisTable UI Web application, which provides a way to interact with and enact existing systems for Semantic Table Interpretation (more specifically, the one developed by the authors in the past, which has participated in SemTab challenges).

It is clear from the state of the art provided by the authors that there may be a relevant need for this type of easier-to-use tool for practitioners who are willing to generate semantic annotations from tabular data and want to use an existing STI approach. As an OpenRefine user (and lecturer on how it works), I find these types of tools extremely relevant, especially for users that are less tech-savvy or semantic-technology-savvy, and hence a good contribution to the state of the art.

The description of the tool goes into some potentially unnecessary detail on implementation characteristics (specific frameworks used for frontends and backends, programming languages, configuration options, etc.), which may be more appropriate for a technical report than for an academic paper. I would suggest reviewing this and reducing such detail in future versions of the paper.

I have some concerns on the paper itself, and even on some design decisions:
The set of requirements. It is not clear where they came from. They seem to have been obtained by checking existing functionalities of other systems, or by considering the functionalities of STI systems, but not systematically through a proper requirements elicitation process.
The evaluation. It is designed according to typical usability tests, which is correct. However, I have some concerns about this evaluation, starting with the choice of tasks that are evaluated. I am missing variety in the tasks (for instance, considering tables that are hard to annotate, or where the annotation results are not good, and how users react to them). I am missing tasks on export to RDF, or export of mappings (if this is done by the underlying STI), so that the results can be used outside the tool. I am also missing cross-comparisons with other existing tools and some of their functionalities (e.g., OpenRefine).
I am also missing reflections from testers on functionalities that they would have liked to see, as well as the results of the think-aloud processes, since these are not reported in the evaluation section.
You claim that many tools are not available any more. I understand that, since they were in some cases proofs of concept or early prototypes. But have you tried asking the authors of those tools to help you set them up, install them, and use them?

Considering these concerns, and given that the scientific contribution is low (expected in a systems paper, of course) while there are no other strong aspects, whether in the evaluation, in a more systematic description of the challenges encountered when constructing the tool, or in reports on usage of the tool (by users or by plugin developers), I think that the paper is not yet ready to be published.

Minor comments:
I do not agree with paragraph 1 of the introduction where you say that tabular data is not machine readable, since a CSV or similar is indeed machine readable. I think that you wanted to say that it is not semantically interpretable.
I do not agree with the last sentence of the abstract. I think that it is unnecessary.
In section 2, when you describe T, I have some small comments: “why each column instead of some columns?”. I understand that sometimes some columns will not be annotated. Entity columns are undefined when first used: to make the paper more complete, a definition should probably be provided.
There is an excessive number of self-references (especially the first ones).

Finally, according to the guidelines: Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess

(A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data
- There is a very brief README that links to a website, which is not necessarily permanently archived.

(B) whether the provided resources appear to be complete for replication of experiments, and if not, why,
- The repository's description of how to replicate some of the experiments is unclear.

(C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability
- Yes. It would be good to also deposit the version described here on Zenodo.

(D) whether the provided data artifacts are complete.
- They seem to be complete, with Docker files. I have not tested everything, though.