Using Knowledge Anchors to Facilitate User Exploration of Data Graphs

Tracking #: 1779-2991

Marwan Al-Tawil
Vania Dimitrova
Dhaval Thakker

Responsible editor: 
Krzysztof Janowicz

Submission type: 
Full Paper

This paper investigates how to support a user’s exploration through a data graph in a way leading to expanding the user’s domain knowledge. To be effective, approaches to facilitate exploration of data graphs should take into account the utility from a user’s point of view. Our work focuses on knowledge utility – how useful exploration paths through a data graph are for expanding the user’s knowledge. We propose a new exploration support mechanism underpinned by the subsumption theory for meaningful learning, which postulates that new knowledge is grasped by starting from familiar entities in the data graph which serve as knowledge anchors from where links to new knowledge are made. A core algorithmic component for adopting the subsumption theory for generating exploration paths is the automatic identification of knowledge anchors in a data graph (KADG). Several metrics for identifying KADG and the corresponding algorithms for implementation have been developed and evaluated against human cognitive structures. A subsumption algorithm which utilises KADG for generating exploration paths for knowledge expansion is presented and applied in the context of a data browser in a music domain. The resultant exploration paths are evaluated in a controlled user study to examine whether they increase the users’ knowledge as compared to free exploration. The findings show that exploration paths using knowledge anchors and subsumption lead to significantly higher increase in the users’ conceptual knowledge. The approach can be adopted in applications providing data graph exploration to facilitate learning and sensemaking of layman users who are not fully familiar with the domain presented in the data graph.
Major Revision

Solicited Reviews:
Review #1
By Bo Yan submitted on 20/Feb/2018
Minor Revision
Review Comment:

This paper proposes to facilitate the exploration of linked data graphs by using knowledge anchors to create exploration paths along the subsumption relationships in the graph. The authors first provide a solid theoretical foundation for their approach from a human cognitive perspective, using knowledge utility and basic level object theories. They then review related work, pointing out potential issues and raising their research questions. They develop metrics, algorithms, and methods for identifying knowledge anchors, human basic level objects, and exploration paths. The experimental results show that their method significantly helps users explore data graphs.

Originality: The idea of using knowledge anchors and human basic level objects is original compared with similar works for linked data graph exploration. However, the methods and algorithms implemented in the paper are primarily derived from existing approaches.

Significance of result: the results obtained with their approach are significant.

Presentation: The paper is well structured and easy to read. There are still many grammar and spelling issues. I suggest the authors thoroughly proof-read the paper again and fix these issues.

The upside of this paper:
1. The idea is original. The authors conduct a comprehensive literature review of related work, including visualization, text-based semantic browsers, identifying key entities in graphs, etc. Their work seems to complement existing work from a more cognitive-oriented perspective.
2. They provide detailed definitions of terms and algorithms for their methods, as well as the context in which they apply basic level object theory and the subsumption theory for meaningful learning in this study.
3. The experiment evaluation is well designed, including the experimental conditions, participant selection and explanation, and the methodology.

The downside of this paper:
1. Although the idea is novel, the methods used are not original. Existing methods are adapted to fit the study, sometimes in a way that feels like merely inventing fancy new names for existing methods. I suggest the authors include more rigorous methods that more naturally combine the theory they propose with the existing algorithms they use.
2. A lot of heuristics are used, e.g. the hybridization of algorithms in Section 6.3.2. The algorithms used are rather simple, and a lot of high-level rule-based tricks are applied. I wonder how scalable and extensible the approach would be, since hierarchies and depths differ considerably across data graphs. The authors themselves point out these potential issues in Section 9.1.1.
3. In Section 6.3.2, the authors mention that the 3 homogeneity metrics have the same value. I wonder whether this means that the measures used are insufficient to capture the homogeneities; maybe more complex metrics could be used? In the same section, precision and recall are used in Table 3. Would you consider using their harmonic mean, namely the F-score? What about other metrics such as Mean Reciprocal Rank, Normalized Discounted Cumulative Gain, etc.?
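For concreteness, the alternative evaluation metrics suggested above could be computed along these lines (a minimal sketch; the function names and data are illustrative, not taken from the paper):

```python
import math

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """Average of 1/rank of the first relevant item, over all queries."""
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(ranking, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def ndcg(ranking, relevance, k=None):
    """Normalized Discounted Cumulative Gain at cutoff k (graded relevance)."""
    k = k or len(ranking)
    gains = [relevance.get(item, 0) for item in ranking[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

Unlike precision and recall alone, MRR and NDCG would also reward the algorithms for ranking the human-identified anchors near the top of the candidate list.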
4. In Section 7.2, the authors mention that they use only one semantic similarity metric. While the authors argue that they need a hierarchy-based semantic similarity metric, there are plenty of other methods besides the one used in the paper; for example, another edge-based approach was proposed by Leacock & Chodorow in [1]. In addition, information-content-based approaches sometimes also apply to hierarchical data, for example the models proposed by Lin [2] and Jiang & Conrath [3], where the information content can be calculated using the methods proposed by Sánchez et al. [4] and Seco et al. [5]. I suggest the authors explore more options when calculating semantic similarity. It would be interesting and important to see whether the results provided by different semantic similarity metrics agree with each other.
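As an illustration, an edge-based measure such as Leacock–Chodorow [1] can be computed on any hierarchy roughly as follows (a minimal sketch on a toy taxonomy; the entity names are illustrative, not from the paper's data):

```python
import math

# Toy single-parent hierarchy (child -> parent); illustrative only.
PARENT = {
    "guitar": "string_instrument",
    "violin": "string_instrument",
    "string_instrument": "instrument",
    "flute": "wind_instrument",
    "wind_instrument": "instrument",
    "instrument": None,
}

def path_to_root(node):
    """List of nodes from `node` up to the taxonomy root."""
    path = [node]
    while PARENT[node] is not None:
        node = PARENT[node]
        path.append(node)
    return path

def shortest_path_len(a, b):
    """Edge count of the shortest path between a and b via their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, ancestor in enumerate(pa):
        if ancestor in pb:
            return i + pb.index(ancestor)
    raise ValueError("no common ancestor")

def leacock_chodorow(a, b, depth):
    """Leacock-Chodorow similarity: -log(len / (2 * D)), D = taxonomy depth."""
    return -math.log(max(shortest_path_len(a, b), 1) / (2.0 * depth))
```

Lin's and Jiang–Conrath's measures would additionally require an information content function over the same hierarchy, which could be derived intrinsically from hyponym counts as in Seco et al. [5].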
5. Regarding the measurement of knowledge utility in Section 8.2, the questions in Q2 specifically ask about categorical information. However, the data exploration path is also designed to help users explore the category information, so I wonder whether this test is fair, since random exploration in the control group does not have to cover the category information. Is there a better way to design the questions and quantify the difference?

Typos and minor issues:
1. section 2.1.1, "their primarily focus on helping layman users...", should be "they primarily focus..."
2. section 2.2, "logs of keywords and Web pages previously entered visited...", should be "...entered and visited..."
3. section 5.2.1, "After tha, for an entity v, all members...", typo
4. section 8, "the task template in Table was designed...", missing table number
5. section 8.3.2, "For instance, on participant indicated his...", typo "one"
6. section 9.1.2, "The derived human BLO set is depends on what...", should be "...set depends on..."

There are many other typos and minor issues; I suggest the authors thoroughly go through the text again and fix them.

[1] Claudia Leacock and Martin Chodorow. 1998. Combining local context and WordNet similarity for word sense identification. WordNet: An electronic lexical database 49, 2 (1998), 265–283.
[2] Dekang Lin et al. 1998. An information-theoretic definition of similarity. In ICML, Vol. 98, 296–304.
[3] Jay J Jiang and David W Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. arXiv preprint cmp-lg/9709008 (1997).
[4] David Sánchez, Montserrat Batet, and David Isern. 2011. Ontology-based information content computation. Knowledge-Based Systems 24, 2 (2011), 297–303.
[5] Nuno Seco, Tony Veale, and Jer Hayes. 2004. An intrinsic information content metric for semantic similarity in WordNet. In Proceedings of the 16th European conference on artificial intelligence. IOS Press, 1089–1090.

Review #2
By Simon Scheider submitted on 12/Mar/2018
Major Revision
Review Comment:

In this article, the authors propose a graph exploration technique based on formalizing Rosch's basic level objects (BLO) and subsumption theory to automatically generate exploration paths that demonstrably increase the knowledge utility for participants, starting from so-called knowledge anchors. To this end, the authors performed two empirical studies to test their algorithmic approach: one on the identification of knowledge anchors in a hierarchical graph, and another measuring how exploration paths making use of these anchors increase participants' knowledge about music instruments. Both empirical studies were conducted as online surveys with randomized controls.

In general, I think this is a very valuable study (original and with significant results), operationalizing a famous cognitive science approach with algorithmic means, and furthermore quite extensively evaluated. The chosen approach made a lot of sense to me, and the provided arguments are (mostly) convincing. The main difficulty I have with this article is the way it is written up. The algorithmic parts in particular are often verbose and at the same time partially unclear or unnecessarily complicated to read (see below for examples). The text seems to have been written up in a rush, with many grammatical errors and a writing style that is sometimes difficult to follow. Finally, the text is way too long. It seems to be a summary of a PhD, and I recommend that the authors consider either seriously shortening the text and removing unnecessary detail (there is a lot of redundancy), or dividing the article into two. Why not cut the article along its two inherent challenges, one about identifying BLO, the other about evaluating exploration paths? In either case, however, the quality of the text needs to improve a lot to be acceptable.

Here is some more detailed feedback:

- Intro: Basic level entities and Rosch's ideas should already be explained in the introduction, to better motivate the research, perhaps with an example from the study. Also, it remains unclear why KADG and BLODG are different formal concepts if one is only used to evaluate the other. If BLO is an independent contribution, then it needs motivation; for example, one might say that measuring BLO in user tests is itself an operational challenge. If it is a contribution in terms of formalization, then it should be used for more than only the evaluation of another formal concept. Say more clearly to what extent this article differs from the preceding conference articles. It sometimes reads like a huge summary of those articles or of the PhD, while it should add genuine content to be valid as a self-standing journal article.

- Related work: This is a readable and very extensive overview. I was wondering how this discussion of knowledge utility links to the "fitness for use/purpose" literature, as the declared goal of this research is to increase the usability of graphs. If it does not, better distinguish beneficial knowledge exploration from fitness for use.

- Section 5: Here I see a number of severe problems. First, the theoretical distinction between distinctiveness dimensions and homogeneity dimensions remains fuzzy, not only in the text of 5.1 but also in the definitions of the formal concepts. In fact, the second factor of one of the distinctiveness measures (CAC) seems to be nothing other than a homogeneity measure: it measures the ratio of overlap of an attribute with categories, so the higher this value, the more categories share a common attribute. Second, the way formal concepts are defined in Section 5 needs considerable clarification. For one, the same symbols are used for different formal entities, which is annoying for a reader. For example, the symbol "e" sometimes stands for an edge label (an RDF property) and sometimes for an RDF subject (e.g. in Fig. 3). Furthermore, the presentation of FCA seems partially confused: B' is the set of objects having the attributes belonging to B (not A!), and similarly, A' is the set of attributes of the objects belonging to A (not B). Also, the formal concept in the example of Table 2 should include {r1, r3, r5}, not r2 as it does now. Finally, the chosen symbology reverses the order normally used in FCA, where A is a set of objects and B is a set of attributes (see e.g. the standard FCA literature). All this makes the paper unnecessarily hard to read and should be improved. In addition, it remains unclear whether FCA was actually used later in the study; if not, it should be removed to shorten the paper.
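For reference, the standard FCA derivation operators (with A a set of objects and B a set of attributes, in the conventional order) can be sketched as follows; the toy context is illustrative and not taken from the paper:

```python
# Tiny formal context: objects r1..r5 and their attributes (illustrative only).
OBJECTS = {"r1", "r2", "r3", "r4", "r5"}
ATTRS = {
    "r1": {"strings", "fretted"},
    "r2": {"strings"},
    "r3": {"strings", "fretted"},
    "r4": {"keys"},
    "r5": {"strings", "fretted"},
}

def up(A):
    """A' : the attributes shared by all objects in A."""
    sets = [ATTRS[o] for o in A]
    return set.intersection(*sets) if sets else {a for s in ATTRS.values() for a in s}

def down(B):
    """B' : the objects that have all attributes in B."""
    return {o for o in OBJECTS if B <= ATTRS[o]}

# (A, B) is a formal concept iff A' == B and B' == A.
A = down({"fretted"})  # the extent containing r1, r3, r5
B = up(A)              # the corresponding intent
```

Note that the closure of {"fretted"} yields the extent {r1, r3, r5}, mirroring the correction suggested above for the example in Table 2.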
It should furthermore be explained why the distinctiveness measures are simply summed up (without normalization), while normalization is applied in the case of homogeneity. Also, the intention behind CU and its range of possible values should be better explained with examples.

- Section 6: 6.1 is sometimes ungrammatical (e.g. "we need to constrain ... upon the data graph"). I was wondering whether participants used predefined labels or entered free text in 6.3; this should be clarified. It also remains unclear in 6.1 why the two strategies were chosen in the first place. The normalization in 6.3 needs to move to Section 5, since it is part of the method, not the evaluation (see above).

- In Section 7, an introductory scenario would help the reader to understand the task. Where, for example, does the first entity of an exploration path come from?

- Section 8: Table 5 is not properly referenced in the text. "Observed by the author" implies a single author, yet there are three. Figures 9 and 10 have largely illegible labels.

Review #3
Anonymous submitted on 16/Apr/2018
Major Revision
Review Comment:

The manuscript describes a method to support user exploration in the Music domain. The method presented is supported by well described theoretical foundations. Overall, the paper is well-written.

The first issue I see is a lack of description of the data used. The authors say they used DBpedia, linked to Jamendo and other datasets, but never provide statistics about the data they use. This is a rather important omission.

The related work is missing some important references, especially regarding sub-section 2.3.
The authors missed some founding works on Linked Data paths, such as [1].
The authors state "While several approaches address the problem of users’ exploration through data graphs, none of them aims at providing layman users with exploration paths to help such users to expand their domain knowledge [...]". Works such as [2], although the final goal might be different, do try to do the same.
Also, several other works on Linked Data exploratory search exist, such as [3,4].

Section 6 ("Evaluating KAdg against Human BLOdg") mixes parts of the related work with the motivation for adopting a specific evaluation method and with the evaluation method itself.
Besides, this section contains a sub-section (6.3) with the same name, which confused me a bit. This sub-section is not well composed either. First, the participants in the user study are described very briefly; normally, there should be a table reporting several demographics, e.g. gender and nationality, not only age and job. Second, the method part contains the data description, the strategies used to select the data for the surveys, and a partial description of the survey. Finally, the free-naming task section describes the survey in more detail, along with part of the data selection.
Sub-section 6.3.2 presents a quantitative analysis, but it is not supported by tables, and it is difficult to understand the results reported. It is also unclear how much data was collected through the user study: did every participant name every image? How many participants wrote the right answer? This information should be reported.
Hence, Section 6 needs a reorganization of its content to ease the reading of the paper. I understand this user study is less central to the paper, yet it informs important decisions for the design of the main experiment, so it should be properly described.

I see similar issues in the presentation of Section 8, although its structure is much improved.

Minor comment:
Figs. 11, 12, 13 and 14: when printed in black and white, it is difficult to distinguish between the two columns. The difference between them should be made more prominent.

[1] V. Presutti, L. Aroyo, A. Adamou, B. Schopman, A. Gangemi, and G. Schreiber. Extracting Core Knowledge from Linked Data. In COLD2011, 2011.
[2] Valentina Maccatrozzo, Manon Terstall, Lora Aroyo, Guus Schreiber. Serendipity In Recommendations via User Perceptions. IUI 2017: 35-44
[3] Marie, N., Gandon, F., Ribière, M., and Rodio, F. Discovery hub: On-the-fly linked data exploratory search. In Proceedings of the 9th International Conference on Semantic Systems, I-SEMANTICS ’13, ACM (New York, NY, USA, 2013), 17–24.
[4] Waitelonis, J., and Sack, H. Towards exploratory video search using linked data. Multimedia Tools and Applications 59, 2 (2012), 645–672.