User Interaction Patterns for Linked Data

Tracking #: 3430-4644

Mariana Aguiar
Sergio Nunes
Bruno Giesteira

Responsible editor: 
Guest Editors Interactive SW 2022

Submission type: 
Full Paper
Abstract:
Linked Data is often still perceived as data that will only be consumed by machines, and not by humans as well. As a result, Linked Data applications still often use more traditional visualisations that come with usability issues. However, alternative user interaction approaches have been developed and evaluated, many of which have proven to be better solutions for inexperienced users. One technique to formalise and document these user interaction techniques and best practices is in the form of patterns. While many pattern collections in the literature cover user interaction problems, there are no works targeted at user interaction problems for Linked Data interfaces. A survey conducted with Linked Data researchers and developers proved this need for user interaction patterns in the community, with over 90% of participants rating them as useful. Here, we propose a pattern collection of 20 novel user interaction patterns for Linked Data as a result of the abstraction of common problems and solutions for visualising, searching, browsing, and authoring Linked Data. The proposed patterns are the combined result of the most common problems reported by members of the community, the knowledge and experience gathered in 10 pattern mining interviews, and the recurrent approaches collected through a literature review of the solutions for user interaction with Linked Data. To validate the structure of the pattern collection, we followed the Pattern Classification method and, to evaluate the adoption and quality of each proposed pattern, we conducted a pattern adoption survey. From this evaluation, we obtained positive scores for 18 out of the 20 patterns. We believe that the proposed pattern collection can be a valid and helpful tool for developers, regardless of experience, to improve the user interfaces of their Linked Data applications.
Decision: Major Revision

Solicited Reviews:
Review #1
Anonymous submitted on 12/Jun/2023
Minor Revision
Review Comment:

The article is laudable in its attempt to address the long-running issue of user interface design guidelines for user interaction with linked data. It does so by putting forward the case for a set of user interaction patterns specifically for LD, describing the generation of the proposed user interaction patterns, and then describing the feedback received so far on them. I believe the research described and the proposed patterns will be of interest to the semantic web and linked data community.

However, a number of issues with the article need to be addressed in order to reassure the reader that the research has been conducted in a rigorous manner, such that the subsequent patterns generated and proposed are well founded. These will be tagged as ‘Rigour issue’ below. Also, at times the article is not convincing; these points will be tagged as ‘Convincing needed’.

Issues requiring careful consideration for revised article:
1. Convincing needed. The article seems to be focused on the issue of user interaction for ‘inexperienced’ users, but it is not convincing as to why this focus was chosen (why wouldn’t the interaction patterns be of relevance to ‘experienced’ users?). Indeed, there is no definition of ‘inexperience’ given or referenced, although several exist in the literature.

2. Rigour issue. The key first paragraph in the Introduction is full of assertions without references. For example: "Many of these less typical approaches have proven to be better solutions for inexperienced users regarding user interface usability." This is not just an issue with the first section; another example, in Section 2, is "Nowadays, the value of a piece of software is not entirely dependent on its quality, but also on how well it is interpreted and interacted with by users." Please revisit all assertions and clearly reference them or, where an assertion is based on the authors’ opinion, signal that explicitly.

3. Convincing needed. The article really struggles to convince the reader that there is a significant need for a set of user interaction patterns specific to Linked Data. The key assertion in the paragraph in Section 2 (starting ‘While some of the previously presented work…’) argues that Linked Data is a ‘complex data paradigm’ without reference or explanation as to why. Although this doubt (as to why a specific set of LD patterns is needed) is acknowledged to some extent by the authors throughout the article (it comes up as a user comment in the first survey, and is then actually reinforced as an issue in the second survey, where the patterns were presented to participants, with 50% of them seeming to agree that they did not see a need for LD-specific patterns), the authors do not seem to pull the concern through or acknowledge the issue in the documentation of the LD user interaction patterns themselves. For example, IMHO problems such as 1, 2, 5, 8, and 9 would seem to be generic to any data paradigm, not just LD. Would it not make sense to acknowledge that and empower developers to draw from a wider palette of user interaction patterns from other domains, rather than argue that there is a specific LD issue at play? Certainly you could provide advice as to how to solve these in an LD application, but these are not uniquely issues due to any ‘complexity’ of the LD data paradigm. Or, if they are in fact due to the complexity of the LD data paradigm, the authors do not convince the reader of this point. Maybe a way to address this would be to tag, in the tables describing the patterns, those which uniquely need attention due to the LD data paradigm, and those which are general user interaction issues but for which suggestions/guidelines are given on how to solve them in an LD application?

4. Rigour issue. At the end of Section 2, reference is made to existing work on Linked Data Patterns which seems to also address ‘consuming Linked Data’. Given its focus, I would have expected a more exhaustive description of what was proposed by that work (if only as an opportunity to expose what is so complex about the LD data paradigm), and of why the ‘consuming’ part of that work could not simply be extended to human interaction patterns (given it already focused mainly on application patterns). Surely, at the very least, there must also have been something that could be learnt from the interaction patterns of the applications that might inform the design of the human user interaction patterns?

5. Rigour issue. Start of Section 3. It is not defined what criteria were used to decide which surveys were relevant to the study and which were not. Also, the headings in Table 1 are not described or discussed, leading the reader to guess at what, for example, ‘related patterns’ means.

6. Convincing issue. Section 3 does itself no favours by primarily being a laundry list of descriptions of articles, with little additional insight, observation, or critique by the authors of the journal article. Table 1, referenced above, is an example of this: more descriptive than in any way insightful.

7. Rigour and Convincing issue. In general, Section 5 does a good job of describing in a concise manner the activities undertaken in ‘pattern mining’. However, the lack of detail in places again makes it difficult for the reader to be convinced/reassured. A major issue is that the article (from what I could see, so apologies if I have missed it) is not accompanied by a set of resources that would help the reader really appreciate what exactly was asked of participants in the surveys, how the topic was introduced to them, how it was pitched, etc. Seeing the materials would help reassure the reader on such things as: whether the pitch of the introduction was appropriate, and whether participants could have misinterpreted the task asked of them. This is especially important when it comes to how participants self-declared their ‘experience’. How was this defined for them? Did they have examples of what the categories meant? What was stated in terms of what it meant to study LD, as opposed to use LD, as opposed to develop LD?

8. Issues in Section 5.2:
- It is not clear why survey 1 was designed to be answered in ‘an average of total time of 5 minutes’.
- It is not clear why ‘studying’ was considered a good category of participant to include; surely the focus should have been on ‘using’ and ‘developing’?
- It is not clear why 3 of the open question answers from survey 1 were discarded. It would also have been helpful to let the reader see the problems raised (the raw response data from that open question, in some form of table or as an appendix/resource), as at the moment the reader only gets to see the authors’ perspective/categorisation/interpretation of the problems.
- There is also discussion about feedback gained on participants’ desire for guidelines and user interaction patterns, but because the reader cannot see the materials presented to participants, it is unclear how these terms were presented to them (or even whether they were defined for them), making it a possible concern when it comes to drawing conclusions from the feedback.

9. Rigour and Convincing issue. Section 5.3. The argument is made that "Due to the time restrictions, difficulty in finding experienced people on the field available to participate, and lack of deep knowledge and strong experience on the domain by the author, we decided to conduct expert interviews as a method of pattern mining." This does not reassure the reader that this was an appropriate choice for conducting the research. It is not clear why 8 of those who volunteered to be involved were not chosen (10 were selected). It is also not clear whether the identification of problems and the thematic analysis were undertaken by just one of the authors and validated by the others.

10. Rigour issue. Section 6. It is not clear what alterations were made to the general pattern structure and, even more importantly, why. Cf. "We made some alterations to this proposed pattern structure to better fit the needs of the user interaction pattern".

11. Convincing issue. Section 6. As stated above, I believe the reader would be more reassured if the 20 patterns were tagged such that it was clearer which patterns exist specifically due to the complexity of the LD paradigm, as opposed to those that are general user interaction problems in other domains but are included here to give specific advice on applying them to an LD application.

Section 7. Generally well structured and argued. However, I feel it would be better called ‘evaluation’ rather than ‘validation’, as it essentially describes a set of evaluation activities that the authors undertook to examine the proposed patterns from different perspectives. True validation, IMHO, will be seen when one or more developers use the patterns to develop a real LD application and report back on their findings.

12. Convincing and Rigour issue. Section 7. Again, it is not clear what definition was provided to participants so they could self-declare their experience. Unlike in the first survey, there does not seem to be a breakdown of whether people declared themselves as studying, developing, or using. 20 responses is a very small sample size. Given the importance of the survey (essentially asking participants to evaluate the proposed patterns), I am surprised that it was only open for such a short time.

Minor issues (easy fixes)
- Avoid the use of ‘isn’t’, ‘don’t’, etc. That style is very conversational in nature and not appropriate for a journal article; always expand properly to ‘is not’, ‘does not’, etc.

- The sentence beginning "Borchers also proposes…" has missing words at the end of the sentence.

- The sentence "van Welie and Traetteberg defend that a user interaction design pattern must improve…" should probably use the word ‘argue’ rather than ‘defend’.

- "The results obtained for these two questions are presented in Figure 7.6": this figure does not exist.

- Throughout, there are issues with words getting hyphenated for no good reason, e.g. ‘in-terviews’ and ‘re-searchers’.

- ‘for the users to loose’ should be ‘lose’.

Review #2
By Roberto García submitted on 18/Jun/2023
Major Revision
Review Comment:

The work described by this paper tackles a very interesting problem, which I would like to summarize as trying to determine whether there is a particular way to interact with the Semantic Web and Linked Data, especially when considering lay users, or simply to make interaction easier for more experienced users than having to deal with sophisticated tools and languages like SPARQL.

The paper follows a very interesting approach to this problem: building a collection of the patterns used to interact with Linked Data. It was constructed starting from surveys with Linked Data researchers and developers, which helped identify common problems and solutions for visualizing, searching, browsing, and authoring Linked Data.

At this point, I would like to note that it is not clear why this narrow and quite generic set of end-user tasks was selected. My impression is that this part would require further elaboration to later help better structure the set of proposed patterns, like in Figure 2. Personally, I have participated in reviews of that problem where we analyzed multiple existing proposals of Semantic Web end-user tasks and tried to consolidate them. There might be more recent studies on that topic, but I might recommend it as a starting point in any case:

Alfons Palacios, Roberto García, Marta Oliva, and Toni Granollers. 2014. Semantic Web End-User Tasks. In Proceedings of the XV International Conference on Human-Computer Interaction (Interacción '14). Association for Computing Machinery, New York, NY, USA, Article 46, 1–4.

Continuing with the pattern collection construction process, from the identified common problems and solutions, 10 pattern mining interviews were conducted to consolidate the pattern collection addressing them. At this point, the paper presents the Pattern Collection in Section 6. This is on page 12 and just takes one page, including a figure. It is true that there are tables at the end of the paper listing the identified patterns for the different tasks, but this approach lacks the detail I was expecting from this part.

More importantly, the patterns are just described, without examples illustrating them, and no discussion is made regarding their alignment with existing pattern collections for other domains like desktop or web applications, which are in fact listed in Table 1. This was one of the results I was expecting, do the Linked Data patterns deviate from those already identified in the literature for other kinds of applications?

Considering that this paper is mostly based on Mariana Barbosa Aguiar’s MSc Thesis, I would recommend largely summarizing Section 5 and making references to the MSc document, while focusing the paper on the part that might be more relevant to readers: the pattern collection. The patterns can then be presented in much more detail, with examples illustrating them, as is done in the MSc Thesis. To make the paper more original, I would suggest focusing the illustrative examples on Wikidata and using that exercise as a way of validating the proposed collection, checking that all or most of the patterns can be identified through Wikidata’s user interface.

The paper should then end with a much more elaborate conclusions section (right now it mostly replicates the text of the introduction) and go through an analysis of the proposed collection, comparing it with existing pattern collections in other application domains.

Finally, I would like to note that no supplementary data has been provided to facilitate the reproducibility of the presented results, which, following the journal recommendations, should be provided by the authors under “Long-term stable URL for resources”. I would recommend sharing the materials and data used to conduct the research, some of which are available as annexes in the PDF document for the MSc Thesis with the same title, and making them available using a persistent URI, like a GitHub repository. Following this approach, it would also be possible to track the evolution of the shared materials and data thanks to change tracking.

Review #3
Anonymous submitted on 13/Jul/2023
Major Revision
Review Comment:

This work follows an HCI methodology to devise patterns for users' interaction with linked data. The manuscript lists prominent HCI methodological works and Linked Data interaction surveys, and then performs three steps: pattern mining, evaluation of the patterns with users, and evaluation of the patterns' applicability to Wikidata.

This paper takes a refreshing starting point of using HCI methodology to do interaction with LD better. I especially appreciated the analysis of the different patterns with expert users and the qualitative analysis of their opinions. IMHO, works like this paper can help to align the practices in Semantic Web with other computer science fields, and facilitate adoption.

Unfortunately, I believe that the manuscript requires significant improvements to be publishable in the SWJ venue and this special issue in particular.

1. Soundness - the work is motivated by developing methods to facilitate inexperienced users' interaction with LD, but the methodology is never evaluated with such users. Instead, the methodology is devised based on input from expert users and evaluated with a subset of those users. While user studies with expert users are not a bad thing, the motivation of this work is to provide human-readable interfaces for inexperienced users, who were not the study group. In fact, I am surprised that the authors claim that this is what they hoped for, as it seems to me contradictory to the paper's overall goal. I was surprised to see that such user studies were not even discussed in the future work part of the paper.

Another aspect of soundness relates to the aspect of interaction, which is never clearly defined. Most of the tasks considered in this work relate to exploring LD via graphical interfaces; however, one can also interact with LD programmatically or via machine learning algorithms. Thus, it seems that the paper assumes a certain definition of interaction (which, again, may be fine, but needs to be explicitly described and motivated).

2. Clarity - the paper has some super interesting aspects, but these are lost in a sea of text. The introduction is relatively short, lacks citations (many unsupported claims, like the rising interest in LD and "many of these less typical approaches have proven to be better solutions for inexperienced users..."), and does not provide enough guidance to the reader to set their expectations for the rest of the paper. The 'methodology' is only introduced in one short paragraph in Section 4, which will hardly be enough for the SW audience that is likely not too familiar with this method (having a figure and motivating the steps in depth would really help). And then, each of the following sections reads a little arbitrarily, because the expectations were not set clearly before. In addition, the presentation of the results often left me hoping for more information (e.g., when Section 5.1 told me that graph-based visualizations were most common, without any evidence/figure to confirm this).

Sections 2 and 3 could be super useful to motivate the work and provide background on the gap between HCI and LD practices. Instead, after reading Section 2, the reader still has no good intuition about the common HCI methodologies; in Section 3, I wish the authors would show how LD practices have gaps that would be filled by the methodologies in Section 2. I found Sections 2 and 3 to be pretty long-winded, without a clear structure or narrative. The authors do try to make a connection at the end of Section 2, yet there is no clear description of why LD data is special and cannot be addressed by existing HCI methodologies (in fact, the rest of the paper seems to show that it can).

3. Motivation - the paper leaves the reader with many unanswered why-s. Why was this methodology chosen to derive patterns? Why not others? Why were the users experts in LD? What guided the design of the questions? Why are graph-based visualizations considered 'traditional' and other interaction modalities less so? How come certain patterns have only solutions with no associated problems?

In summary, I find this work to be novel and really promising, but refining the writing to provide guidance to the reader, explaining the motivations behind the experimental choices, and providing more crisp detail about the overall methodology is needed to make the paper more beneficial to its audience.

* The figures and tables appear after the references, which hurts readability - please move them next to where they are introduced in text.
* I suggest that the authors include related work that tries to synthesize SW workflows, such as [1,2] or analyze the gap between social and technical SW perspectives [3].

[1] Tamašauskaitė, G., & Groth, P. (2023). Defining a knowledge graph development process through a systematic review. ACM Transactions on Software Engineering and Methodology, 32(1), 1–40.
[2] Ilievski, F., et al. (2020). KGTK: A toolkit for large knowledge graph manipulation and analysis. In The Semantic Web – ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part II. Springer International Publishing.
[3] Hogan, A. (2020). The semantic web: Two decades on. Semantic Web, 11(1), 169–185.