Casual Learn: A Semantic Mobile Application for Learning Local Cultural Heritage

Tracking #: 2758-3972

Adolfo Ruiz-Calleja
Pablo García-Zarza
Guillermo Vega-Gorgojo
Miguel L. Bote-Lorenzo
Eduardo Gómez-Sánchez
Juan I. Asensio-Pérez
Sergio Serrano-Iglesias
Alejandra Martínez-Monés

Responsible editor: 
Special Issue Cultural Heritage 2021

Submission type: 
Tool/System Report
This paper presents Casual Learn, an application that proposes ubiquitous learning tasks about Cultural Heritage. Casual Learn exploits a dataset of 10,000 contextualized learning tasks that were semi-automatically generated from open data on the Web. Casual Learn offers these tasks to learners according to their physical location. For example, it may suggest describing the characteristics of the Gothic style when passing by a Gothic cathedral. Additionally, Casual Learn has an interactive mode where learners can geo-search the available tasks. Casual Learn has been successfully used to support three pilot studies in two secondary-school institutions. It has also received awards from the regional government and from an international research conference. As a result, Casual Learn has appeared in several regional newspapers, radio stations, and TV channels.
Minor Revision

Solicited Reviews:
Review #1
Anonymous submitted on 12/Apr/2021
Minor Revision
Review Comment:

This paper describes a (mobile) application that, based on geolocation, teaches users about cultural heritage sites (including historical buildings) by having them perform learning tasks. Using geofencing, learners are presented with various tasks, including documenting something from the real world.

The contributions of this paper are:
* Combination of Cultural heritage and contextual learning (learning on the spot)
* Concrete mobile infrastructure including various technologies
* Concrete (validated) use cases with end-users

The main points for minor revision are:
* It is unclear how tasks are created: on the one hand, this seems to be done automatically by combining several sources; on the other hand, it seems to be a manual task. Since the paper refers to an article that is still under review, it is unclear how this is accomplished.
* The task and site recommendation (and ranking) are handled by a commercial service. This means it is unclear how requirement R3 is satisfied. It is also not motivated why an in-house or open-source implementation was not used. Furthermore, the recommender seems to take preferences into account, but it is unclear how that works.
* Provide more evidence about the quality of the system. For example, how did the teachers validate the pedagogical interest of the system, and what was the outcome?

Detailed comments:

Consider adding the word "about" in the title after Learning.
Also, the term "semantic" in the title is confusing: which part of the application is (about) semantic(s)?

According to [13]: "Learning in ubiquitous learning environment is conducted in the interactions among three essential communicative elements: social human, object in real world, and artifact in virtual space." Can you elaborate on how these essential elements are embedded in Casual Learn?

Can you elaborate on the semantics of the term "learning"? You describe ubiquitous learning, informal learning, contextualized learning, authentic learning, and geolocalized learning.
Also, please clarify the relation between learning tasks and learning activities.

In 1. you describe a task dataset published as Linked Open Data. In 2. you describe how this task dataset was retrieved and integrated from various sources. Were the (4) teachers involved in this process? Also, how can the dataset be maintained?
Later in 2. you describe the process to create tasks and refer to [19], which is still under review, so it is unclear how this mechanism works.

Did the task generation take into account the level of secondary-school students? In section 5 you mention that all teachers designed learning activities. Consider describing this process. Is there a task-editing system?

In 1. consider mentioning the related (geo) systems from section 6, where you claim that there are few visualization tools.

2. describes the notion of "answerType". Is there a mechanism where teachers can inspect the answers and, e.g., give a rating? Also, is there a multiple-choice type?

2. why are points (lat, lon) used as georeferences? Would polygons not give a richer experience, for example to refer to a large object, a street, a neighbourhood, or an area?
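To illustrate the distinction this comment raises, the following sketch contrasts a point-radius georeference with a polygon one. All coordinates, radii, and function names are invented for illustration; they do not come from the Casual Learn dataset.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_point_fence(user, site, radius_m=100.0):
    """Point georeference: trigger when the user is within radius_m of the site."""
    return haversine_m(user[0], user[1], site[0], site[1]) <= radius_m

def in_polygon_fence(user, polygon):
    """Polygon georeference: ray-casting point-in-polygon test, treating
    lat/lon as planar, which is acceptable at building or street scale."""
    lat, lon = user
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        # Does the edge (i, j) straddle the user's latitude?
        if (lat_i > lat) != (lat_j > lat):
            lon_cross = lon_i + (lat - lat_i) * (lon_j - lon_i) / (lat_j - lat_i)
            if lon < lon_cross:
                inside = not inside
        j = i
    return inside
```

A point-radius fence treats a long street or a large monument as a single circle, whereas a polygon can follow its actual outline, which is the richer experience the reviewer is asking about.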

2. why is the georeference attached to the context and not also to the tasks?

3. lists the requirements for the system. R3 demands that the learners' preferences are taken into account. What kind of preferences are there? In 4.1 it is mentioned that preferences should be taken into account. Consider elaborating on this mechanism.

4.1 contains the first mention of user ratings. These seem to be important in the recommendation process. Can you elaborate on user preferences, user ratings, and recommendation?

4.2. Consider adding the notion of geo-fence.
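As a minimal sketch of the geo-fence notion the reviewer suggests making explicit: a task is offered when the learner enters a circular fence around a site and is not re-offered until they leave it. The class name, thresholds, and hysteresis band are invented for illustration, not taken from the paper.

```python
class GeoFence:
    """Circular geo-fence with enter/exit events and hysteresis."""

    def __init__(self, trigger_m=100.0, release_m=150.0):
        # release_m > trigger_m adds hysteresis, so GPS jitter near the
        # boundary does not fire repeated enter/exit events
        self.trigger_m = trigger_m
        self.release_m = release_m
        self.inside = False

    def update(self, distance_m):
        """Feed the current distance to the fence centre;
        return 'enter', 'exit', or None."""
        if not self.inside and distance_m <= self.trigger_m:
            self.inside = True
            return "enter"
        if self.inside and distance_m > self.release_m:
            self.inside = False
            return "exit"
        return None
```

An 'enter' event would be the natural point at which to notify the learner of nearby tasks.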

4.3 mentions that the source code is open source, but Recombee (the recommender server) is a commercial, closed-source service, so an important part of the system is delegated to a commercial party. Consider describing how the actual recommender works and listing possible open-source alternatives.

5. mentions that teachers assess the results of the experience. Do you mean that teachers had access to the accomplished tasks? According to 4.1, images and videos are not stored in the answer database.

5. describes that, based on a usability study, improvements were included in this version of the system. On what version was the usability study based? Furthermore, can you specify what the issues/improvements are?

Consider quantifying a typical Casual Learn session, e.g., how long does a session take and how many tasks need to be carried out?

5. Congratulations on the prizes and attention. Consider adding the reasons behind these awards.

5. Can you elaborate on the interviews with the users of the system?

6. Consider splitting the discussion section into related work and a discussion of the (potential) capabilities and limitations of the tool and future work.

6. Did you consider recommending more information about a cultural heritage site by presenting various links (linked data) to other related sites?

The last sentence reads: "Casual Learn can take this information into account to recommend tasks according to the learners' personal interests." Does this mean Casual Learn can already do this, or is this future work?

Review #2
Anonymous submitted on 07/May/2021
Minor Revision
Review Comment:

This paper describes the Casual Learn system that uses open data to generate tasks related to cultural heritage sites that can be found and carried out using a mobile app. The system uses templates to generate tasks for a retrieved set of contexts.

The system described is convincing and is clearly having an impact. A number of awards for the system are mentioned. The app is available on Google Play and the authors report that it has been downloaded by 374 users. It would be useful if the authors could provide more information on the take-up of the app. For example, how many tasks have been completed using the app, and in how many contexts has the app been used? The authors also mention that the students completed a survey, but no real detail is given as to the number of survey responses, the questions on the survey, or descriptive statistics of the responses. This data could be used to support the higher-level claims made in the paper.

The system is clearly described in the paper. The first part of the Discussion section describes related work, while the second part is more typical of a conclusions section. It would be preferable to separate related work into its own section and provide a more conventional discussion/conclusions.

The app is publicly available and all of the code is available on Github as described in the paper.

Review #3
By Enrico Daga submitted on 16/Jul/2021
Minor Revision
Review Comment:

Casual Learn is a mobile application that allows users to perform learning tasks related to cultural heritage sites. It is developed on top of a Semantic Web stack of technologies and it has been tested and used by teachers and students in the Spanish region of Castile and Leon.
The article is well written and gives a detailed account of the semantic technologies used, although it does follow a pretty standard architecture. It would be useful to have an extended discussion of the architecture layout, also considering the management of identity and user-generated content (in relation to the GDPR). Are users able to edit their contributions? Are they rewarded for their achievements?

This manuscript was submitted as 'Tools and Systems Report'.

(1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided).
The quality of the article and the description of the tool is good, the system is live and, although its usability is limited to the target region, it provides convincing evidence of adoption.

(2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.
The description is clear and readable, although an analysis of the limitations of the tool and possible improvements should be included in the revised submission.

(3) Assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data
The data is exposed via a SPARQL endpoint and the application is open-sourced on GitHub. The authors may link the repository to Zenodo to provide long-term availability and a DOI.
The authors should also provide a public archive of the data (if it is public; if not, justify why), for example on GitHub/Zenodo.
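For readers unfamiliar with how such a SPARQL endpoint is consumed, the following sketch builds an HTTP GET request for a task query. The endpoint URL and the vocabulary terms (casual:Task, casual:description, casual:hasContext) are invented placeholders; the real Casual Learn endpoint and ontology may differ.

```python
from urllib.parse import urlencode

ENDPOINT = "https://example.org/casuallearn/sparql"  # placeholder URL

QUERY = """
PREFIX casual: <https://example.org/casuallearn/ontology#>
SELECT ?task ?description WHERE {
  ?task a casual:Task ;
        casual:description ?description ;
        casual:hasContext ?context .
} LIMIT 10
"""

def build_request_url(endpoint, query):
    """Return the GET URL for a SPARQL query, asking for JSON results."""
    params = urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return f"{endpoint}?{params}"

# The resulting URL can then be fetched, e.g. with urllib.request.urlopen(...)
```

Archiving the dataset on Zenodo, as suggested above, would let such queries be reproduced even if the live endpoint goes offline.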