Review Comment:
This paper describes a mobile application that, based on geolocation, teaches users about cultural heritage sites (including historical buildings) by having them perform learning tasks. Using geofencing, learners are presented with various tasks, including documenting something in the real world.
The contributions of this paper are:
* A combination of cultural heritage and contextual learning (learning on the spot)
* A concrete mobile infrastructure integrating various technologies
* Concrete use cases validated with end-users
The main points for minor revision are:
* It is unclear how tasks are created: on the one hand, this seems to be done automatically by combining several sources; on the other hand, it seems to be a manual task. Since the text refers to an article that is still under review, it is unclear how this is accomplished.
* The task and site recommendation (and ranking) are handled by a commercial service. This means it is unclear how requirement R3 is satisfied. Also, it is not motivated why an in-house or open-source implementation was not used. Furthermore, the recommender seems to take preferences into account, but it is unclear how this works.
* Provide more evidence about the quality of the system. For example, how did the teachers validate the pedagogical value of the system, and what was the outcome?
Detailed comments:
Consider adding the word "about" in the title after Learning.
Also, the term "semantic" in the title is confusing: which part of the application is (about) semantic(s)?
According to [13], learning in a ubiquitous learning environment is conducted through interactions among three essential communicative elements: the social human, objects in the real world, and artifacts in virtual space. Can you elaborate on how these essential elements are embedded in Casual Learn?
Can you elaborate on the semantics of the term "learning"? You describe ubiquitous learning, informal learning, contextualized learning, authentic learning, and geolocalized learning. Also, clarify the relation between learning tasks and learning activities.
Section 1 describes a task dataset published as Linked Open Data. Section 2 describes how this task dataset was retrieved and integrated from various sources. Were the (4) teachers involved in this process? Also, how can the dataset be maintained?
Later in Section 2 you describe the process to create tasks and refer to [19], which is still under review, so it is unclear how this mechanism works.
Did the task generation take into account the level of secondary-school students? In Section 5 you mention that all teachers designed learning activities. Consider describing this process. Is there a task-editing system?
In Section 1, consider mentioning the related (geo) systems from Section 6, where you claim that there are few visualization tools.
Section 2 describes the notion of "answerType". Is there a mechanism by which teachers can inspect the answers and, e.g., give a rating? Also, is there a multiple-choice type?
Section 2: why are points (lat, lon) used as georeferences? Would polygons not give a richer experience, for example to refer to a large object, a street, a neighbourhood, or an area?
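To illustrate the suggestion: a polygon georeference would let a task cover a whole square or street rather than a single point. A minimal sketch (all coordinates and names are made up for illustration) using the standard ray-casting point-in-polygon test:

```python
def point_in_polygon(lat, lon, polygon):
    """Return True if (lat, lon) lies inside the polygon.

    polygon: list of (lat, lon) vertices in order (closed implicitly).
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does the ray cast from the point cross this polygon edge?
        if (lon1 > lon) != (lon2 > lon):
            intersect_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < intersect_lat:
                inside = not inside
    return inside

# A rough quadrilateral around a (hypothetical) city square:
square = [(41.65, -4.73), (41.65, -4.72), (41.64, -4.72), (41.64, -4.73)]
print(point_in_polygon(41.645, -4.725, square))  # True (inside)
print(point_in_polygon(41.70, -4.80, square))    # False (outside)
```

Such a check could replace a fixed-radius test around a point when a site's extent is irregular.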
Section 2: why is the georeference attached to the context and not also to the tasks?
Section 3 lists the requirements for the system. R3 demands that the learners' preferences be taken into account. What kinds of preferences are there? In Section 4.1 it is mentioned that preferences should be taken into account. Consider elaborating on this mechanism.
Section 4.1 contains the first mention of user ratings, which seem to be important in the recommendation process. Can you elaborate on user preferences, user ratings, and recommendation?
Section 4.2: consider adding the notion of geofencing.
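For concreteness, a circular geofence around a site's georeference is typically a haversine-distance check. A minimal sketch (coordinates and radius are illustrative, not taken from the paper):

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(user, site, radius_m=100.0):
    """True when the user is within radius_m of the site's georeference."""
    return haversine_m(user[0], user[1], site[0], site[1]) <= radius_m

site = (41.6523, -4.7245)  # hypothetical heritage site
print(inside_geofence((41.6524, -4.7246), site))  # True: a few metres away
print(inside_geofence((41.6600, -4.7000), site))  # False: roughly 2 km away
```

On mobile platforms this check is usually delegated to the OS geofencing API, which handles battery-efficient monitoring.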
Section 4.3 mentions that the source code is open source, but Recombee (the recommender server) is a commercial, closed-source service. So an important part of the system is delegated to a commercial party. Consider describing how the actual recommender works and listing possible open-source alternatives.
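As a point of reference for what an open alternative might look like (this is a toy illustration, not Recombee's actual algorithm, and all task data below is invented): a simple content-based ranker could combine topic overlap with a learner's stated preferences and the task's average user rating.

```python
# Toy content-based recommender: score = topic overlap * average rating.
# All names, topics, and ratings are hypothetical.

def score(task, preferences):
    topic_match = len(set(task["topics"]) & set(preferences)) / max(len(task["topics"]), 1)
    return topic_match * task.get("avg_rating", 3.0)

def recommend(tasks, preferences, k=2):
    """Return the k highest-scoring tasks for the given preferences."""
    return sorted(tasks, key=lambda t: score(t, preferences), reverse=True)[:k]

tasks = [
    {"id": "t1", "topics": ["gothic", "architecture"], "avg_rating": 4.5},
    {"id": "t2", "topics": ["sculpture"], "avg_rating": 5.0},
    {"id": "t3", "topics": ["architecture", "baroque"], "avg_rating": 4.0},
]
top = recommend(tasks, preferences=["architecture", "gothic"])
print([t["id"] for t in top])  # ['t1', 't3']
```

Describing the deployed recommender at even this level of detail would make it possible to judge whether requirement R3 is met.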
Section 5 mentions that teachers assess the results of the experience. Do you mean that teachers had access to accomplished tasks? According to Section 4.1, images and videos are not stored in the answer database.
Section 5 describes that, based on a usability study, improvements were included in this version of the system. So on what version was the usability study based? Furthermore, can you specify what the issues and improvements were?
Consider quantifying a typical Casual Learn session: e.g., how long does a session take, and how many tasks need to be carried out?
Section 5: congratulations on the prizes and attention. Consider explaining the reasons behind these recognitions.
Section 5: can you elaborate on the interviews with the users of the system?
Section 6: consider splitting the discussion section into related work and a discussion of the (potential) capabilities, limitations of the tool, and future work.
Section 6: did you consider recommending more information about a cultural heritage site by presenting various links (linked data) to other related sites?
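Since the task dataset is published as Linked Open Data, related sites could be discovered by querying a public SPARQL endpoint such as DBpedia. A sketch that only builds the query string (no network call; the resource URI follows DBpedia's naming, but the integration itself is hypothetical):

```python
# Hypothetical helper: build a SPARQL query that asks DBpedia for buildings
# linked from a given heritage-site resource. dbo: and rdf: are prefixes
# predefined on the DBpedia endpoint.

def related_sites_query(resource_uri, limit=5):
    return f"""
SELECT DISTINCT ?related WHERE {{
  <{resource_uri}> dbo:wikiPageWikiLink ?related .
  ?related rdf:type dbo:Building .
}} LIMIT {limit}
""".strip()

q = related_sites_query("http://dbpedia.org/resource/Valladolid_Cathedral")
print(q)
```

Presenting the results of such a query as "see also" links would exploit the linked-data nature of the dataset beyond task retrieval.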
The last sentence reads: "Casual Learn can take this information into account to recommend tasks according to the learners' personal interests." Does this mean that Casual Learn can already do this, or is this future work?