Video annotation and analytics in CourseMapper

Abstract

Over the past few years, there has been increasing interest in investigating the potential of Video-Based Learning (VBL), driven by new forms of online education such as flipped classrooms and Massive Open Online Courses (MOOCs) that aim to engage learners in a self-organized and networked learning experience. However, current VBL approaches suffer from several limitations. These include the focus on the traditional teacher-centered model, the lack of human interaction, the lack of interactivity around the video content, the lack of personalization, and limited assessment and feedback. In this paper, we investigate the effective design of VBL environments and present the design, implementation, and evaluation details of CourseMapper, a mind map-based collaborative video annotation and analytics platform that enables learners' collaboration and interaction around a video lecture. Thereby, we focus on the application of learning analytics mainly from a learner perspective to support self-organized and networked learning through personalization of the learning environment, monitoring of the learning process, awareness, self-reflection, motivation, and feedback.

Introduction

There is wide agreement among Technology-Enhanced Learning (TEL) researchers that Video-Based Learning (VBL) represents an effective learning method that can replace or enhance traditional classroom-based and teacher-led learning approaches (Yousef et al. 2014a). Using videos can lead to better learning outcomes (Zhang et al. 2006). Videos can help students by visualizing how something works (Colasante 2011a) and by showing information and details that are difficult to explain with text or static photos (Sherin and van Es 2009). In addition, videos can attract students' attention, thereby motivating and engaging them and increasing their collaboration.

In the past few years, the proliferation of new open VBL models, such as flipped classrooms and Massive Open Online Courses (MOOCs), has changed the TEL landscape by providing more opportunities for learners than ever before. The flipped classroom is an instance of the VBL model that enables teachers and learners to spend more time discussing only difficulties, problems, and practical aspects of the course (Montazemi 2006; Tucker 2012). In flipped classrooms, learners watch video lectures as homework. The class is then an active learning session where the teacher uses case studies, labs, games, simulations, or experiments to discuss the concepts presented in the video lecture (Calandra et al. 2006). MOOCs present another emerging branch of VBL that is gaining interest in the TEL community. MOOCs are courses aiming at large-scale interactions among participants around the globe regardless of their location, age, income, ideology, and level of education, without any entry requirements or course fees (Yousef et al. 2014b). MOOCs can be roughly classified into two groups. On the one hand, there are xMOOCs (extension MOOCs). Although they have gained a lot of attention, they can be seen as a replication of traditional learning management systems (LMS) at a larger scale. They remain closed, centralized, structured, and teacher-centered courses that emphasize video lectures and assignments. In xMOOCs, all available services are predetermined and offered within the platform itself. On the other hand, there is the contrasting idea of cMOOCs (connectivist MOOCs), which combine MOOCs with the concept of the Personal Learning Environment (PLE). In contrast to xMOOCs, cMOOCs are open-ended, distributed, networked, and learner-directed learning environments where the learning services are not predetermined, and most activities take place outside the platform (Chatti et al. 2014; Daniel 2012; Siemens 2013).

Despite their popularity, current VBL approaches (such as flipped classrooms and MOOCs) suffer from several limitations. In this paper, we highlight some limitations and discuss challenges that have to be addressed to ensure an effective VBL experience. In light of these challenges, we present the design, implementation, and evaluation details of the collaborative video annotation and analytics platform CourseMapper.

VBL limitations and challenges

Flipped classrooms and MOOCs have unique features that make them effective TEL approaches and offer a new perspective on VBL. The flipped classroom model has been successfully applied in the higher education context. The flipped classroom approach offers a range of advantages for learners, including student-centered learning, scaffolding, and flexibility (Yousef et al. 2014a). The flipped classroom model, however, suffers from several limitations. These include:

  • Class structure: Most of the studies that examined flipped classrooms report that learners do not clearly understand the separation between in-class and out-of-class activities.

  • Lack of motivation: Learners with low motivation do not pay full attention to out-of-class activities, such as watching videos, reading materials, or completing assignments at home (Wallace 2013).

  • Assessment and feedback: The flipped classroom model emphasizes the role of problem-based learning and project-based learning. This requires creative assessment methods beyond traditional multiple-choice examinations in order to effectively gauge learners' performance in both individual tasks and group projects (Bishop and Verleger 2013; Wilson 2013).

Much has been written about MOOCs concerning their design, effectiveness, case studies, and their ability to provide opportunities for exploring new pedagogical strategies and business models in higher education. Despite their popularity and large-scale participation, a variety of concerns and criticisms regarding the use of MOOCs have been raised. These include:

  • Lack of human interaction: The problem is that participants are effectively cut off from face-to-face interaction during the learning process in MOOCs (Schulmeister 2014). Thus, there is a need for solutions to foster interaction and communication between MOOC participants by bringing together face-to-face interactions and online learning activities.

  • Lack of interactivity around the video content: Video lectures are the primary learning resources used in MOOCs. However, one of the most crucial issues with current MOOCs is the lack of interactivity between learners and the video content. Several studies on the nature of MOOCs point out that the linear structure of video lectures presents knowledge to learners in a passive way (Yousef et al. 2014b). Therefore, there is a need for new design techniques to increase the interactivity around video lectures in MOOCs.

  • Teacher-centered learning: Most existing MOOCs are especially interesting as a source of high-quality content, including video lectures, testing, and basic forms of collaboration. However, the initial vision of MOOCs, which aims at breaking down obstacles to education for anyone, anywhere, and at any time, is far from the current reality. In fact, most MOOC implementations so far still follow a top-down, controlled, teacher-centered, and centralized learning model. Endeavors to implement bottom-up, student-centered, truly open, and distributed forms of MOOCs are the exception rather than the rule (Yousef et al. 2014b).

  • Drop-out rates: MOOCs face high drop-out rates, averaging 95 % of course participants. One potential reason for this is the complexity and diversity of MOOC participants' perspectives. This diversity is not only related to cultural and demographic attributes, but also to the diverse motives and perspectives participants have when enrolling in MOOCs. This requires an understanding of the different patterns of MOOC participants and their perspectives when participating in MOOCs (Yousef et al. 2015a).

  • Lack of personalization: MOOCs house a wide range of participants with diverse interests and needs. Current MOOCs, however, still follow a one-size-fits-all approach that does not take this diversity into account. In order to achieve an effective MOOC experience, it is important to design personalized learning environments that meet the different needs of MOOC participants.

  • Assessment and feedback: One of the biggest challenges facing MOOCs is how to assess learners' performance in a massive learning environment beyond traditional automated assessment methods. Thus, there is a need for alternative assessment methods that provide effective, timely, accurate, and meaningful feedback to MOOC participants about their learning experience.

These limitations raise serious concerns about what role VBL should play and how it should fit into the education landscape as an alternative model of teaching and learning and as a substantial supplement. To overcome the limitations of the flipped classroom and MOOC models outlined above, VBL requires key stakeholders to address two major challenges:

  • Networking: It is crucial to provide a VBL environment that fosters collaborative knowledge creation and supports the continuous creation of a personal knowledge network (PKN) (Chatti 2010; Chatti et al. 2012a). Thus, there is a need to shift away from traditional VBL environments, where learners are limited to watching video content passively, towards more dynamic environments that support participants in becoming actively involved in networked learning experiences.

  • Personalization: It is important to put the learner at the center of the learning process for an effective VBL experience. The challenge here is how to support personalized learning in open and networked learning environments and how to provide learning opportunities that meet the different needs of MOOC participants.

Providing a networked and personalized VBL experience is a highly challenging task. Due to the massive nature of emerging VBL environments, the amount of learning activities (e.g. forum posts, comments, assessments) may become too large or too complex to be tracked by the course participants (Arnold and Pistilli 2012; Blikstein 2011). Moreover, it is difficult to provide personal feedback to a massive number of learners (Mackness et al. 2010). Therefore, there is a need for effective methods that enable tracking of learners' activities and drawing conclusions about the learning process in order to support personalized and networked VBL. This is where the emerging field of Learning Analytics (LA) can play a crucial role in supporting an effective VBL experience. Generally, LA deals with the development of methods that harness educational data sets to support the learning process. LA can provide great support to learners in their VBL experience. LA that focuses on the perspectives of learners can form the basis for effective personalized VBL through the support of monitoring, awareness, self-reflection, motivation, and feedback processes. Combining LA with methods of information visualization (visual learning analytics) facilitates the interpretation and analysis of the educational data (Chatti et al. 2014).

In this paper, we address the challenge of achieving an effective networked and personalized VBL experience. We propose CourseMapper as a collaborative video annotation platform that enables learners' collaboration and interaction around a video lecture, supported by visual learning analytics.

Related work

In this section, we give an overview of related work in this field of research with a focus on video annotation and analytics approaches proposed in the wide literature on VBL and MOOCs.

Video annotation

Yousef et al. (2014a) critically analyzed the VBL research of the last decade to build a deep understanding of the educational benefits of VBL and its effect on teaching and learning. The authors explored how to design effective VBL environments and noted that, in addition to authoring tools for VBL content such as lecture note synchronization and video content summarization, annotation tools are the most used design tools in the reviewed VBL literature. Video annotation refers to additional notes added to a video, which help in searching, highlighting, analyzing, retrieving, and providing feedback, without modifying the resource itself (Khurana and Chandak 2013). It provides an easy way for discussion, reflection on the video content, and feedback (Yousef et al. 2015b). Several attempts have been made to explore the potential of video annotation methods to increase the interactivity in VBL environments for various purposes. In the following, we analyze existing video annotation tools, summarize their applicability and limitations, and point out the main differences to the video annotation tool in CourseMapper.

We selected seven video annotation systems for our analysis due to their potential for supporting collaboration in VBL environments. These include VideoAnnEx (Lin et al. 2003), the Video Interaction for Teaching and Learning (VITAL) system (Preston et al. 2005), MuLVAT (Theodosiou et al. 2009), WaCTool (Motti et al. 2009), the media annotation tool (MAT) (Colasante 2011a), the Collaborative Annotation Tool (CATool) (Open Sourcing Harvard University's Collaborative Annotation Tool 2016), and the Collaborative Lecture Annotation System (CLAS) (Risko et al. 2013).

We analyzed each system for low-level features (e.g. color, shape, annotation panel, video controls, discussion panel) as well as high-level features (e.g. object recognition, collaborative annotations, and structured organization of annotation) (Döller and Lefin 2007). A summary of the analysis results and a comparison with the CourseMapper tool are presented in Table 1.

Table 1 Video annotation comparison

The analysis shows that all tools support basic video annotation features, such as an annotation panel, video controls, a viewing area, custom annotation markers, and external discussion tools (e.g. wiki, blog, chat). Only CATool and CLAS provide more advanced features, such as social bookmarking and collaborative discussion panels. Additionally, the lack of integration between these tools and learning management systems or MOOCs makes their usage impractical and out of context.

Compared to these tools, CourseMapper uses a relatively new approach to representing and structuring video materials, in which videos are collaboratively annotated in a mind-map view. CourseMapper provides the opportunity to better organize the course content by different subjects. Moreover, annotations are updated in real time and can be embedded inside the video. Social bookmarking, discussion threads, a rating system, a search engine, as well as filtering and ordering mechanisms for annotations were built into CourseMapper to support a more effective self-organized and networked VBL experience.

Video analytics

Despite the wide agreement that learning analytics (LA) can provide value in VBL, the application of LA to VBL has so far been rather limited. Most LA studies have been conducted in a MOOC context and have focused on an administrative level to meet the needs of the course providers. These studies have primarily addressed low completion rates, investigated learning patterns, and supported intervention (Chatti et al. 2014). Further, only little research has been carried out to investigate the effectiveness of applying LA to activities around video content.

In the following, we review the related work in the field of LA on video-based content. We use the reference model for LA proposed in (Chatti et al. 2012b). This reference model is based on four dimensions: What? (what kind of data does the system gather, manage, and use for the analysis), Who? (who is targeted by the analysis), Why? (why does the system analyze the collected data), and How? (how does the system perform the analysis of the collected data). A general overview of the collected results can be seen in Table 2.

Table 2 Video analytics comparison

We begin our review with the "What?" dimension of the reference model and also take a look at the experiment setting and the tool lifecycle. Despite the rapid development of analytics tools, most research activities have been conducted as controlled experiments. This remains a popular setting, since tools can be tailored to the experiment's requirements, so that "noisy" results are avoided and the focus can be targeted towards specific features. Several studies used this experimental setting (Brooks et al. 2011; Colasante 2011b; Giannakos et al. 2015). In general, the gathered data usually comes from in-house frameworks and applications or from surveys and observations conducted within the institution. Moreover, most of the tools are not developed for reuse in third-party environments.

The video learning analytics system (VLAS) is a video analytics application designed for use in a video-assisted course (Giannakos et al. 2015). The authors used the trace data generated by students interacting with VLAS, including their video navigation history, and combined the results with student learning performance and scores gathered from system questionnaires. The system has a reusable lifecycle and is openly accessible to the general public.

Pardo et al. (2015) and Gasevic et al. (2014) used data collected from traces of CLAS. CLAS is a Web-based system for annotating video content that also includes a learning analytics component to support self-regulated learning (Mirriahi and Dawson 2013). Both experiments were conducted in a natural environment. The first study used trace data collected from the MSLQ tool, midterm scores, the number of annotations, and covariates derived from the MSLQ and SPQ questionnaires as additional data sources. In contrast, the second study included the assignment of participants to two different experimental conditions, annotation counts, and LIWC special variables for linguistic analysis.

The study in (Brooks et al. 2011) was also conducted in a controlled environment. The authors used trace data from the "Recollect" tool's event monitor, users' interactions with the player, events collected from the player's "heartbeat" mechanism, and student questionnaires as input sources. Guo et al. (2014) provided a retrospective study that used edX trace data, interviews with edX staff, page navigation, video interactions, and problem submissions for grading as data sources.

CourseMapper uses traces collected from students' interactions around the video content (What?). The LA component of CourseMapper was designed with reuse in mind. Therefore, it is not limited to a research environment and can be applied in both natural and controlled experiments. Note that with long-term usage of CourseMapper, the data collected in its database can be used to support retrospective studies.

Next, we examine the "Why?", "How?", and "Who?" dimensions of the LA reference model. We noted that most of the studies had researchers as the main target group. Only the study in (Colasante 2011b) addressed teachers and learners as primary stakeholders. Further, most of the studies used machine learning and data mining techniques for different purposes and statistics to present the analytics results. Brooks et al. (2011) used k-means clustering to help researchers investigate students' engagement with video-recorded lectures. The methodology clustered students based on video tool access. The main objectives of this work were to support monitoring and analysis and to show that analytics in learning systems can be used to provide both auditing and interventions in student learning. Data mining was also applied in (Guo et al. 2014) to see how video production decisions affect students' engagement. The goal of that study was to give recommendations to instructors and video producers on how to take better advantage of online video formats. Linear regression was used in (Pardo et al. 2015) to investigate the impact of video annotation usage on learning performance. Finally, Gasevic et al. (2014) used statistical analysis to explore the usage of video annotation tools within graded and non-graded instructional approaches.

Only two studies used information visualization methods based on simple charts, namely (Giannakos et al. 2015) to investigate relationships between interactions with video lectures, attitudes, and learning performance and (Colasante 2011b) to investigate the effectiveness of the integration of the video annotation tool MAT into a learning environment.

CourseMapper aims at fostering effective personalized learning and supporting both learners and teachers (Who?) in monitoring, awareness, self-reflection, motivation, and feedback processes in a networked VBL environment (Why?). It uses traces collected from learners' interactions to build heatmaps reflecting the most viewed parts of the video. Moreover, it uses the start/end times of annotations to produce annotation maps that stack and highlight the frequently annotated areas of the video (How?).

CourseMapper design

In an interesting study on the effective design of MOOCs, Yousef et al. (2014c) collected design criteria regarding the interface, organization, and collaboration in video lectures. The study revealed the importance of good organizational structure of video lectures as well as the importance of integrating collaborative tools which allow learners to discuss and search video content.

Based on the design criteria in this study, we conducted Interactive Process Interviews (IPI) with target users to determine which functionalities they expect from a collaborative video annotation and analytics tool (Yin 2013). These interviews involved ten students between the ages of 21 and 28 years, all of whom had prior experience with VBL. The most important point that stands out from these interviews is that learners focus on specific sections of the video that contain concepts they find interesting or difficult to understand, rather than on the entire video.

Based on our analysis of video annotation and analytics tools discussed in the previous section and the conducted user interviews, we derived a set of functional requirements for a platform that can support networked and personalized VBL through collaborative video annotation and analytics, as summarized below:

  • Support a clear organization of the video lectures. We opted for a mind-map view of the course that lets users organize the course topics in a map-based form where each node contains a lecture video.

  • Encourage active participation, learner interaction and collaboration through collaboration features, such as social bookmarking, discussion threads, and voting/rating mechanisms.

  • Provide collaborative video annotation features. Learners should be able to annotate sections of interest in the video and reply to each other's annotations.

  • Provide a search function as well as a filtering/sorting mechanism (based, e.g., on the date added, the rating, or the number of replies each annotation received) for the video annotations. This is crucial in massive VBL environments, such as MOOCs.

  • Provide visual learning analytics features to help learners locate the most viewed and most annotated parts of the video.

  • Provide users with a course analytics feature to give a complete picture of all course activities.

  • Provide a course activity stream as a notification feature that can support users in tracking recent activities (i.e. likes, thread discussions, annotations, comments, new videos) in their courses.

  • Provide users with a personalized view of the course nodes where they had a contribution. This would allow users to get a quicker access to the lectures that they are interested in.

  • Provide an overview on user activities on the platform. This feature would allow users to track their activities across all courses that they are participating in and quickly navigate to their performed activities such as their annotations, likes, and threads.

  • Provide a recommendation mechanism that enables learners to discover courses and learning resources based on their interests and activities on the platform.

CourseMapper implementation

The design requirements collected above form the basis for the implementation of CourseMapper (see Endnote 1). Note that in this paper, we only focus on the realization of the first five requirements, as these are related to video content. In the ensuing sections, we present the technologies used in the implementation of CourseMapper, followed by a detailed description of the implemented video annotation and visual analytics modules and their underlying functionalities.

Technologies

The server-side backbone of CourseMapper consists of Node.js and the Express framework. Node.js provides an event-driven, non-blocking I/O model, which enables fast and scalable applications to be written in plain JavaScript (JS). Node.js has a steep learning curve, and its default callback-based programming style makes it harder for developers to write blocking code. Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications.

To provide real-time annotation updates and editing, CourseMapper integrates the Socket.IO engine. Socket.IO bases the communication on WebSockets; however, it does not assume that they are enabled and will work by default. It first establishes a connection via XHR or JSONP and then attempts to upgrade it. This means that users whose browsers do not support WebSocket-based connections do not get a degraded experience. Persistent login sessions are established via the Passport.js middleware, which supports multiple authentication schemes, including OAuth. Users can choose to log in with their Facebook account instead of maintaining a separate account within the system.
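As an illustration of this setup, the following minimal sketch shows an Express server with Socket.IO broadcasting a newly created annotation to all clients viewing the same lecture. The event names and the room-per-lecture convention are illustrative assumptions, not CourseMapper's actual protocol.

```javascript
// Minimal sketch: broadcasting a new annotation to all clients viewing the
// same lecture. Event names and the room-per-lecture convention are assumed.
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // Each client joins a room for the lecture it is currently viewing.
  socket.on('lecture:join', (lectureId) => socket.join(lectureId));

  // When a client creates an annotation, push it to everyone in the same
  // room so their annotation viewers update in real time.
  socket.on('annotation:add', ({ lectureId, annotation }) => {
    io.to(lectureId).emit('annotation:added', annotation);
  });
});

server.listen(3000, () => console.log('annotation sketch listening on :3000'));
```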

Application data is stored in MongoDB, a cross-platform NoSQL document-oriented database. It replaces the traditional table-based relational structure with JSON-like documents, which allows easier and faster data integration. To simplify client-side development and testing, CourseMapper uses Angular, a framework providing model-view-controller (MVC) and model-view-viewmodel (MVVM) architectures, along with commonly used components.
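To make the document model concrete, the sketch below shows what an annotation document might look like as a Mongoose schema. The field names are assumptions inferred from the annotation editor and embedded-note descriptions later in this section, not CourseMapper's actual schema.

```javascript
// Hypothetical Mongoose schema for an annotation document. Field names are
// assumptions inferred from the editor and embedded-note descriptions below.
const mongoose = require('mongoose');

const annotationSchema = new mongoose.Schema({
  lectureId: { type: mongoose.Schema.Types.ObjectId, ref: 'Lecture' },
  authorId:  { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  text:      { type: String, required: true },
  startTime: { type: Number, required: true },  // seconds into the video
  endTime:   { type: Number, required: true },
  type:      { type: String, enum: ['note', 'embedded'], default: 'note' },
  // For embedded notes: hotzone position and size, relative to the player (0..1).
  hotzone:   { x: Number, y: Number, width: Number, height: Number },
  createdAt: { type: Date, default: Date.now }
});

module.exports = mongoose.model('Annotation', annotationSchema);
```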

For content playback, CourseMapper uses Videogular, an HTML5 video player for AngularJS. The player comes with default controls and multiple plugins, such as several scrub bars, cue points (a way to trigger functions related to playback time), and more. Videogular also significantly simplifies the way new plugins and controls can be developed, styled, and integrated into it.

Components

The video annotation workspace of CourseMapper can be seen in Fig. 1. It consists of a video player and several components, which are listed below. Note that CourseMapper offers many other features that we do not describe in this paper, in order to focus on the video annotation and analytics parts of the platform.

Fig. 1 Video annotating section overview

Annotation viewer

The annotation viewer is a system component that loads existing annotations from the server via WebSockets and reflects any changes in real time. Each annotation is displayed in its own container, and further comments can be made when the comment section is expanded, as shown in Fig. 2.

Fig. 2 Annotation viewer

Annotation editor

The CourseMapper annotation editor allows users to create new annotations or update existing ones. It is a user control placed within the layout of the annotation viewer and hosts editors for each field of the annotation model, such as text, start time, end time, and annotation type. It is important to note that everyone can create annotations; however, only moderators listed for the current course or the annotation's owner can edit and update the content of an existing annotation. A snapshot of the control can be seen in Fig. 3.

Fig. 3 Annotation editor
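The edit-permission rule described above can be sketched as a simple predicate; the object shapes below are hypothetical stand-ins for CourseMapper's user, annotation, and course documents, not its actual data model.

```javascript
// Sketch of the edit-permission rule: anyone may create annotations, but only
// course moderators or the annotation's owner may edit an existing one.
function canEditAnnotation(user, annotation, course) {
  const isOwner = String(annotation.authorId) === String(user.id);
  const isModerator = course.moderators.some((m) => String(m) === String(user.id));
  return isOwner || isModerator;
}

// Example usage with plain objects standing in for database documents.
const course = { moderators: ['42'] };
const annotation = { authorId: '7', text: 'Key formula at 02:15' };

console.log(canEditAnnotation({ id: '7' }, annotation, course));  // true (owner)
console.log(canEditAnnotation({ id: '42' }, annotation, course)); // true (moderator)
console.log(canEditAnnotation({ id: '99' }, annotation, course)); // false
```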

Embedded note vs note

CourseMapper enables users to distinguish between two different types of annotations, namely notes and embedded notes. A single annotation can be switched between the two types; more precisely, an embedded note can easily be converted into a note and vice versa.

Note

is an annotation that is bound to a specific timeframe within the video content; however, it is only displayed inside the main annotation viewer control. A note inside the annotation viewer is activated and highlighted when the current player position crosses and stays between the start and end time of the annotation. Once the player position exits this window, the annotation is marked as completed: it gets deactivated and visually grayed out in order to avoid further distracting the viewer's attention. In addition, this behavior acts as a two-way binding: if an annotation in the annotation viewer is clicked, the video player jumps to the start time of the annotation, allowing easy navigation between important parts of the media.
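The following sketch illustrates this behavior with the plain HTML5 video API: the timeupdate event activates and grays out annotations, and clicking an annotation seeks the player. The DOM structure (list items with data-start/data-end attributes) is an assumption, not CourseMapper's actual markup.

```javascript
// Sketch of the note activation behavior: highlight annotations whose time
// window contains the current player position, gray them out afterwards,
// and seek the player when an annotation is clicked. DOM structure assumed.
const player = document.querySelector('video');
const items = document.querySelectorAll('.annotation'); // assumed markup

player.addEventListener('timeupdate', () => {
  const t = player.currentTime;
  items.forEach((item) => {
    const start = Number(item.dataset.start);
    const end = Number(item.dataset.end);
    item.classList.toggle('active', t >= start && t <= end);
    item.classList.toggle('completed', t > end); // grayed out via CSS
  });
});

// Two-way binding: clicking an annotation jumps the player to its start time.
items.forEach((item) => {
  item.addEventListener('click', () => {
    player.currentTime = Number(item.dataset.start);
  });
});
```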

Embedded note

is an annotation that possesses all features of a regular note, with the addition of pointing to a specific "hotzone": an opaque rectangle that is overlaid on top of the video content. The rectangular zone's position and size can be edited and are stored as a supplement to the annotation model. Both dimensions are relative and restricted to the maximum of those of the video player's container. This way, a user can specify an important part of the content and focus viewers' attention on it. Whenever the embedded zone is hovered over inside the player, it displays the annotation's text (see Fig. 4). This feature is of significant use in fullscreen mode, when the annotation viewer and the rest of the application are not visible.

Fig. 4 Embedded annotation in fullscreen mode

Find and order annotations

Because users can generate long lists of annotations in a MOOC context, the system provides functionality to sort annotations alphabetically, by author name, by annotation start time, and by several other criteria planned for an upcoming release. There is also an easy-to-use single search control, which performs a lookup on all possible fields of the annotation model, e.g. text, author name, start/end time, and creation date. Moreover, it also finds comments on annotations whose body or author matches the given search term.
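A client-side sketch of this search and ordering behavior is given below; the annotation and comment field names are assumptions, and the actual lookup may well be performed on the server instead.

```javascript
// Sketch of the search and ordering behavior: a case-insensitive lookup over
// annotation fields and their comments, plus sorting by a chosen key.
function searchAnnotations(annotations, term) {
  const q = term.toLowerCase();
  return annotations.filter((a) =>
    [a.text, a.authorName, String(a.startTime), String(a.endTime)]
      .some((field) => field && field.toLowerCase().includes(q)) ||
    (a.comments || []).some(
      (c) => c.text.toLowerCase().includes(q) || c.authorName.toLowerCase().includes(q)
    )
  );
}

function sortAnnotations(annotations, key) {
  const comparators = {
    alphabetical: (a, b) => a.text.localeCompare(b.text),
    author:       (a, b) => a.authorName.localeCompare(b.authorName),
    startTime:    (a, b) => a.startTime - b.startTime
  };
  return [...annotations].sort(comparators[key] || comparators.startTime);
}
```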

AnnotationMap scrub bar

AnnotationMap is a visual learning analytics component of CourseMapper that extends the regular scrub bar, as shown in Fig. 5. It overlays stacks of annotation windows on the given timeline and is placed in the controls panel of the video player. In order to keep user confusion minimal and to simplify the visual search for annotations, the cue points are displayed in a translucent yellow color. Zones where annotation times overlap become sharper and brighter yellow, notifying the viewer that this portion of the video timeline has a larger concentration of annotations and most likely contains interesting information.

Fig. 5 AnnotationsMap scrub bar

Heatmap scrub bar

The Heatmap is another visual learning analytics component of CourseMapper. Whenever students navigate back and forward and interact with the player, they leave a "footprint", which contributes to the overall heatmap. The Heatmap control extends the normal scrub bar with a heatmap-based color scheme, where the most viewed parts of the video are marked with warm colors such as orange and red, moderately viewed parts with shades of yellow, and less viewed parts with cold purple and blue colors, as depicted in Fig. 6. Based on this picture, students can visually scan the timeline and easily find the most interesting areas of the video. Moreover, the Heatmap shows how many times the video has been watched.

Fig. 6 Heatmap scrub bar

The Heatmap module consists of five parts, two on the server side and three on the client side. The server side provides a common API for all clients. All received data is processed and stored on the server side; Node.js and MongoDB work together in order to process requests as fast as possible and to support large numbers of users online. The server side provides two routes (see the sketch after the list):

  • GET /get - returns the data of the particular page based on request headers. It is not possible to specify the page URL; this decision is made automatically on the server side.

  • POST /save - saves or updates the data of the particular page based on request headers.
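A minimal sketch of these two routes in Express is given below. The in-memory store and the use of the Referer header to identify the page are assumptions about what "based on request headers" means; they are illustrative only.

```javascript
// Minimal sketch of the two Heatmap routes. Deriving the page identifier from
// the Referer header is an assumption about "based on request headers".
const express = require('express');
const app = express();
app.use(express.json());

// In-memory store standing in for the MongoDB collection.
const store = new Map(); // page identifier -> array of watched segments {start, end}

// GET /get - returns the heatmap data of the page the request originated from.
app.get('/get', (req, res) => {
  const page = req.get('Referer') || 'unknown';
  res.json(store.get(page) || []);
});

// POST /save - appends newly watched segments for that page.
app.post('/save', (req, res) => {
  const page = req.get('Referer') || 'unknown';
  const segments = store.get(page) || [];
  segments.push(...req.body.segments);
  store.set(page, segments);
  res.sendStatus(204);
});

app.listen(3000);
```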

The client side is designed to avoid any interaction with the structure of the host system or website. It consists of three parts: Observer, Heatmap, and Viewer. Each part has its own task. For instance, the Observer handles all important events in order to track user behavior; it also handles special types of events about the state of a user, such as "idle" or "active". The Heatmap uses an HTML5 canvas in order to represent the input data using predefined colors. Finally, the Viewer is the part that mostly interacts with the host system; it fetches the data and embeds the heatmap in the content viewer. In the next sections, we discuss the implementation of these parts in more detail.

Observer

The Observer class is used to collect information about how users view the content and then sends the data to the server side using a POST /save AJAX call. The HTML5 video element provides an API with events such as play, pause, seeking, and ended. The Observer class subscribes to those events and listens for all actions that the user makes while watching a video. Each time a user watches some part of a video, the Observer stores the start point as a value from 0 to 1. For example, if a user starts watching from the middle of a video, the Observer saves a new start point of 0.5. In the same way, the Observer stores the end point of the watched segment.
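The following sketch illustrates the Observer idea with the standard HTML5 video API: watched segments are recorded as fractions of the video duration (0 to 1) and sent to the POST /save route. Idle/active detection and the exact set of handled events are simplified assumptions.

```javascript
// Sketch of the Observer idea: record watched segments as fractions (0..1)
// of the video duration and send them to the server. Idle/active detection
// and the precise event handling of the real module are omitted.
class Observer {
  constructor(video, saveUrl = '/save') {
    this.video = video;
    this.saveUrl = saveUrl;
    this.segmentStart = null;

    video.addEventListener('play', () => this.begin());
    video.addEventListener('seeking', () => this.end()); // close segment before a jump
    video.addEventListener('seeked', () => this.begin());
    video.addEventListener('pause', () => this.end());
    video.addEventListener('ended', () => this.end());
  }

  begin() {
    // e.g. starting from the middle of the video stores 0.5
    this.segmentStart = this.video.currentTime / this.video.duration;
  }

  end() {
    if (this.segmentStart === null) return;
    const segment = {
      start: this.segmentStart,
      end: this.video.currentTime / this.video.duration
    };
    this.segmentStart = null;
    fetch(this.saveUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ segments: [segment] })
    });
  }
}

// Usage: new Observer(document.querySelector('video'));
```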

Heatmap

The Heatmap component builds on a basic implementation of 2D heatmaps called "simpleheat". However, instead of 2D, the FootPrint implementation works in 1D space. As input, LinearHeatmap accepts an array of values and the maximum possible value. LinearHeatmap is a lightweight implementation of a linear heatmap that allows precise heatmap configuration. The colorization algorithm works as follows (a sketch in code is given after the steps):

  1. At first, LinearHeatmap generates a color palette which will be used to set the correct colors in the draw function. This step is performed only once.

  2. LinearHeatmap builds a grayscale gradient using the standard canvas API. The result of this step is a black linear gradient with different alpha values.

  3. Based on the alpha value of each pixel, LinearHeatmap applies the correct color stored in the color palette.
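The sketch below follows these three steps on an HTML5 canvas: a color palette is generated once from a gradient, the input values are drawn as a grayscale (alpha) band, and each pixel's alpha is then mapped to a palette color. It illustrates the described approach under assumed color stops and is not the actual simpleheat or LinearHeatmap source.

```javascript
// Sketch of the three colorization steps for a 1D heatmap on an HTML5 canvas.
// Follows the description above; color stops are assumed, not the real config.
function drawLinearHeatmap(canvas, values, maxValue) {
  const ctx = canvas.getContext('2d');
  const { width, height } = canvas;

  // Step 1: generate a 256-entry color palette from a gradient (done once).
  const paletteCanvas = document.createElement('canvas');
  paletteCanvas.width = 256; paletteCanvas.height = 1;
  const pctx = paletteCanvas.getContext('2d');
  const gradient = pctx.createLinearGradient(0, 0, 256, 0);
  gradient.addColorStop(0.25, 'blue');
  gradient.addColorStop(0.55, 'yellow');
  gradient.addColorStop(0.85, 'orange');
  gradient.addColorStop(1.0, 'red');
  pctx.fillStyle = gradient;
  pctx.fillRect(0, 0, 256, 1);
  const palette = pctx.getImageData(0, 0, 256, 1).data;

  // Step 2: draw the values as black bars whose alpha encodes intensity.
  ctx.clearRect(0, 0, width, height);
  const barWidth = width / values.length;
  values.forEach((v, i) => {
    ctx.fillStyle = `rgba(0, 0, 0, ${Math.min(1, v / maxValue)})`;
    ctx.fillRect(i * barWidth, 0, Math.ceil(barWidth), height);
  });

  // Step 3: replace each pixel's color with the palette entry for its alpha.
  const img = ctx.getImageData(0, 0, width, height);
  for (let i = 0; i < img.data.length; i += 4) {
    const alpha = img.data[i + 3];
    if (alpha === 0) continue;
    img.data[i]     = palette[alpha * 4];     // red channel
    img.data[i + 1] = palette[alpha * 4 + 1]; // green channel
    img.data[i + 2] = palette[alpha * 4 + 2]; // blue channel
  }
  ctx.putImageData(img, 0, 0);
}
```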

Viewer

The main task of the Viewer class is to extend the regular controls with the generated heatmap. The Video Viewer uses the standard HTML5 player and adds an additional slider on top of the video. This slider is based on custom HTML and CSS with a canvas element inside, which is used by the LinearHeatmap class to draw the heatmap. The additional slider shows the "hottest", i.e. most viewed, parts of the video. At the same time, the Observer class gathers data about the parts viewed by the current user, and each viewing of a part of the video contributes to the overall heatmap.
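A possible Viewer is sketched below, under the assumption that the GET /get route from the earlier sketch returns the watched segments and that drawLinearHeatmap from the previous sketch is available: it inserts a thin canvas strip above the player, aggregates the segments into per-bucket view counts, and draws the heatmap.

```javascript
// Sketch of the Viewer idea: fetch aggregated viewing data, insert a canvas
// strip above the player, and draw it with drawLinearHeatmap (sketched above).
async function attachHeatmapViewer(video, buckets = 200) {
  const canvas = document.createElement('canvas');
  canvas.width = video.clientWidth;
  canvas.height = 8; // thin strip acting as the extra slider
  video.parentElement.insertBefore(canvas, video);

  // Aggregate watched segments (fractions 0..1) into per-bucket view counts.
  const segments = await (await fetch('/get')).json();
  const counts = new Array(buckets).fill(0);
  segments.forEach(({ start, end }) => {
    const from = Math.floor(Math.min(start, end) * buckets);
    const to = Math.ceil(Math.max(start, end) * buckets);
    for (let i = from; i < to && i < buckets; i++) counts[i]++;
  });

  drawLinearHeatmap(canvas, counts, Math.max(1, ...counts));
}
```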

Evaluation

In the next sections, we provide the evaluation details of the video annotation and analytics modules in CourseMapper, with a focus on the Heatmap module. The main aim of the Heatmap module is to support monitoring, awareness, reflection, motivation, and feedback in a networked and personalized VBL environment.

Scenario

We used CourseMapper in the eLearning course offered at RWTH Aachen University in the summer semester 2015. We conducted a controlled experiment to evaluate the Heatmap module in supporting an effective networked and personalized VBL experience through the support of awareness, reflection, motivation, and feedback. We evaluated the Heatmap module as part of an exam preparation scenario. The beginning of the semester is quite flexible, because this is the time for an overview of the lectures and the first assignments. Throughout the semester, the workload increases, and approximately 2–3 weeks before the examination, students have to go through a significant amount of learning materials. In the evaluation, we simulated a real exam preparation setting. The students were provided with a list of possible exam questions from previous years. They were asked to use the provided video lectures to find answers to the questions. The students were then split into two groups. The first group had to go through the video content without the Heatmap, while the second group could use it from the beginning.

We then conducted an evaluation of the Heatmap module in terms of usability and effectiveness. We employed an evaluation approach based on the System Usability Scale (SUS) as a general usability evaluation and a custom effectiveness questionnaire to measure whether the goals of monitoring, awareness, reflection, motivation, and feedback had been achieved through the support of the Heatmap module. The questionnaire also included questions related to the users' background, their usage of learning materials, and their expectations of analytics on learning materials. Ten computer science students and three teachers completed the questionnaire.

User background

The first part of the questionnaire captured the participants' backgrounds. Figure 7 shows that most students use online materials very often. The most popular materials are slides, in which students are able to find the right information very quickly using regular search commands. The second most popular online materials are video lectures. However, the survey shows that students experience some difficulties searching for information within video content. Finding important information in a video is a hard task, especially if the student has not attended the lecture. A video has no titles, images, or paragraphs; the only way to search is to rewind and keep watching. Students also admitted that they rarely use printed books. In general, the survey results confirm that learning is increasingly happening through digital resources and that videos represent an important medium in today's learning environments.

Fig. 7 User background evaluation

User expectation

The second part of the questionnaire captured the users' expectations regarding the features they would generally like to have in an analytics tool for learning materials. The user expectation evaluation showed that most of the students want to quickly locate important parts of learning materials and to understand how other students use them. They pointed out that improvements in this direction would make the learning process more efficient and effective. On the other hand, teachers are interested in getting information on which learning materials are used more frequently and how they are used.

Usability

The third part of the questionnaire dealt with the usability of the tool based on the System Usability Scale (SUS), which is a simple, ten-item attitude Likert scale giving a global view of subjective assessments of usability (Brooke 1996). The questions are designed to capture the intuitiveness, simplicity, feedback, responsiveness, and efficiency of the tool, as well as the steepness of the learning curve a user must go through to successfully use the tool. Figure 8 shows the results of the usability evaluation using the SUS framework. The SUS score of the system is approximately 90, which reflects a high user satisfaction with the usability of the Heatmap module. In general, the respondents found the tool intuitive, easy to use, and easy to learn.

Fig. 8 System usability scale evaluation
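For reference, SUS is scored by mapping each of the ten 1-5 responses to a 0-4 contribution (odd items: response minus 1; even items: 5 minus response) and multiplying the sum by 2.5, yielding a score between 0 and 100 (Brooke 1996). A small sketch of this standard computation, with hypothetical example responses:

```javascript
// Standard SUS scoring (Brooke 1996): odd items contribute (response - 1),
// even items contribute (5 - response); the sum is scaled by 2.5 to 0-100.
function susScore(responses) { // responses: array of 10 values in 1..5
  const sum = responses.reduce((acc, r, i) =>
    acc + (i % 2 === 0 ? r - 1 : 5 - r), 0);
  return sum * 2.5;
}

// Example: a fairly positive (hypothetical) respondent.
console.log(susScore([5, 2, 4, 1, 5, 2, 5, 1, 4, 2])); // 87.5
```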

Usefulness

The fourth part of the questionnaire captured the usefulness of the tool. The usefulness evaluation consists of two parts: the first part is a questionnaire for students, covering questions related to dealing with information overload, monitoring, awareness, and motivation. The second part was created to evaluate the system from a teacher's perspective and to determine whether the Heatmap module can be used as an effective monitoring, reflection, and feedback tool.

Student perspective

Students of the first group did not use the Heatmap module while trying to answer the given exam questions. However, after the exam preparation task, we showed them their activities on the heatmap. Students of the second group used the heatmap right from the beginning. We asked students from both groups to give their opinion on the Heatmap module as a potential LA tool that can support personalized learning in a VBL environment. As shown in Fig. 9, the majority of the respondents agreed that the tool can make the learning process more efficient and effective and that it has the potential to increase motivation through the monitoring of peers' activities. Further, the respondents liked the fact that the Heatmap also provides information on how often a video has been watched, which can help them find popular videos, thus overcoming a potential information overload problem. All respondents from the second group stated that the Heatmap helped them to find important parts of the learning materials. However, not all respondents were sure that they understood how other students use the learning materials. Note that respondents from the second group rated the capabilities of the Heatmap higher.

Fig. 9 Usefulness evaluation - students

Teacher perspective

Figure 10 shows the results of the usefulness evaluation from a teacher's perspective. The task for the teachers was to look at the results of the two student groups and to gauge whether the Heatmap can support monitoring, feedback, and reflection. The teachers agreed that the tool can help them monitor students' activities and give good feedback on the important and critical parts of learning materials. However, not all teachers were sure that the tool can help with reflection on the quality of learning materials. The teachers noted that this is due to the evaluation setting (i.e. the simulation of an exam preparation phase based on predefined questions). They pointed out that the Heatmap could indeed be a powerful reflection tool if it were used throughout the whole semester.

Fig. 10 Usefulness evaluation - teachers

Conclusion and future work

In this paper, we addressed the challenge of achieving effective networked and personalized video-based learning (VBL). We proposed CourseMapper as a collaborative video annotation platform that enables learners' collaboration and interaction around a video lecture, supported by visual learning analytics. CourseMapper puts the learner at the center of the learning process and fosters networked learning through collaborative annotation of video learning materials. Visual learning analytics methods based on AnnotationMaps and Heatmaps were developed to achieve an effective VBL experience. The preliminary evaluation results revealed user acceptance of CourseMapper as an easy-to-use and useful collaborative video annotation and analytics platform that has the potential to support monitoring, awareness, reflection, motivation, and feedback in VBL environments.

While our early results are encouraging on the way to offering an effective VBL experience to learners and teachers, there are still a number of areas we would like to improve. The first and most important next step is to improve our evaluation. We plan to perform a larger-scale experiment in a real learning environment, which will allow us to thoroughly evaluate our collaborative video annotation and analytics approach in CourseMapper. Our future work will also focus on the enhancement of CourseMapper with other analytics modules besides AnnotationMaps and Heatmaps. These include a personalized view of the course mind map, an activity stream that gives notifications on activities within a course, as well as effective filtering and recommendation mechanisms.

Endnote

1 https://gomera.informatik.rwth-aachen.de:8443/.

References

  • KE Arnold, M Pistilli, in Proceedings of the 2nd International Conference on Learning Analytics and Knowledge. Course signals at Purdue: using learning analytics to increase student success (ACM, New York, NY, USA, 2012), pp. 267–270.

  • JL Bishop, MA Verleger, in ASEE National Conference Proceedings, Atlanta, GA. The flipped classroom: a survey of the research (2013).

  • P Blikstein, in Proceedings of the 1st International Conference on Learning Analytics and Knowledge. Using learning analytics to assess students' behavior in open-ended programming tasks (ACM, New York, NY, USA, 2011), pp. 110–116.

  • J Brooke, SUS: a quick and dirty usability scale. Usability Eval. Ind. 189(194), 4–7 (1996).

  • C Brooks, CD Epp, G Logan, J Greer, in Proceedings of the 1st International Conference on Learning Analytics and Knowledge, LAK '11. The who, what, when, and why of lecture capture (ACM, New York, NY, USA, 2011), pp. 86–92, doi:10.1145/2090116.2090128.

  • B Calandra, L Brantley-Dias, M Dias, Using digital video for professional development in urban schools: a preservice teacher's experience with reflection. J. Comput. Teach. Educ. 22(4), 137–145 (2006).

  • MA Chatti, The LaaN Theory. Personalization in Technology Enhanced Learning: A Social Software Perspective (Shaker Verlag, Aachen, Germany, 2010).

  • MA Chatti, U Schroeder, M Jarke, LaaN: convergence of knowledge management and technology-enhanced learning. Learn. Technol. IEEE Trans. 5(2), 177–189 (2012a).

  • MA Chatti, AL Dyckhoff, U Schroeder, H Thüs, A reference model for learning analytics. Int. J. Technol. Enhanced Learn. 4(5–6), 318–331 (2012b).

  • MA Chatti, V Lukarov, H Thüs, A Muslim, AMF Yousef, U Wahid, C Greven, A Chakrabarti, U Schroeder, Learning analytics: challenges and future research directions. eleed 10(1) (2014).

  • M Colasante, Using video annotation to reflect on and evaluate physical education pre-service teaching practice. Australas. J. Educ. Technol. 27(1), 66–88 (2011a).

  • M Colasante, in Proceedings of Global Learn 2011, ed. by S-M Barton, J Hedberg, and K Suzuki. Using a video annotation tool for authentic learning: a case study (Association for the Advancement of Computing in Education (AACE), Melbourne, Australia, 2011b), pp. 981–988. http://www.editlib.org/p/37287.

  • J Daniel, Making sense of MOOCs: musings in a maze of myth, paradox and possibility. J. Interact. Media Educ. 2012(3) (2012).

  • M Döller, N Lefin, Evaluation of available MPEG-7 annotation tools. Proc. IMEDIA 7, 25–32 (2007).

  • D Gašević, N Mirriahi, S Dawson, in Proceedings of the Fourth International Conference on Learning Analytics and Knowledge, LAK '14. Analytics of the effects of video use and instruction to support reflective learning (ACM, New York, NY, USA, 2014), pp. 123–132, doi:10.1145/2567574.2567590.

  • MN Giannakos, K Chorianopoulos, N Chrisochoides, Making sense of video analytics: lessons learned from clickstream interactions, attitudes, and learning outcome in a video-assisted course. Int. Rev. Res. Open Distrib. Learn. 16(1) (2015).

  • PJ Guo, J Kim, R Rubin, in Proceedings of the First ACM Conference on Learning @ Scale, L@S '14. How video production affects student engagement: an empirical study of MOOC videos (ACM, New York, NY, USA, 2014), pp. 41–50, doi:10.1145/2556325.2566239.

  • K Khurana, M Chandak, Study of various video annotation techniques. Int. J. Adv. Res. Comput. Commun. Eng. 2(1), 909–914 (2013).

  • C-Y Lin, BL Tseng, JR Smith, in IEEE International Conference on Multimedia and Expo. VideoAnnEx: IBM MPEG-7 annotation tool for multimedia indexing and concept learning (2003), pp. 1–2.

  • J Mackness, S Mak, R Williams, in Proceedings of the 7th International Conference on Networked Learning. The ideals and reality of participating in a MOOC (University of Lancaster, 2010), pp. 266–274.

  • N Mirriahi, S Dawson, in Proceedings of the Third International Conference on Learning Analytics and Knowledge. The pairing of lecture recording data with assessment scores: a method of discovering pedagogical impact (ACM, New York, NY, USA, 2013), pp. 180–184.

  • AR Montazemi, The effect of video presentation in a CBT environment. J. Educ. Technol. Soc. 9(4), 123–138 (2006).

  • VG Motti, R Fagá Jr, RG Catellan, MDGC Pimentel, CA Teixeira, in Proceedings of the Seventh European Conference on European Interactive Television. Collaborative synchronous video annotation via the watch-and-comment paradigm (ACM, New York, NY, USA, 2009), pp. 67–76.

  • Open Sourcing Harvard University's Collaborative Annotation Tool. http://blogs.law.harvard.edu/acts/files/2012/06/handout.pdf. Accessed 30 June 2016.

  • A Pardo, N Mirriahi, S Dawson, Y Zhao, A Zhao, D Gašević, in Proceedings of the Fifth International Conference on Learning Analytics and Knowledge, LAK '15. Identifying learning strategies associated with active use of video annotation software (ACM, New York, NY, USA, 2015), pp. 255–259, doi:10.1145/2723576.2723611.

  • M Preston, G Campbell, H Ginsburg, P Sommer, F Moretti, in World Conference on Educational Media and Technology, vol. 2005. Developing new tools for video analysis and communication to promote critical thinking (2005), pp. 4357–4364.

  • EF Risko, T Foulsham, S Dawson, A Kingstone, The collaborative lecture annotation system (CLAS): a new tool for distributed learning. Learn. Technol. IEEE Trans. 6(1), 4–13 (2013).

  • R Schulmeister, The position of xMOOCs in educational systems. eleed 10 (2014).

  • MG Sherin, EA van Es, Effects of video club participation on teachers' professional vision. J. Teach. Educ. 60(1), 20–37 (2009).

  • G Siemens, Massive open online courses: innovation in education. Open Educ. Resour.: Innov. Res. Pract. 5, 5–16 (2013).

  • Z Theodosiou, A Kounoudes, N Tsapatsoulis, M Milis, in Artificial Neural Networks - ICANN 2009. MuLVAT: a video annotation tool based on XML-dictionaries and shot clustering (Springer-Verlag, Berlin Heidelberg, 2009), pp. 913–922.

  • B Tucker, The flipped classroom. Educ. Next 12(1), 82–83 (2012).

  • A Wallace, in e-Learning and e-Technologies in Education (ICEEE), 2013 Second International Conference On. Social learning platforms and the flipped classroom (IEEE, 2013), pp. 198–200.

  • SG Wilson, The flipped class: a method to address the challenges of an undergraduate statistics course. Teach. Psychol. 40, 193–199 (2013).

  • RK Yin, Case Study Research: Design and Methods (Sage Publications, California, USA, 2013).

  • AMF Yousef, MA Chatti, U Schroeder, The state of video-based learning: a review and future perspectives. Int. J. Adv. Life Sci. 6(3/4), 122–135 (2014a).

  • AMF Yousef, MA Chatti, U Schroeder, M Wosnitza, H Jakobs, in Proc. CSEDU 2014 Conference, vol. 3. MOOCs - a review of the state-of-the-art (2014b), pp. 9–20.

  • AMF Yousef, MA Chatti, U Schroeder, M Wosnitza, in IEEE 14th International Conference on Advanced Learning Technologies (ICALT). What drives a successful MOOC? An empirical examination of criteria to assure design quality of MOOCs (IEEE, 2014c), pp. 44–48.

  • AMF Yousef, MA Chatti, M Wosnitza, U Schroeder, A cluster analysis of MOOC stakeholder perspectives. RUSC Univ. Knowl. Soc. J. 12(1), 74–90 (2015a).

  • AMF Yousef, MA Chatti, N Danoyan, H Thüs, U Schroeder, in Proceedings of the Third European MOOCs Stakeholders Summit (EMOOCs). Video-Mapper: a video annotation tool to support collaborative learning in MOOCs (2015b), pp. 131–140.

  • D Zhang, L Zhou, RO Briggs, JF Nunamaker, Instructional video in e-learning: assessing the impact of interactive video on learning effectiveness. Inform. Manag. 43(1), 15–27 (2006).


Author information

Corresponding author

Correspondence to Mohamed Amine Chatti.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Chatti, M.A., Marinov, M., Sabov, O. et al. Video annotation and analytics in CourseMapper. Smart Learn. Environ. 3, 10 (2016). https://doi.org/10.1186/s40561-016-0035-1
