Interactions in learning-videos and context
This study uses the web-based information system named LIVE to enrich learning-videos with different methods of interaction and communication. LIVE offers the following methods of interaction for both types of videos, on-demand and live broadcasts:
- Simple questions
- Solve CAPTCHAs
- Ask teacher
- Text-based questions
  - the teacher can ask text-based questions to the students
  - during a live broadcast of a lecture, he or she can pose a question instantly by entering it in a text box
  - for an on-demand video, he or she can place the question at a specific position before releasing the video
- Multiple choice questions
- Report technical problem
Because LIVE is only available to registered and authenticated users, there are three different types of users, namely students, teachers and researchers. The students are only able to watch the videos and to participate in the interactions. The screenshot in Fig. 1 shows LIVE while playing a learning-video (1); the right sidebar (2) provides control elements to invoke interactions (e.g. asking the teacher a question). If an interaction occurs, the video is paused and playback cannot be resumed until the user reacts to the interaction (see Fig. 2). In the case shown, this means that the presented true-false question has to be answered (Wachtler and Ebner 2014b).
In comparison to the students, the teachers are additionally able to create interactive videos and to analyze the students' performance. During creation, the teacher can select a video from various sources (e.g. YouTube) and enrich it with interactions by selecting the methods to offer. For instance, it is possible to add questions at pre-defined positions in the video. This is done by selecting a position in the video and using a dialog to add the question there (Wachtler and Ebner 2014b).
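To make this mechanism concrete, the following minimal Python sketch models how questions attached to pre-defined positions could gate playback. It is an illustration only, not the actual LIVE implementation; all class, attribute and function names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Question:
    position_s: int                      # pre-defined position in the video (seconds)
    text: str
    choices: Optional[List[str]] = None  # e.g. ["true", "false"] or multiple choice options
    answered: bool = False

@dataclass
class InteractiveVideo:
    source_url: str                      # e.g. a YouTube URL selected by the teacher
    questions: List[Question] = field(default_factory=list)

    def add_question(self, position_s: int, text: str,
                     choices: Optional[List[str]] = None) -> None:
        """Teacher side: attach a question to a pre-defined position in the video."""
        self.questions.append(Question(position_s, text, choices))
        self.questions.sort(key=lambda q: q.position_s)

    def pending_question(self, playhead_s: int) -> Optional[Question]:
        """Student side: the first unanswered question at or before the playhead.
        As long as this returns a question, playback stays paused."""
        for q in self.questions:
            if q.position_s <= playhead_s and not q.answered:
                return q
        return None

# example: a true-false question popping up 90 seconds into the video
video = InteractiveVideo("https://www.youtube.com/watch?v=example")
video.add_question(90, "The derivative of x^2 is 2x.", ["true", "false"])
assert video.pending_question(100) is not None  # playback pauses here until answered
```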
The analysis of the students' performance consists of two parts. First, there is a detailed recording of the watched time-spans, which points out at which time a student watched which part of the video (Wachtler and Ebner 2014a). As an overview, the timeline analysis draws a chart indicating the number of users (green) and the number of views (red) across the video (see Fig. 3). This can be used to identify the most interesting parts of the video. To get more details, it is also possible to view a timeline for each student, as shown in Fig. 4. A red bar marks each watched part of the video in the timeline; if such a bar is hovered over with the mouse, the exact date and time are displayed as relative and absolute values. The second part of the analysis comprises the results of the questions asked during the video. All questions are listed together with the students' answers. Furthermore, the correctness of the answers is displayed. Clearly, this is only possible automatically for multiple choice questions and not for text-based questions; the latter have to be analyzed manually. As an example, Fig. 5 shows the analysis of the multiple choice questions. At the top, the number of correct and wrong answers is given, and below that the individual performance of each student is listed (Ebner et al. 2013; Wachtler and Ebner 2014b).
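The timeline of Fig. 3 is essentially an aggregation of the recorded watched time-spans into per-second counts. A minimal sketch of such an aggregation, assuming each recorded time-span is available as (user, start second, end second), might look as follows; this is hypothetical code, not the platform's own:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# one watched time-span: (user_id, start_second, end_second), as recorded by the platform
WatchedSpan = Tuple[str, int, int]

def timeline(spans: List[WatchedSpan], video_length_s: int) -> Tuple[List[int], List[int]]:
    """Return (users_per_second, views_per_second) across the video.

    users_per_second counts distinct users who watched a given second at least once,
    views_per_second counts every pass over that second (re-watching counts again)."""
    viewers: Dict[int, set] = defaultdict(set)
    views = [0] * video_length_s
    for user_id, start, end in spans:
        for second in range(max(0, start), min(end, video_length_s)):
            viewers[second].add(user_id)
            views[second] += 1
    users = [len(viewers[s]) for s in range(video_length_s)]
    return users, views
```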
Finally, there are the users of a group called researchers. Members of this group are allowed to download all recorded data as spreadsheets. This includes the following items:
- watched time-spans of each student
- the number of users and views per second
- answers to the different types of questions
In addition, there are lists containing the names of the videos and the texts of the questions. These lists are needed for cross-referencing because the downloads mentioned above only state the IDs of the videos and of the questions.
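As an illustration of this cross-referencing step, the exported spreadsheets can be joined on the ID columns, for example with pandas; the file and column names below are assumptions, not the actual export format:

```python
import pandas as pd

# hypothetical file and column names; the real exports only carry IDs plus the measured values
answers = pd.read_csv("answers.csv")           # columns: question_id, student, answer, correct
questions = pd.read_csv("question_texts.csv")  # columns: question_id, video_id, text
videos = pd.read_csv("video_names.csv")        # columns: video_id, name

# attach the human-readable question text and video name to every answer
enriched = (answers
            .merge(questions, on="question_id", how="left")
            .merge(videos, on="video_id", how="left"))
print(enriched.head())
```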
The focus of the current study lies on the placement of the questions at pre-defined positions, because it is important to know where to place these questions so that they support the students' attention.
Case study
In the course of this study, we investigate the effects of learning-videos on the learners' success. The clear focus lies on the interactive component of the videos, with the position of the testing questions within the videos playing a major part. As mentioned above, the following questions are explored: Does the time of occurrence of the first question influence the correctness rate of its answers? The hypothesis of Wachtler and Ebner (2015), which claims that the first answer tends to be wrong more often than the following ones (Lazy Start), will be examined carefully. Moreover, a close look will be taken at a possible relation between the length of the breaks between questions and the correctness of their answers (Tight-Placed Errors).
Moreover, this manuscript gives an outlook on the long-term success of the study. It is measured with the results of a test which took place after half of the videos had been watched. In order to enable comparability, the very same test was given to students from another class who had been taught in a traditional manner. Both classes were at a comparable level before the study, which is a necessary requirement for any claim of comparability.
The test itself was constructed by a teacher who was involved neither in the production of the videos nor in the traditional teaching of the second class. Thus, the exercises of the test were not created by a biased person who might have influenced the outcome in a certain direction by inserting tasks similar to those used in the teaching process. Hence, both classes had the same starting position for this test.
Traditional teaching, i.e. the way the class which does not watch the learning-videos is taught, means teacher-centered lessons in which the pupils receive direct input and mostly work actively at home. Should they encounter difficulties while solving problems at home, or rather as soon as they do, there is nobody there to give them a hint on how to proceed.
Study environment
The survey was conducted in the subject of mathematics with a class of an academic high school (BG Klusemann) which has an emphasis on STEM (Science-Technology-Engineering-Math). The vast majority of the 20 students in this class are 16 or 17 years old. Furthermore, attendance was compulsory. All videos share the main subject of differential calculus. Fifteen videos were produced for the study, of which only seven had been used in class at the time of writing this paper. They cover all the required topics of the Austrian curriculum regarding differential calculus: monotonicity, maxima and minima, inflection points, saddle points, finding polynomial functions and the graphical construction of derivatives.
The learning-videos also play an important role in another study which deals with the concept of the flipped classroom (Loviscach 2013). As the concept's name suggests, flipping the classroom means that what is done at school in traditional teaching is done at home and vice versa. Thus, the input phase, i.e. watching the videos, is outsourced from the classroom, and the exercises, which used to be homework, are shifted into class.
In order to enable the interactivity features in the videos, they are embedded in the platform described in Section Interactions in learning-videos and context. The format of the questions which pop up while watching the videos ranges from open questions and true-false questions to multiple choice questions. They resemble the kind of questions used in combination with ARS (Camuka and Peez 2014). In the majority of cases, their application can be divided into testing theoretical knowledge with true-false questions (see Fig. 6) and multiple choice questions (see Fig. 7), and testing practical understanding with open questions (see Fig. 8). Because theoretical knowledge has to be tested in most cases, multiple choice questions outnumber the other formats.
Comparability of the results can, of course, only be achieved if the complexity of the covered topics and of the asked questions is balanced among all videos. The collection of videos contains numerous individual topics which pupils naturally perceive as differing in difficulty. Therefore, an attempt was made to distribute typically challenging topics across all videos in equal measure, or to compensate rather easy subjects with rather tough ones. For instance, the usually demanding topic of the behavior of functions at infinity, which requires abstract thinking, was compensated with a video about a topic the pupils had already encountered in a previous academic year, namely the principles of extrema. Moreover, videos which share a certain interval length (see Section Setting of the questions) were balanced in terms of complexity among themselves wherever possible.
Setting of the questions
For the benefit of learning success, the videos were designed to be of minimal length (Bergmann and Sams 2012). The average duration of approximately twelve minutes per video does not match the interval lengths between the interactive questions proposed by Wachtler and Ebner (2015), so the recommendations from that paper had to be adapted.
The approach of setting a periodic interval length between occurring questions for each video also follows the recommendations for ARS from Martyn (2007). Therefore, it has already been well tested in a similar setting.
Interval lengths of 2, 4, 6 and 8 minutes, i.e. a step size of two minutes, were recommended. The videos used in this study are only approximately 12 minutes long, while those in the study of Wachtler and Ebner (2015) are about 90 minutes long. Consequently, with total video lengths of about one eighth of the original, the interval lengths had to be shortened drastically. Reducing the interval lengths to exactly one eighth, however, would lead to an immensely high frequency of interactive questions. Thus, a compromise was made: the minimal interval between occurring questions was set to 90 seconds and increased in steps of 30 seconds, leading to the final intervals of 1.5, 2, 2.5 and 3 minutes (see the sketch after the following list). Eleven videos are taken into account for this study, featuring the following distribution of interval lengths:
- 1.5 minutes: used in 2 videos
- 2 minutes: used in 2 videos
- 2.5 minutes: used in 3 videos
- 3 minutes: used in 4 videos.
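To make this adaptation explicit, the following short sketch contrasts a strict scaling of the recommended 2 to 8 minute intervals down to one eighth with the compromise actually used; the numbers are those stated above, the code itself is purely illustrative:

```python
recommended_min = [2, 4, 6, 8]        # recommended intervals for ~90-minute videos (minutes)
scale = 12 / 90                       # our videos are ~12 minutes long, i.e. roughly one eighth

strict_scaling = [round(m * scale * 60) for m in recommended_min]
print(strict_scaling)                 # [16, 32, 48, 64] seconds: far too frequent

# compromise: start at 90 seconds and increase in 30-second steps
compromise = [90 + 30 * i for i in range(4)]
print([s / 60 for s in compromise])   # [1.5, 2.0, 2.5, 3.0] minutes
```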
In terms of the time of the first question pop-up, some adaptations were necessary as well: the first question appears after either one, two, three or four minutes. For the eleven videos taken into account, the following distribution of first-question times was chosen (a sketch of the resulting question schedule follows the list):
- after 1 minute: used in 3 videos
- after 2 minutes: used in 3 videos
- after 3 minutes: used in 3 videos
- after 4 minutes: used in 2 videos.
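Combining the two parameters, the pop-up positions of the questions in a given video follow a simple rule: the first question appears after the chosen offset, and one further question appears per interval until the video ends. The short sketch below only illustrates that rule; the video length is an assumed parameter:

```python
def question_schedule(first_s: int, interval_s: int, video_length_s: int) -> list:
    """Return the pop-up positions (in seconds) for one video:
    the first question after `first_s`, then one question every `interval_s`."""
    positions = []
    t = first_s
    while t < video_length_s:
        positions.append(t)
        t += interval_s
    return positions

# e.g. a ~12-minute video with the first question after 2 minutes and a 2.5-minute interval
print(question_schedule(first_s=120, interval_s=150, video_length_s=720))
# [120, 270, 420, 570]
```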