
An analysis of the use and effect of questions in interactive learning-videos

Abstract

This study focuses on the positioning of interactive questions within learning videos. It aims to show that the position at which a question occurs has an impact on the correctness rate of its answers and on the learning success. As part of the study, the interactive learning videos in which the questions are placed were used as teaching material with a school class. The pupils worked with the videos for around one month, and some interesting results could be obtained. It is shown that questions which are asked too early in the videos are answered incorrectly more often than other questions. This manuscript also recommends an adequate position for the first question in learning videos. Furthermore, the new hypothesis is constructed that the length of the intervals between pop-up questions plays only a minor role in rather short learning videos. Finally, a positive impact of the learning videos on the long-term learning success of the participants is determined.

Introduction

The currently evolving trend of MOOCs (Massive Open Online Courses) leads, as a consequence, to the usage of videos for teaching (Khalil and Ebner 2013; Lackner et al. 2014). This means that learning-videos are making some kind of comeback, because the maxim "TV is easy and book is hard" (Salomon 1984) had placed videos in a difficult position for being used for the purpose of teaching. This maxim remains relevant because, on the one hand, the technical aspects of videos have changed dramatically over the last decades while, on the other hand, the role of the watchers has stayed more or less the same. Videos were presented by projectors in their early days, and today it is common to search for a video on the Internet and to watch it on many different (mobile) devices. However, the activity of the watchers has not changed drastically, which means that they are still a passive audience.

These days, videos are the most important digital media on the Internet (Lehner 2014). The quality of videos is increasing, so creators need to think about new ways of standing out (dpa 2015; Tembrink et al. 2013). One possibility is to include interactive components. According to Lehner (2014), users on the Internet are used to interactions. They do not want to watch videos passively and prefer interactive features inside a video and being challenged while watching it. In the best case, the user can influence what is happening in the video. Through interactive components, teachers also gain new educational opportunities; e.g. interactions may enhance the pupils' motivation and impart valuable media competences in addition to the syllabus.

In addition, the quantity of information presented to students is huge. Different forms of data are presented to them using various amounts of text, colors, and shapes. It is reasonable to assume that students can only process a limited amount of information simultaneously (Shiffrin and Gardner 1972) and that, due to this, most of it is filtered out centrally (Moran and Desimone 1985). Heinze et al. (1994) explained that a mechanism known as selective attention is the most important part of human learning. As a consequence, it is clear that managing as well as supporting this attention enhances both behavioral and neuronal performance (Ebner et al. 2013; Spitzer et al. 1988).

As mentioned above, the nature of a video is passive, and based on this it is clear that videos only have a consuming character. This indicates that interaction as well as communication could be considered major influencing factors of the learning success because they transform passive watchers into active learners. Due to this, it is important to offer different forms of interaction during a video and to provide possibilities of communication in all forms and directions. Additionally, the interaction with the content of the video is of high importance (Carr-Chellman and Duchastel 2000; Ebner and Holzinger 2003).

To address these influencing factors of learning success, a web-based information system named LIVE (Live Interaction in Virtual learning Environments), first introduced by Ebner et al. (2013), was developed (see Section Interactions in learning-videos and context). It provides the possibility to enrich a video with different forms of interaction. A previous study by Wachtler and Ebner (2015) observed that the approach basically works if the distribution of the interactions is well-balanced, which means that the interactions should be spread evenly across the video. This observation is based on some hypotheses. With the current study we examine the accuracy of the following of these hypotheses, based on short-term as well as long-term evaluations, as suggested by Wachtler and Ebner (2015):

  • Lazy Start: the success rate of the first question is not very high

  • Tight-Placed Errors: the number of correct answers to the questions is decreasing if they are placed too tightly one after another

In other words, the research problem addressed by this study can be summarized as "analyzing the use and effects of interactive learning-videos".

As already mentioned, the platform used for the interactive videos is explained after the presentation of some related work (see Section Related work). After that, the accomplished study is described before the results are pointed out and discussed (see Sections Case study, Results and Discussion). Finally, Section Outlook shows some research limitations, and a summary recaps the main parts of this work (see Section Conclusion).

Related work

This section lists different tools for providing interactive videos as a comparison to the tool used in this study. Before that, it presents some research work in the field of ARSs (Audience Response Systems), because that approach tries to support the attention in face-to-face classroom situations in a similar way.

It is valid to assume that LIVE could be compared to an ARS. Such a system offers the possibility to ask the students different forms of questions in face-to-face classroom situations (Haintz et al. 2014; Tobin 2005). The students are asked to answer these questions by using a special handset or something similar. Furthermore, an ARS usually offers powerful methods of analysis. This approach is comparable to the information system used here because LIVE places questions in learning videos to transform passive watchers into active participants, like an ARS does in face-to-face classroom situations.

Many studies regarding ARSs claim that such a tool has the power to improve the attention as well as the participation of the students (Ebner 2009). The study by Stowell and Nelson (2007), for example, claims that with the help of an ARS the highest formal participation could be reached in comparison to other classroom communication methods (e.g. hand raising). This was also confirmed in a similar way by Cutrim (2008) as well as Latessa and Mouw (2005).

Probably the best-known possibility to enrich a video with interactions is to use the built-in features of YouTube (YouTube 2016). These features are limited to text overlays and simple polls. Furthermore, the possibilities of analysis are very basic. The tool named Zaption (Zaption 2016) offers a very wide range of possible interactions, for instance multiple choice questions. The main drawback of this tool is that the time of occurrence of the interactions is marked in the timeline of the video. This leads to the problem that the students are able to jump from interaction to interaction without really watching the video. Finally, TEDEd (TEDEd 2016) is able to provide questions for videos, too. Unfortunately, these questions are not related to a specific position in the video. They are simply displayed together with the video, and due to that it is possible to access them during the whole video.

Methodology

Interactions in learning-videos and context

This study uses the web-based information system named LIVE to enrich learning-videos with different methods of interaction and communication. LIVE offers the following methods of interaction for both types of videos, on-demand and live broadcasting:

  • Simple questions

    • general questions which are not related to the content of the video

    • random and automatic

    • used to provide interactivity if there is no content-related question

  • Solve CAPTCHAs

    • a CAPTCHA is displayed in the same way and for the same reasons as the simple questions

  • Ask teacher

    • students are able to ask a question to the teacher by using an offered text box

    • the teacher can answer via e-mail or by using an offered dialog

  • Text-based questions

    • the teacher can ask text-based questions to the students

    • during a live broadcast of a lecture, he can ask a question instantly by entering it in a text box

    • for an on-demand video, he can place the question at a specific position before releasing the video

  • Multiple choice question

    • real multiple choice questions or true-false questions

    • the teacher can add these questions at pre-defined positions in the video

  • Report technical problem

    • students are able to report a technical problem

    • mainly used during live broadcasts of lectures if there are problems with the video stream

LIVE is only available to registered and authenticated users, of which there are three different types, namely students, teachers and researchers. The students are only able to watch the videos and to participate in the interactions. The screenshot in Fig. 1 shows LIVE while playing a learning-video (1); the right sidebar (2) provides some control elements to invoke interactions (e.g. asking a question to the teacher). If an interaction occurs, the video is paused and it is not possible to resume playing until the user reacts to the interaction (see Fig. 2). In this case this means that the presented true-false question has to be answered (Wachtler and Ebner 2014b).

Fig. 1: LIVE while playing a video. The video is displayed on the left (1) and some control elements to invoke interactions are placed on the right (2)

Fig. 2: Playing interrupted by an interaction. A true-false question is shown during a video

In comparison to the students, the teachers are additionally able to create interactive videos and to analyze the performance of the students. During the process of creation the teacher can select a video from various sources (e.g. YouTube) and enrich it with interactions by selecting the methods to offer. For instance, it is possible to add questions at pre-defined positions in the video. This is done by selecting the position in the video and by using a dialog to add the question there (Wachtler and Ebner 2014b).

The analysis of the performance of the students consists of two parts. First, there is a detailed recording of the watched time-spans to point out at which time a student watched which part of the video (Wachtler and Ebner 2014a). As an overview, the timeline analysis draws a chart indicating the number of users (green) and the number of views (red) across the video (see Fig. 3). This can be used to identify the most interesting parts of the video. To get more details, it is also possible to view a timeline for each student, as shown in Fig. 4. A red bar marks each watched part of the video in the timeline; if such a bar is hovered over with the mouse, the exact date and time are displayed in relative and absolute values. The second part of the analysis consists of the results of the questions asked during the video. All questions are listed with the answers of the students. Furthermore, the correctness of the answers is displayed. Obviously, this is only possible automatically with multiple choice questions and not with text-based questions; the latter have to be analyzed manually. As an example, Fig. 5 shows the analysis of the multiple choice questions. At the top the number of correct/wrong answers is given, and below that the individual performance of each student is listed (Ebner et al. 2013; Wachtler and Ebner 2014b).

Fig. 3: Timeline analysis. At the top, the chart shows the number of users (green) and the number of views (red) along the timeline of the video. Below that, exact numbers are shown when moving the mouse across the chart

Fig. 4: History analysis. For each user it is possible to view which part of the video was watched. For that, a red bar marks it in the timeline of the video. If the mouse is hovered over such a bar, detailed information is displayed

Fig. 5: Multiple choice questions analysis. For each multiple choice question the number of correct/wrong answers is displayed. Below that, a list of the students shows their answers to the questions
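To make the timeline analysis more concrete, the following minimal sketch shows how recorded watched time-spans could be aggregated into per-second user and view counts, as in the chart of Fig. 3. The data layout and names are illustrative assumptions, not the internal format of LIVE.

```python
from typing import Dict, List, Tuple

def timeline_counts(watched_spans: Dict[str, List[Tuple[int, int]]],
                    video_length: int) -> Tuple[List[int], List[int]]:
    """Aggregate watched time-spans into per-second user and view counts.

    watched_spans maps a user id to a list of (start, end) tuples in seconds,
    one tuple per watched part of the video (illustrative layout).
    """
    views = [0] * video_length
    users = [0] * video_length
    for spans in watched_spans.values():
        covered = set()
        for start, end in spans:
            for second in range(max(0, start), min(end, video_length)):
                views[second] += 1       # every watched pass counts as a view
                covered.add(second)
        for second in covered:
            users[second] += 1           # each user counts only once per second
    return users, views

# Example: one student watched the first two minutes twice, another watched
# minute one to minute five once.
spans = {"student_a": [(0, 120), (0, 120)], "student_b": [(60, 300)]}
users, views = timeline_counts(spans, video_length=720)
print(users[90], views[90])  # -> 2 3
```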

Finally, there are the users of a group called researchers. Members of this group are allowed to download all recorded data as a spreadsheet. This includes the following items:

  • watched timespans of each student

  • the number of users and views per second

  • answers to the different types of questions

In addition, there are some lists containing the names of the videos and the texts of the questions. These lists are needed for cross-referencing because the downloads mentioned above only state the IDs of the videos and the questions.
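As a simple illustration of this cross-referencing step, the following sketch joins a hypothetical answer export with a hypothetical question list by ID. The file and column names are assumptions, not the actual export format of LIVE.

```python
import csv

def join_answers_with_questions(answers_csv: str, questions_csv: str,
                                out_csv: str) -> None:
    """Add the question text to each exported answer row by looking up its ID.

    The column names ("question_id", "text") are assumptions about the layout
    of the spreadsheets, not the exact format produced by LIVE.
    """
    with open(questions_csv, newline="", encoding="utf-8") as f:
        texts = {row["question_id"]: row["text"] for row in csv.DictReader(f)}

    with open(answers_csv, newline="", encoding="utf-8") as f_in, \
         open(out_csv, "w", newline="", encoding="utf-8") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out,
                                fieldnames=reader.fieldnames + ["question_text"])
        writer.writeheader()
        for row in reader:
            row["question_text"] = texts.get(row["question_id"], "unknown")
            writer.writerow(row)
```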

The focus of the current study lies on the distribution of the questions placed at pre-defined positions, because it is important to know where to place these questions so that they support the attention of the students.

Case study

In the course of this study, we investigate the effects of learning-videos on the learners' success. The clear focus is on the interactive component of the videos, with the position of the testing questions within the videos playing a major part. As mentioned above, the following questions are explored: Does the time of occurrence of the first question influence its answers' correctness rate? The hypothesis of Wachtler and Ebner (2015), which claims that the first answer has a higher tendency of being wrong than the following ones (Lazy Start), will be examined carefully. Moreover, we take a close look at a possible relation between the length of the breaks between questions and the correctness of their answers (Tight-Placed Errors).

Moreover, this manuscript gives an outlook on the long-term success of the study. It is measured with the results of a test which took place after half of the videos had been watched. In order to enable comparability, the very same test was given to students from another class who had been taught in a traditional manner. Both classes were at a comparable level before the study, which is a necessary requirement for comparability.

The test itself was constructed by a teacher who was neither involved in the production of the videos nor in the traditional teaching of the second class. Thus, the exercises of the test were not created by a biased person who might have influenced the outcome in a certain direction by inserting tasks similar to those used in the teaching process. So, both classes had the same initial position for this test.

Traditional teaching, i.e. the way the class which does not watch the learning videos is taught, means teacher-centered lessons where the pupils receive direct input at school and mostly work actively at home. As soon as they encounter difficulties while solving problems there, nobody is available to give them a hint on how to proceed.

Study environment

The survey was conducted in the subject of mathematics with a class of an academic high school (BG Klusemann) which has an emphasis on STEM (Science-Technology-Engineering-Math). The vast majority of the 20 students of this class are 16 or 17 years of age. Furthermore, attendance was compulsory. All the videos share the main subject of differential calculus. Fifteen videos were produced for the study, while only seven had been used in class at the time of writing. They cover all the required topics from the Austrian curriculum regarding differential calculus: monotonicity, maxima and minima, inflection points, saddle points, finding polynomial functions and the graphical construction of derivatives.

The learning videos also play an important role in another study which deals with the concept of the flipped classroom (Loviscach 2013). One can already guess from the concept's name what flipping the classroom means: what is done at school in traditional teaching becomes what is done at home and vice versa. So, the input phase – watching the videos – is outsourced from the classroom and the exercises, which used to be homework, are shifted into class.

In order to enable interactivity features in the videos, they are embedded in the platform which is described in Section Interactions in learning-videos and context. The format of the questions which pop up while watching the videos ranges from open questions and true-false questions to multiple choice questions. They resemble the kind of questions used in combination with an ARS (Camuka and Peez 2014). In the majority of cases, their application can be divided into testing theoretical knowledge with true-false questions (see Fig. 6) and multiple choice questions (see Fig. 7), and testing practical understanding with open questions (see Fig. 8). Because theoretical knowledge has to be tested in most cases, multiple choice questions outnumber the other formats.

Fig. 6: True-false question. An example of an interactive true-false question in the learning environment

Fig. 7: Multiple choice question. An example of an interactive multiple choice question in the learning environment

Fig. 8: Open question. An example of an interactive open question in the learning environment

Comparability of the results can of course only be achieved if there is a balance in the complexity of the covered topics and of the questions asked among all videos. The collection of videos contains numerous individual topics which pupils naturally perceive as differing in difficulty. Therefore, it has been attempted to distribute typically challenging topics to all videos in equal measure, and to compensate rather easy with rather tough subjects. For instance, the usually demanding topic of functions and their behavior at infinity, which requires abstract thinking, has been compensated with a video about a topic the pupils had already been confronted with in a previous academic year, namely the principles of extrema. Moreover, different videos which have a certain interval length (see Section Setting of the questions) in common have been balanced in terms of complexity among themselves wherever possible.

Setting of the questions

For the benefit of the learning success, the videos have been designed to be of minimal length (Bergmann and Sams 2012). The average duration of approximately twelve minutes per video does not appropriately match the proposed length of the intervals between the interactive questions (Wachtler and Ebner 2015). The recommendations from the aforesaid paper therefore had to be adapted.

The approach of setting a periodical interval length between occurring questions for each video can be attributed to the recommendations for ARS from Martyn (2007) as well. Therefore, it has already been well tested in a similar setting.

Interval lengths of 2, 4, 6 and 8 minutes, i.e. step sizes of two minutes, had been recommended. The videos used in this study are only approximately 12 minutes long, while those in the study of Wachtler and Ebner (2015) are about 90 minutes long. Consequently, with total video lengths of about one eighth in comparison, the interval lengths had to be shortened drastically. Reducing the interval lengths to exactly one eighth would lead to an immensely high frequency of occurring interactive questions. Thus, a compromise was made by setting the minimal interval between occurring questions to 90 seconds and increasing it in steps of 30 seconds, leading to the final intervals of 1.5 minutes, 2 minutes, 2.5 minutes and 3 minutes. Eleven videos are taken into account for this study, which feature the following distribution of interval lengths:

  • 1.5 minutes: used in 2 videos

  • 2 minutes: used in 2 videos

  • 2.5 minutes: used in 3 videos

  • 3 minutes: used in 4 videos.

In terms of the time at which the first question pops up, some adaptations were necessary as well: the first question appears after either one, two, three or four minutes. For the eleven videos that are taken into account, the following distribution of times of the first occurring question was chosen (the resulting question positions are sketched after the list):

  • after 1 minute: used in 3 videos

  • after 2 minutes: used in 3 videos

  • after 3 minutes: used in 3 videos

  • after 4 minutes: used in 2 videos.
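The question positions of a single video can be reconstructed from these two parameters. The following sketch is an illustrative reconstruction of this scheduling; the study does not state the exact placement rule beyond the first-question time and the interval length, so the rule below is an assumption.

```python
def question_positions(video_length_s: int, first_after_s: int,
                       interval_s: int) -> list:
    """Return assumed positions (in seconds) of the interactive questions:
    the first one after `first_after_s`, then one every `interval_s` seconds
    until the end of the video."""
    positions = []
    t = first_after_s
    while t < video_length_s:
        positions.append(t)
        t += interval_s
    return positions

# Example: a 12-minute video, first question after 3 minutes,
# then an interval length of 2.5 minutes.
print(question_positions(720, 180, 150))  # [180, 330, 480, 630]
```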

Results

Data concerning different interval lengths

Due to the above-listed interval lengths and the different total lengths of the videos, a varying number of questions occurs. The questions of the videos which have an interval length of 1.5 minutes were answered correctly 86 times and wrongly 34 times. Questions which occur in the interval of 2 minutes were answered correctly 72 times and wrongly 53 times. A right-to-wrong distribution of 61 to 24 could be observed for videos with an interval length of 2.5 minutes, and the questions of the videos with the longest interval length were answered correctly in 48 and wrongly in 32 cases. In order to obtain meaningful results, these distributions per interval length are illustrated as ratios in percent in Fig. 9.

Fig. 9: Wrong-to-right ratio for videos of different interval lengths. The blue parts characterize the ratio of the correctly and the orange parts the ratio of the incorrectly answered questions per video type in terms of their interval length
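The percentages underlying Fig. 9 follow directly from the raw counts listed above; a minimal sketch of this calculation:

```python
# Correct/wrong answer counts per interval length, taken from the text above.
counts = {"1.5 min": (86, 34), "2 min": (72, 53),
          "2.5 min": (61, 24), "3 min": (48, 32)}

for interval, (right, wrong) in counts.items():
    share_right = 100 * right / (right + wrong)
    print(f"{interval}: {share_right:.1f} % correct, "
          f"{100 - share_right:.1f} % wrong")
# e.g. "1.5 min: 71.7 % correct, 28.3 % wrong"
```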

Data concerning time of the first occurring question

With regard to finding a reasonable time for letting the first question pop up in the videos, the above-mentioned starting points of 1, 2, 3 and 4 minutes are examined. Again, the number of answers varies due to different numbers of viewers per video and the unequal distribution of the videos among the categories. The answers to the first question with the earliest occurrence were correct 29 times and wrong 24 times. Choosing to let the first question appear after 2 minutes led to 16 right and 26 wrong answers. The first question after 3 minutes showed a right-to-wrong distribution of 30 to 9, and the final observation of letting the first question arise after 4 minutes resulted in 13 correctly and 7 incorrectly answered questions. In favor of meaningfulness, the ratios of wrong vs. right answers are illustrated in Fig. 10.

Fig. 10: Wrong-to-right ratio of the first question. The blue parts characterize the ratio of the correctly and the orange parts the ratio of the incorrectly answered questions per video type in terms of the time of the occurrence of the first question

Contrast between first and remaining questions

The lazy-start hypothesis includes the assumption that the first appearing question is generally answered incorrectly more often than the further questions. To examine this hypothesis, the right-to-wrong ratios of all first questions and of all remaining questions are determined and compared. The first questions of the videos resulted in 88 correct and 66 incorrect answers. All the other questions show an overall right-to-wrong ratio of 309 to 161. These numbers, transformed into percentages, are illustrated in Fig. 11.

Fig. 11: Comparison of the answers to the first and all further occurring questions. The blue parts characterize the ratio of the correctly and the orange parts the ratio of the incorrectly answered questions for the first appearing questions on the left and for all others on the right

As in the other figures, the blue parts of the bars represent the share of correct answers and the orange parts represent the share of incorrect answers. The bars themselves stand for the first questions and for all the other questions in the videos, respectively.

Long-term effects of the learning videos on students

The long-term effects on the learning results of the students are measured via a test. It was conducted after approximately half of the videos had been released to the students. This test was planned independently of this study because tests are strictly scheduled in the Austrian national curriculum for schools. This led to the unfortunate fact that the test covered not only the topics of the videos, namely differential calculus, but also some topics which had been dealt with before and which have no direct correlation to the relevant issues. Thus, the test per se does not mirror the understanding of the topics from the videos a hundred percent. However, the predominant part of the test deals with the topics from the videos, which means that its results are clearly of importance for the actual learning effects of the videos on the pupils. Moreover, the exact same test was conducted in another class, which allows a direct comparison between the two classes. The other class has been taught in a conventional manner, meaning via teacher-centered teaching in the classroom and homework exercises at home. The distribution of grades, which range from 1 being the best to 5 being the worst in the Austrian school system, of the two classes compared in the study is shown in Table 1.

Table 1 A distribution of the grades in the two compared classes

Discussion

Discussion of the results of the first question posed / finding the optimal time of appearance for the first question

First of all, the hypothesis that a question shows a higher rate of being answered incorrectly if it occurs too early in a video is examined. Figure 10 reveals that a question which arises after only one minute, i.e. at about 8 % of the entire video duration, is answered correctly in approximately 55 % of all cases. Starting after two minutes, meaning at approximately 16 % of the total video duration, shows an even lower rate of success with less than 40 %. These two attempts seem to be poorly effective.

The third attempt, posing the first question after three minutes, roughly at a quarter of the video's duration, shows satisfactory results with the highest success rate of more than three quarters. The latest occurrence of the first question – after one third of the video duration – is slightly less successful again, with almost two thirds being answered correctly.

Therefore, setting the first appearance of an interactive question even later is not advisable due to a decrease in efficiency, although the results of these questions are still noticeably better than those of the questions which pop up after approximately 8 or 16 percent of the video's duration. Thus, the recommended time for the first appearance of a question in interactive videos is after about 25 % of the video's duration.

In addition to this first part of the lazy-start hypothesis, which suggests setting the position of the first question's appearance to approximately one quarter of the video's duration, the generally worse results of the very first question have to be mentioned. This is clearly illustrated by Fig. 11, which compares the right-to-wrong distributions of the first and the other questions. So, the correctness rate increases in the course of the videos. That assumption is supported by the first question being answered correctly in 58 % and all other questions in about 68 % of all cases.

However, when these results are examined in terms of statistical significance, it cannot be ruled out that the first answers are incorrect more often merely by chance. The calculated p-value of 0.37 (t=0.907, df=20) speaks for itself. Still, there is a tendency towards the hypothesis that the first question is generally answered incorrectly more often than all others, but it is not significant. Thus, some further investigation of this distinct research question would be necessary to obtain satisfactory and relevant results. A future study with a larger number of videos could give some indication of whether a question being in first position or not is crucial for its right-to-wrong ratio. One possible reason for the weak start might be a lack of concentration at the very beginning. Moreover, the videos are designed to function as homework assignments and usually only one video is part of the preparation for a lesson. An exact exploration of the reasons will be made at the end of the practical part of the survey with the aid of interviews.
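The manuscript does not spell out how this test was computed; a plausible sketch, assuming an independent two-sample t-test on per-video correctness rates (eleven first-question rates against eleven rates of the remaining questions, which would yield the reported df = 20), is given below. The rates are placeholders, not the study's data.

```python
from scipy import stats

# Placeholder per-video correctness rates (NOT the study's data): one value
# for the first question and one for the remaining questions of each of the
# eleven videos.
first_question_rates = [0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.5, 0.7, 0.6, 0.5, 0.8]
other_question_rates = [0.7, 0.6, 0.7, 0.8, 0.6, 0.7, 0.6, 0.8, 0.7, 0.6, 0.7]

# Independent two-sample t-test; 11 + 11 samples give df = 20 as in the text.
t_stat, p_value = stats.ttest_ind(first_question_rates, other_question_rates)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```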

Discussion of the results of different interval lengths between questions

The "tight-placed errors" hypothesis, which claims that questions are more likely to be answered incorrectly if the interval between appearing questions is short, could not be verified. Figure 9 indicates that an interval length of 1.5 minutes leads to a relatively satisfactory rate of approximately 71 % of correct answers. In the case of an interval length of 2 minutes, about 58 % of all questions could be answered correctly by the students. The very best value, if only just, was achieved at an interval length of 2.5 minutes with 72 % of right answers. The longest interval length, namely 3 minutes, showed respectable rates of correctness with an average of 65 % of all questions.

One can notice that no obvious tendency can be observed. The values of 2.5 and 1.5 minutes exhibited the highest success rates but are separated by a lower rate at 2 minutes. Thus, there is no real trend discernible and all the tested videos show a vaguely similar rate of correct answers. All in all, it can be said that all the interval lengths lead to adequate results and would be suitable for videos of a similar length to the ones used in our study. This observation might be explained by the relatively short lengths of the videos in comparison to the ones used by Wachtler and Ebner (2015).

So, we construct the new hypothesis that the interval length between appearing questions in videos of a length of up to around 20 minutes is rather irrelevant. These kinds of videos show similar success rates for all interval lengths. The relevance of having a closer look at the pauses between questions increases with the lengths of the videos.

Discussion of the long-term effects of the learning videos on students

The results of the test which was conducted in the class that has been taught with the concept of the flipped classroom and with the aid of the learning videos show a satisfactory distribution of grades with a mean and median of 3 and a standard deviation of 1.247.

In the class which served as a means of comparison and which was taught in a conventional way, the very same test led to considerably worse results. This class had a distribution tending towards negative results with a mean of 3.948, a median of 4 and a standard deviation of 1.026.

When testing the hypothesis that the first class – the one taught with the learning videos – achieves better results than the other class, a p-value of approximately 0.014 (t=2.573, df=37) is obtained. This value is clearly below the standard significance level of 0.05. The result clearly shows the better performance of the class which received its input via the learning videos compared to traditional teaching methods. In the class which watched the videos, two students managed to gain the highest grade, while no student did so in the other class. Eight of the nineteen students from the latter got the worst grade there is in the Austrian school system, whereas only four of the twenty students from the class which used the learning videos failed the test.
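Again, the exact test procedure is not described in detail; a sketch of how such a comparison could be computed, assuming an independent two-sample t-test on the individual grades (20 + 19 pupils, giving the reported df = 37), follows. The grade lists are placeholders, not the recorded grades.

```python
from scipy import stats

# Placeholder grade lists (NOT the recorded grades): grades from 1 (best) to
# 5 (worst) for the 20 pupils of the video class and the 19 pupils of the
# conventionally taught class.
video_class = [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5]
conventional_class = [2, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5]

# Independent two-sample t-test; 20 + 19 samples give df = 37 as in the text.
t_stat, p_value = stats.ttest_ind(conventional_class, video_class)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```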

This shows that the teaching method applied in this study has a definite positive influence on the students' selective attention and therefore on their long-term success, as evidenced by a clearly better performance in comparison with students who have not watched the videos and by the satisfactory distribution of grades within the class.

Outlook

In order to attain some more meaningful results when it comes to the "tight-placed errors" hypothesis, one would at least have to enlarge the study group or observe the behavior of the success rate with a higher number of videos per interval length. For that, the same interval lengths as presented in Section Case study would be a good choice for videos of a similar length.

Moreover, the newly constructed hypothesis that the impact of the interval length between appearing questions is not as high for shorter videos as for longer ones should be investigated further. It is also recommended that videos with lengths of 30, 45 and 60 minutes be taken into account, in order to find out from which video length onward the interval length becomes relevant for the correctness of the answers. Obviously, the respective interval lengths have to be adapted to those video lengths because the number of appearing questions would otherwise simply become too large.

The long-term effect on students deserves to be observed more closely as well because the test in the middle of the study, which contained some independent topics, is not an entirely convincing source. A recommendation would be some testing after the topic has been completed and all the videos have been watched. Of course, no other topics should be included in this testing process. Furthermore, a group which has been taught in a traditional manner and can be used for a suitable comparison would also be needed. In the optimal case we recommend one group which has been taught in a traditional manner, one group which has been taught with traditional videos without interactions and one group which has been taught with videos containing interactive components. This would facilitate the analysis of how the success depends on the interactive components.

Conclusion

This study deals with the application of interactive videos in math classes. For that, several learning videos were created and enriched with different interactive questions. The distribution of the questions is based on some hypotheses identified by a previous study of Wachtler and Ebner (2015). With the current study the accuracy of these hypotheses is examined by taking short-term as well as long-term evaluations into account.

Based on the evaluation of the first hypothesis it is shown that questions appearing too early are prone to be answered incorrectly. Thus, it is advised not to place the first question too early: around one quarter of the entire video length has proven to be an adequate position for the first interactive question.

Apart from that, the hypothesis which claims that questions show a higher incorrectness rate when they are placed too densely one after another has been examined. This hypothesis, however, could not be confirmed in this work. The interval length between questions does not correlate with the correctness of their answers in the kinds of videos which were regarded for this study. So, the assumption was made that the distances between questions only have an impact on longer videos. A future work could deal with this new hypothesis.

Generally positive long-term results have been achieved throughout this study. These have been examined by a direct comparison between one class that has worked with the videos and one that has experienced conventional teaching methods. The first managed to obtain remarkably better results. It is important to return to the issue of long-term results in a future work with customized testing material which only includes the topics relevant to the videos.

Based on these results the research question (see Section Introduction) is finally answered.

Abbreviations

ARS, audience response system; CAPTCHA, completely automated public Turing test to tell computers and humans apart; LIVE, live interaction in virtual learning environments; MOOC, massive open online course; STEM, Science-Technology-Engineering-Math

References

  • J Bergmann, A Sams, Flip your classroom: Reach every student in every class every day (International Society for Technology in Education, 2012).

  • A Camuka, G Peez, Einsatz eines “audience response systems” in der hochschullehre (2014). medienimpulse-online 2/2014:2–3, http://www.medienimpulse.at/articles/view/656. Accessed 22 April 2016.

  • A Carr-Chellman, P Duchastel, The ideal online course. Br. J. Educ. Technol. 31(3), 229–241 (2000). doi:10.1111/1467-8535.00154. http://dx.doi.org/10.1111/1467-8535.00154.


  • ES Cutrim, Using a voting system in conjunction with interactive whiteboard technology to enhance learning in the english language classroom. Comput. Educ. 50:, 338–356 (2008).


  • dpa, Webvideopreise für “Tubeclash” und Kelly Misses Vlog (2015). http://futurezone.at/digital-life/webvideopreise-fuer-tubeclash-und-kelly-misses-vlog/135.990.258, Accessed 22. February 2016.

  • M Ebner, Introducing live microblogging: how single presentations can be enhanced by the mass. J. Res. Innov. Teach. 2(1), 91–100 (2009).


  • M Ebner, A Holzinger, Instructional use of engineering visualization: Interaction design in e-learning for civil engineering. Hum. Comput. Interaction Theory Pract. 1:, 926–930 (2003).


  • M Ebner, J Wachtler, A Holzinger, in Universal Access in Human-Computer Interaction. Applications and Services for Quality of Life. Introducing an information system for successful support of selective attention in online courses (Springer, 2013), pp. 153–162.

  • C Haintz, K Pichler, M Ebner, Developing a web-based question-driven audience response system supporting byod. J. UCS. 20(1), 39–56 (2014).


  • HJ Heinze, GR Mangun, W Burchert, H Hinrichs, M Scholz, TF Münte, A Gös, M Scherg, S Johannes, H Hundeshagen, MS Gazzaniga, SA Hillyard, Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature. 372:, 543–546 (1994).


  • H Khalil, M Ebner, in International Conference on Higher Education Development. Interaction possibilities in moocs–how do they actually happen, (2013), pp. 1–24.


  • E Lackner, M Kopp, M Ebner, in Proceedings of the 10th International Scientific Conference e-Learning and Software for Education. How to mooc?–a pedagogical guideline for practitioners, (2014).


  • R Latessa, D Mouw, Use of an audience response system to augment interactive learning. Fam. Med. 37(1), 12–14 (2005).


  • F Lehner, Interaktive videos als neues medium für das elearning. HMD Praxis der Wirtschaftsinformatik. 48(1), 51–62 (2014). doi:10.1007/BF03340549. http://dx.doi.org/10.1007/BF03340549.


  • J Loviscach, Moocs und blended learning–breiterer zugang oder industrialisierung der bildung. MOOCs–Massive Open Online Courses: Offene Bildung oder Geschäftsmodell, 239–256 (2013).

  • M Martyn, Clickers in the classroom: an active learning approach. EDUCAUSE Q. 30:3(73) (2007). https://net.educause.edu/ir/library/pdf/EQM0729.pdf.

  • J Moran, R Desimone, Selective attention gates visual processing in the extrastriate cortex. Science. 229:, 782–784 (1985).


  • G Salomon, Television is easy and print is tough: The differential investment of mental effort in learning as a function of perceptions and attributions. J. Educ. Psychol. 76(4), 647 (1984).


  • RM Shiffrin, GT Gardner, Visual processing capacity and attentional control. J. Exp. Psychol. 93(1), 72–82 (1972).


  • H Spitzer, R Desimone, J Moran, Increased attention enhances both behavioral and neuronal performance. Science. 240:, 338–340 (1988).


  • JR Stowell, JM Nelson, Benefits of electronic audience response systems on student participation, learning, and emotion. Teach. Psychol. 34(4), 253–258 (2007).


  • TEDEd, TEDEd - Lessons worth sharing (2016). http://ed.ted.com/, Accessed 22 April 2016.

  • C Tembrink, M Szoltysek, H Unger, Das Buch zum erfolgreichen Online-Marketing mit YouTube (O’Reilly Verlag, 2013). https://books.google.at/books?id=eSCwAgAAQBAJ.

  • B Tobin, Audience response systems, Stanford University School of Medicine (2005).


  • J Wachtler, M Ebner, in Learning and Collaboration Technologies. Designing and Developing Novel Learning Experiences. Attention profiling algorithm for video-based lectures (Springer, 2014a), pp. 358–367.

  • J Wachtler, M Ebner, in World Conference on Educational Multimedia, Hypermedia and Telecommunications, vol. 2014. Support of video-based lectures with interactions-implementation of a first prototype, (2014b), pp. 582–591.

  • J Wachtler, M Ebner, in EdMedia: World Conference on Educational Media and Technology, vol 2015. Impacts of interactions in learning-videos: A subjective and objective analysis, (2015), pp. 1642–1650.


  • YouTube, YouTube Home Page (2016). https://www.youtube.com/. Accessed 22 April 2016.

  • Zaption, Zaption: Turn online videos into interactive learning experiences that engage students and deepen understanding (2016). http://www.zaption.com/. Accessed 22 April 2016.


Author information

Corresponding author

Correspondence to Josef Wachtler.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wachtler, J., Hubmann, M., Zöhrer, H. et al. An analysis of the use and effect of questions in interactive learning-videos. Smart Learn. Environ. 3, 13 (2016). https://doi.org/10.1186/s40561-016-0033-3


  • DOI: https://doi.org/10.1186/s40561-016-0033-3
