Review of explicit methods
In this section, we present the methods identified in the literature that have been used to model the learner explicitly, along with examples.
Evaluating questionnaire answers
Learner modeling can be performed by evaluating the learner's answers to a questionnaire at the beginning of, during, or at the end of the educational game. It can also be done by redirecting learners to an online survey and asking them to answer questions (Moreno-Ger et al. 2007; Huang 2011; Fu et al. 2009; Pourabdollahian et al. 2012).
For example, Moreno-Ger et al. (2007) captured the learner's initial level of knowledge, as part of learner modeling, through a questionnaire and some basic tests. The learners were then provided with an educational game that included an in-game exam, the results of which were used to grade the learners and assess their proficiency. In another example (Huang 2011), after completing the educational game, learners were asked to self-report the level of mental effort they had invested and the difficulty of the learning task on a 9-point Likert scale.
A scale named EGameFlow was used by Fu et al. (2009) to measure learners' enjoyment of educational games. This Likert-type scale contained 42 items, with 1 and 7 representing the lowest and highest degrees of agreement with an item, respectively. Similarly, in Pourabdollahian et al. (2012), learners were requested to answer a questionnaire in order to evaluate their level of engagement. The questionnaire contained 21 questions based on a 5-point Likert scale and grouped into five categories (challenge, immersion, interest, purpose and control). The data collected from the survey was analyzed with both descriptive and inferential statistics to evaluate the learners' engagement.
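To make the scoring step concrete, the following minimal sketch (in Python) aggregates 5-point Likert responses into per-category engagement scores using simple descriptive statistics; the item identifiers and the item-to-category mapping are illustrative assumptions and do not reproduce the actual instrument of Pourabdollahian et al. (2012).

    # Minimal sketch: aggregating 5-point Likert responses into per-category
    # engagement scores. Items and their category assignments are illustrative.
    from statistics import mean

    CATEGORIES = ["challenge", "immersion", "interest", "purpose", "control"]

    def engagement_profile(responses, item_category):
        """responses: {item_id: score in 1..5}; item_category: {item_id: category}."""
        per_category = {c: [] for c in CATEGORIES}
        for item, score in responses.items():
            per_category[item_category[item]].append(score)
        # Descriptive statistics: mean score per answered category (1 = low, 5 = high).
        return {c: mean(scores) for c, scores in per_category.items() if scores}

    # Example with three hypothetical items:
    print(engagement_profile({"q1": 4, "q2": 5, "q3": 2},
                             {"q1": "challenge", "q2": "challenge", "q3": "immersion"}))
    # -> {'challenge': 4.5, 'immersion': 2}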
Interpreting body posture and physiological signals
Various hardware and software equipment has also been used for modeling learners. The behavior of the learners can be recognized by identifying gestures, body posture and physiological signals. This is done by identifying and labeling different reactions to game events and then looking for common features of body expressions (Peters et al. 2009; Amershi et al. 2006; Conati 2002; Conati and Maclaren 2009; Conati and Maclaren 2009a; Rebolledo-Mendez and de Freitas 2008; Jimenez et al. 2011).
Rebolledo-Mendez and de Freitas (2008) and Jimenez et al. (2011) used electroencephalogram (EEG) readings obtained through a Brain-Computer Interface (BCI) as psychometric input to measure attention. The BCI offered the possibility of reading the electric signals generated by neural activity in the brain, which could be used to assess the learners' attention levels. In another example (Peters et al. 2009), the learner's level of attention was evaluated using two components. The first component detected the user's gaze behavior based on input from a web camera, using the direction of the eyes or head to establish the screen-space coordinates of where the user was looking. The second component measured attention levels using a neurophysiological recording device.
Electromyography sensors have also been used to detect several types of bodily expressions (Conati 2002; Conati and Maclaren 2009; Conati and Maclaren 2009a) in order to recognize rapidly changing emotions. Likewise, Amershi et al. (2006) collected data on game events and learners' biometric expressions to form a picture of the most common biometric expressions of emotion towards events within the educational game. Learners' biometric expressions were recorded using four sets of sensors, and each biometric recording was synchronized with logs of the game events that could stimulate an emotional reaction.
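As a concrete illustration of the synchronization step, the short sketch below (Python) pairs a logged game event with the biometric samples recorded around it; the two-second window, the sample format and the skin-conductance example are assumptions made for illustration, not details reported by Amershi et al. (2006).

    # Minimal sketch: aligning timestamped biometric samples with a logged game
    # event. Window size and sample format are illustrative assumptions.
    def samples_around_event(event_time, samples, window=2.0):
        """Return the (timestamp, value) samples within `window` seconds of an event."""
        return [(t, v) for (t, v) in samples if abs(t - event_time) <= window]

    # Example: skin-conductance readings around an in-game event at t = 12.0 s.
    gsr = [(10.5, 0.31), (11.8, 0.42), (12.4, 0.55), (15.0, 0.33)]
    print(samples_around_event(12.0, gsr))  # -> [(11.8, 0.42), (12.4, 0.55)]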
Review of implicit and unobtrusive methods
Several methods have been used to generate learner models implicitly. In this section, we identify and discuss the methods that have been used in the literature.
Translating learner's actions
In this method, the set of actions made by the learner during the game is interpreted and translated into descriptive information useful for modeling the learner. Experts or automated systems define an appropriate description for each set of actions, and these descriptions are then used to model the learner (Lessard 2012; Conati and Zhao 2004; Manske and Conati 2005; Stathacopoulou et al. 2004; Virvou et al. 2003; Katsionis and Virvou 2004; Conlan et al. 2009).
For example, the learner model in Lessard (2012) aimed at creating a representation of the learner's cognitive traits. To do that, the study established connections between the behavior of the learner and his/her actions in the game. Specifically, the manifestations of traits that the learner may exhibit during game play were identified, which could provide evidence of the learner's cognitive abilities.
The learner models in Conati and Zhao (2004) and Manske and Conati (2005) aimed to generate an assessment of the learner's knowledge of number factorization. Knowledge assessment was done by representing the probabilistic relations between the learner's actions and the corresponding pieces of knowledge.
The learner's motivational state was assessed in Stathacopoulou et al. (2004). For that, the learning environment stored in a log file all the available information about what a learner was doing, recording each learner action with a time stamp. Typical examples of learner actions included selection of objects for experimentation, selection of available tools, mouse moves, mouse drags or clicks on tools or objects, and mouse drags when the learner was trying to draw a vector, together with the time when the action was performed. Expert teachers then defined the actions related to each type of evidence of the learner's motivational state.
In another example (Virvou et al. 2003), inferences were drawn about the learners' feelings and reactions based on the time they spent before and after performing certain actions in the educational game. For example, the time that a learner took to answer a question was used to measure the learner's speed. The pausing time after a system response was used to measure the degree of surprise that the response might have caused. The number of times a learner pressed the "backspace" and "delete" buttons while forming an answer was used to measure the learner's degree of certainty concerning a particular answer. Similarly, mouse movements without any obvious intent were used to measure the degree of concentration or frustration on the part of the learner.
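The following minimal sketch (Python) shows how indicators of this kind might be computed from a timestamped action log; the event names, log format and the interpretation of the counts are illustrative assumptions rather than the exact measures used by Virvou et al. (2003).

    # Minimal sketch: deriving response time and deletion counts from a
    # timestamped action log for a single question. Event names are illustrative.
    def answer_indicators(log):
        """log: chronologically ordered (timestamp_seconds, event_name) pairs."""
        t_shown = next(t for t, e in log if e == "question_shown")
        t_answer = next(t for t, e in log if e == "answer_submitted")
        deletions = sum(1 for _, e in log
                        if e in ("keypress:backspace", "keypress:delete"))
        return {
            "response_time": t_answer - t_shown,  # proxy for the learner's speed
            "deletions": deletions,               # proxy for (un)certainty
        }

    log = [(0.0, "question_shown"), (3.2, "keypress:backspace"),
           (5.1, "answer_submitted")]
    print(answer_indicators(log))  # -> {'response_time': 5.1, 'deletions': 1}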
The learner's emotional state was assessed in Katsionis and Virvou (2004). For that, computer logging was used to record learners' actions while they interacted with the educational game. The collected data was analyzed by five human experts, who also observed the learners' actions while they played the game and noted down what the learners were likely to have felt.
The learner model's goal in Conlan et al. (2009) was to generate a real-time evaluation of a learner's skills. The skill assessment engine was responsible for translating each of the learner's actions within the game into a list of probabilities indicating the likelihood that each relevant skill had been acquired by the learner. The learner model then determined which skill states had increased or decreased in probability.
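A minimal sketch of this kind of translation is given below (Python): each observed action adjusts the probability that a related skill has been acquired. The action names, skills and update increments are illustrative assumptions, not the actual evidence model of Conlan et al. (2009).

    # Minimal sketch: mapping observed in-game actions to per-skill acquisition
    # probabilities. Actions, skills and increments are illustrative.
    EVIDENCE = {
        "lens_aligned_correctly": {"optics": +0.10},
        "lens_aligned_wrongly":   {"optics": -0.10},
    }

    def update_skills(skills, action):
        """skills: {skill: probability in [0, 1]}; returns an updated copy."""
        updated = dict(skills)
        for skill, delta in EVIDENCE.get(action, {}).items():
            updated[skill] = min(1.0, max(0.0, updated.get(skill, 0.5) + delta))
        return updated

    skills = update_skills({"optics": 0.5}, "lens_aligned_correctly")
    print(skills)  # optics probability rises from 0.5 to about 0.6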
Interpreting interaction traces
In this method, the game engine monitors and records the interactions between the learner and the game, generating a trace of the actions performed by the learner during the play sessions and tracking many different parameters of the game (start game, end game, quit game, phase changes, significant variables, time, performance, tasks completed, etc.). At the end of the game play, these traces can be delivered to a group of experts or to automated systems, which identify what information can be extracted from them and infer useful information for modeling purposes (Moreno-Ger et al. 2009; Stathacopoulou et al. 2007; Hou 2012; Choi et al. 2005; Serrano-Laguna et al. 2012; Bouvier et al. 2013a; Bouvier et al. 2013b).
For example, the learner's learning style was modeled in Stathacopoulou et al. (2007) by monitoring the learner's actions over time, where each response, such as a keystroke, mouse move or drag, was timed and recorded. The learning environment stored in a log file all the available information about what a learner was doing, recording each learner action with a time stamp. Furthermore, this work relied on teachers' expertise to select the appropriate measures of the learner's observable behavior to serve as indicators of the learner's learning style.
Learners' learning experiences were assessed in Moreno-Ger et al. (2009) by monitoring and recording the interaction between the learner and the educational game, generating a trace of the actions performed by the learner during the play sessions, tracking many different in-game parameters and assigning points to the most relevant actions. These traces were then delivered to the instructors to show them how the learner had interacted with the educational game and to let them infer information from the traces for assessment purposes.
The educational game in Hou (2012) recorded every complete motion and event that took place during the game in order to model the learner's behavior. In this work, a total of 82 unique items were recorded, including various motions (e.g., pressing a certain button) and events (e.g., completing a certain task, talking with other players, entering a new task, starting a new battle or changing the role's costume). These 82 items were divided into 10 behavior categories, each with a name, label and description.
The learner model in Choi et al. (2005) generated an assessment of learners' knowledge by keeping track of the learner's behaviors during the game. The learner model had access to information such as the learner's moves and the tools accessed by the learner. The collected data was then interpreted to determine the state of the learner's knowledge.
In another example (Serrano-Laguna et al. 2012), a high-level assessment report of the learners was generated by first selecting the types of traces that had to be logged to facilitate assessment. The system then identified what information could be extracted from the traces. Finally, the system analyzed the learner's traces and produced reports for teachers.
Learners' engagement was identified in Bouvier et al. (2013a) and Bouvier et al. (2013b) by relying on the traces of the interactions they performed in the learning game. These studies proposed an approach consisting of three stages. The first stage determines the high-level engaged behaviors. The second stage characterizes these engaged behaviors by identifying the underlying chains of actions. The last stage detects these chains of actions among all the recorded actions.
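The last stage can be illustrated with a short sketch (Python) that checks whether a predefined chain of actions occurs, in order, within the full sequence of recorded actions; the action names and the chain itself are illustrative assumptions, not those defined in Bouvier et al. (2013a, 2013b).

    # Minimal sketch: detecting a predefined chain of actions as an ordered
    # (not necessarily contiguous) subsequence of a recorded trace.
    def contains_chain(trace, chain):
        it = iter(trace)
        return all(action in it for action in chain)

    trace = ["start_game", "open_map", "talk_npc", "open_map", "solve_quest"]
    print(contains_chain(trace, ["open_map", "solve_quest"]))  # -> True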
Interpreting conversation
In this method, information about the learner is extracted from his/her communication with a Non-Player Character (NPC). The NPC poses questions, and the game records the learner's choices during these conversations as well as the questions answered during game dialogues (Moreno-Ger et al. 2007; Muñoz et al. 2010; Muñoz et al. 2011a; Muñoz et al. 2011b; Rebolledo-Mendez et al. 2008).
For example, the learner's learning style and preferences were modeled in Moreno-Ger et al. (2007) by interpreting conversations between the learner and an NPC during the game play. In particular, the NPC queried the learner about his/her preferences and asked other questions, and depending on the learner's choices during these conversations, the game detected his/her learning style and preferences.
The learner's emotions were assessed in Muñoz et al. (2010), Muñoz et al. (2011) and Muñoz et al. (2011a) by analyzing the learner's answers during the educational game dialogues.
An NPC in Rebolledo-Mendez et al. (2008) perpetually monitored the learner and requested him/her to answer questions. The system collected and processed the learner's responses, then calculated the learner's level of motivation. Finally, it selected one rule from a set of rules derived from theory and reacted with the aim of sustaining or enhancing the current level of motivation.
Interpreting learner’s errors
The goal of this method is to retain information about what the learner has learnt and what the learner has learnt incorrectly. In addition, this method records learners' misconceptions, errors, the number and types of mistakes made by the learner, the learner's failures and successes, and the time taken to complete the game (Virvou et al. 2003; Champagnat et al. 2010; Virvou et al. 2002; Rebolledo-Mendez et al. 2008; Cheng et al. 2009; Fareed et al. 2010; Khenissi et al. 2013a).
For example, the learner model in Virvou et al. (2003) generated an assessment of the learner's level of knowledge by keeping track of the right and wrong answers that the learner had given and the time he/she had spent to give a correct one.
In another example (Fareed et al. 2010), an evaluation of the improvement of the learner's memory recall was generated by defining recall accuracy as the ratio of the number of questions answered correctly to the total number of questions.
The learner's failures and successes were tracked in Champagnat et al. (2010). In the case of failure, the game had to be tolerant of the learner; that is, the game allowed learners to continue and did not stop them from playing or from repeating the stage, but applied penalties (e.g., fewer points or worse results).
The learner model in Virvou et al. (2002) examined the correctness of the learners' answers in terms of their factual knowledge and the reasoning they used. In addition, the learner modeling component performed error diagnosis based on the cognitive theory of Human Plausible Reasoning (HPR) (Collins and Michalski 1989). In particular, when a learner was asked a question from a domain, HPR was used to perform error diagnosis in case of an error and to find out how close the erroneous answer had been to the correct one.
The learner model in Cheng et al. (2009) assessed the learner's degree of mastery by monitoring his/her performance and comparing it with information stored in a Conditional Probability Table (CPT). The CPT stored the probability of giving a wrong answer even when a concept appeared to be mastered. Taking the CPT into account, the learner model detected evidence of mastery of the subject as well as evidence of deficiency at the skill or concept level.
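To illustrate how such a table can be used, the sketch below (Python) applies Bayes' rule with a "slip" probability (the chance of a wrong answer despite mastery) to revise the belief that a concept is mastered after an incorrect answer; the probability values are illustrative assumptions, not the CPT entries of Cheng et al. (2009).

    # Minimal sketch: revising the mastery belief after a wrong answer, given
    # the probability of slipping (a wrong answer despite mastery). Numbers are
    # illustrative.
    def mastery_after_wrong_answer(p_mastered, p_wrong_if_mastered, p_wrong_if_not):
        numerator = p_wrong_if_mastered * p_mastered
        denominator = numerator + p_wrong_if_not * (1.0 - p_mastered)
        return numerator / denominator

    # A learner believed mastered with probability 0.7 answers incorrectly:
    print(mastery_after_wrong_answer(0.7, 0.10, 0.80))  # -> about 0.23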
An NPC in Rebolledo-Mendez et al. (2008) requested the learner to answer questions. The system then processed the learner's responses as either correct or incorrect and collected contextual information such as the number of correct/incorrect answers and the time taken to answer the questions. Based on this information, the system decided on the appropriate help.
The learner model in Khenissi et al. (2013a) detected the learner's deficiencies in programming subjects. To do so, the model kept track of the learner's wrong answers when he/she arranged a set of unordered instructions to form a program. In particular, when the learner moved an instruction to an inappropriate place, the learner model recorded the incorrect answer and provided the learner with appropriate help.
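A minimal sketch of this kind of check is given below (Python): when the learner drops an instruction at a position, the model compares it with the expected instruction, logs wrong answers and returns a hint. The target program and the hint text are illustrative assumptions, not the actual content of the game described by Khenissi et al. (2013a).

    # Minimal sketch: checking a dragged instruction against its expected
    # position and logging wrong answers. The program lines are illustrative.
    CORRECT_ORDER = ["int sum = 0;", "for (int i = 0; i < n; i++)", "sum += a[i];"]
    wrong_answers = []

    def check_move(instruction, position):
        """Return (is_correct, feedback) for dropping `instruction` at `position`."""
        if CORRECT_ORDER[position] == instruction:
            return True, "Well placed."
        wrong_answers.append((instruction, position))
        return False, f"Hint: line {position + 1} should be: {CORRECT_ORDER[position]}"

    print(check_move("sum += a[i];", 0))
    # -> (False, 'Hint: line 1 should be: int sum = 0;')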
Path following
The goal of this method is to follow the path of the learner during the game play, record the learner's trajectory towards a goal, and then interpret it and infer information about the learner. In fact, each path has a specific meaning and leads to specific information about the learner (Moreno-Ger et al. 2008; Noguez et al. 2009; Rebolledo-Mendez et al. 2009; Kopeinik et al. 2012).
For example, Moreno-Ger et al. (2008) described the game as a state transition system in order to assess the learner's activity inside the game. During the game play, the learner's actions triggered state transitions, and the sequences of actions led to one or more end states. The game engine kept track of the transitions, checked which states the game went through and generated reports describing them.
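A compact way to picture this is a transition table that maps (state, action) pairs to next states while the engine logs every state the game passes through; the states, actions and transitions in the sketch below (Python) are illustrative assumptions, not the content of any game built with the approach of Moreno-Ger et al. (2008).

    # Minimal sketch: the game as a state transition system; learner actions
    # trigger transitions and the visited states are recorded for reporting.
    TRANSITIONS = {
        ("intro", "talk_to_mentor"): "briefing",
        ("briefing", "accept_task"): "lab",
        ("lab", "finish_experiment"): "end_success",
        ("lab", "give_up"): "end_failure",
    }

    def run_trace(actions, start="intro"):
        """Apply a sequence of learner actions and return the visited states."""
        state, visited = start, [start]
        for action in actions:
            state = TRANSITIONS.get((state, action), state)  # ignore invalid moves
            visited.append(state)
        return visited

    print(run_trace(["talk_to_mentor", "accept_task", "finish_experiment"]))
    # -> ['intro', 'briefing', 'lab', 'end_success']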
An NPC within the educational game in Rebolledo-Mendez et al. (2009) monitored the learner and identified his/her learning trajectory. The identified learning trajectory was compared with different predefined learning trajectories, and this comparison assisted the learner model in deciding how to proceed.
In Noguez et al. (2009) and Kopeinik et al. (2012), a given domain was represented in terms of its entities, properties and relationships. The entities and their relationships provided a general scheme, which facilitated tracking the learner's path during the game play and then inferring useful information about him/her.
Taxonomy of learner modeling methods
In this section, a taxonomy of learner modeling methods that use educational games is described. This taxonomy was created based on the literature review discussed in the previous sections. Figure 1 presents the organization of the key terms of our taxonomy from the most general to the most specific.
From the literature, we observe that learner modeling using educational games can be achieved explicitly or implicitly. The explicit way can be accomplished using a questionnaire (Moreno-Ger et al. 2007; Huang 2011; Fu et al. 2009; Pourabdollahian et al. 2012). Questionnaires can solicit direct and precise answers, but they endanger the high level of motivation provided by educational games because they require stopping the learner from playing and asking her/him to answer questions. Alternatively, the use of hardware and software equipment provides an explicit way to model the learner while using educational games (Peters et al. 2009; Amershi et al. 2006; Conati 2002; Conati and Maclaren 2009; Conati et al. 2009a; Rebolledo-Mendez and de Freitas 2008; Jimenez et al. 2011). This method can provide additional information for modeling the learner, but it obviously requires additional hardware and software equipment. Furthermore, the observations collected with this equipment can be interpreted in different ways, which can endanger the reliability of the learner models.
Learner modeling can also be achieved implicitly. This is done by recording the actions of the learners while they are using educational games (Action Modeling) (Hou 2012; Serrano-Laguna et al. 2012; Lessard 2012; Moreno-Ger et al. 2009). The set of actions made by the learners during the game play can be interpreted and translated into descriptive information useful for modeling the learners.
Conversation Modeling is an implicit way to extract information about the learners (Muñoz et al. 2010; Muñoz et al. 2011; Muñoz et al. 2011a; Rebolledo-Mendez et al. 2008). It essentially models the communication between the learner and the non-player character. It records the learner's choices during such conversations and the questions answered during game dialogues. It also records conversations with fellow players.
Moreover, Perturbation Modeling is used to extract information about the learners in an unobtrusive way (Champagnat et al. 2010; Rebolledo-Mendez et al. 2008; Cheng et al. 2009; Fareed et al. 2010). It interprets information about what the learner has learnt and what the learner has learnt incorrectly. It also records learners' misconceptions or errors, and the learner's failures and successes.
Finally, Strategy Modeling is also used to elicit information about the learners implicitly (Moreno-Ger et al. 2008; Noguez et al. 2009; Rebolledo-Mendez et al. 2009; Kopeinik et al. 2012). It concerns long-term learner behavior during the use of an educational game, viewed as a series of learner actions. The learner's strategy can be monitored by following the learner's paths inside the game.
Educational games used to model the learner
This section describes educational games that have been used to model the learner. Most of these educational games include mechanisms to extract specific information about the learners. These educational games are classified according to the taxonomy described in Section 4.
Learner modeling using questionnaires
Trade Ruler: Trade Ruler was designed to teach the general public about the importance of trade between countries. The player in this game takes the role of the "trade ruler" of an island. The ruler is responsible for managing the island's production of its labor-intensive (jeans) and capital-intensive (cell phones) products. The ruler has to decide what to trade with a trading partner, as some islands are better at manufacturing labor-intensive products while others might have an advantage in making capital-intensive goods. The main goal of the ruler is therefore to maximize the islanders' welfare (Huang 2011). This educational game is accessible online (Trade 2015).
Hands-on OS game: The Hands-on OS game was designed to introduce learners to the common problems associated with a computer's operating system. The main goal of this game is to enhance the learner's proficiency in certain skills related to the operating system (Fu et al. 2009).
Set Based Concurrent Engineering game: The main goal of this game is to help practitioners design a simplified airplane by following a simplified approach. The game is divided into two stages. In the first stage, players are requested to design an airplane for a given list of customer requirements without following a specified process. In the second stage, players are provided with the necessary instruments to execute a specified process. Once players complete the design, a comparison of the two stages in terms of total development cost, time and quality is presented to the players (Pourabdollahian et al. 2012).
Learning version of Pacman game (LPG): The LPG aims to motivate learners to correctly answer questions about programming languages. In the traditional version of the Pacman game, players control the Pacman through a maze, eating pac-dots. When all the dots are eaten, the Pacman is taken to the next stage. In addition, four enemies roam the maze, trying to catch the Pacman; if an enemy touches the Pacman, a life-chance is lost. Near the corners of the maze are four dots known as power pellets, which give the Pacman the temporary ability to eat the enemies. When all life-chances have been lost, the game ends. In the learning version of the Pacman game, when the Pacman eats a power star, the learner has to answer a question (about the programming language) in order to continue the game with a 'reverse' role (the Pacman can move freely and eat the enemies for a short period) (Khenissi et al. 2013b). This educational game is accessible online (LPG 2015).
Learner modeling using hardware and software equipment
Prime Climb: Prime Climb is an educational game designed to help 6th and 7th grade students practice number factorization. Prime Climb consists of a series of mountains, each divided into hexes labeled with numbers. Two players must collaborate to climb these mountains. Each player can only move to a number that does not share any common factor with the partner's number. If a wrong number is chosen, the climber falls and swings from the rope until the player selects a correct number. During the game, each student has a pedagogical agent which provides support. This educational game was cited in many works for different purposes (Conati and Maclaren 2009; Conati et al. 2009a).
Learning version of Second Life: Second Life is an open-ended virtual world which offers users opportunities to define virtual experiences with other users. In the learning version, an artificially intelligent NPC poses questions to the learners. This NPC uses a predefined set of reactions and has limited conversation with the learners, asking questions in a multiple-choice format (Rebolledo-Mendez and de Freitas 2008).
Action modeling
Talking Island: Talking Island is an educational Massively Multiplayer Online Role-Playing Game (MMORPG) designed to teach English vocabulary and conversational skills to elementary school students. It includes elements that are generally found in an MMORPG (e.g., team-work, battles, pets and role-playing situations). The learner can practice pronunciation through the voice-recognition module included in this game. This module diagnoses the correctness of the pronunciation and determines whether the learner reaches a new level or passes a quest. In addition, this game allows players to team up, communicate via voice or text, and solve quests jointly (Hou 2012). This educational game is accessible online (Talking 2015).
Virtual Reality Game for English: It is an educational game designed to teach English orthography and grammatical rules. Learners navigate through the virtual world and answer questions concerning English spelling. In this virtual world, there are three types of animated agents: the advisor, the guard of a passage (who acts as a virtual enemy) and the learner's companion. The advisor agent appears in situations where the learner has to read new parts of the theory or has to repeat parts that s/he appears not to know well. The virtual enemy agent is responsible for asking the learners questions. Finally, the virtual companion appears to make remarks in a casual way, as if a friend were talking to the learner (Katsionis and Virvou 2004).
Vectors in Physics and Mathematics: It is a discovery learning environment designed to help secondary school students construct the concepts of vectors in physics and mathematics, taking into account the conceptual difficulties that the students face. The thematic units of the learning environment are: Position and Displacement; Motion; Forces and Equilibrium; Forces and Motion; and Forces and Momentum. Each of these units contains several scenarios, which refer to real-life situations (Stathacopoulou et al. 2007; Stathacopoulou et al. 2004).
Balance game: The balance game was developed to guide students towards understanding the concept of moment, a quantity representing the magnitude of a force influencing the rotation of an object. The object represented in the game is a bar lying on a wheel that acts as the pivot for the system. Two boxes at the sides of the bar represent two forces acting on the beam. The learner has to balance and control the movement of the bar by changing the force acting on one side of the beam, which is done by catching balls of different weights (Cheng et al. 2009).
ELEKTRA: It is a narrative-driven adventure game. In this game, the learner has to solve several physics-oriented puzzles. A virtual character (representing the ghost of Galileo) guides, advises and encourages the learner during the game (Conlan et al. 2009). The ELEKTRA project is accessible online (ELEKTRA 2015).
Learning Version of Memory Match Game (LMMG): It is specifically designed to foster second language acquisition. The Memory Match Game is a card game in which all of the cards are laid face down on a surface. The objective of the game is to turn over pairs of matching cards in the fewest possible tries. In the traditional version of the memory match game, all cards hold only visual information. In the learning version, however, other types of information have been added: the graphics are kept, and sounds, words and mathematical calculations have been added alongside them. The LMMG encourages learners to watch, read and listen to the contents of each card and then try to match each pair (Khenissi et al. 2014a; Khenissi et al. 2014b). The LMMG is accessible online (LMMG 2015).
Conversation modeling
PlayPhysics: It is a role-playing game in which the learner plays the role of an astronaut on a mission. The main goal is to save the mentor (Captain Richard Foster), who has been trapped on the space station Athena, and to reestablish control over the station. To achieve this, the learner has to overcome challenges by applying the principles and concepts of physics (Muñoz et al. 2010; Muñoz et al. 2011; Muñoz et al. 2011a).
<e-Adventure> educational game engine: <e-Adventure> is a game engine designed to facilitate the creation of educational games. Games delivered by the <e-Adventure> engine are point-and-click adventure games. During the game, the learner can learn by interacting with objects, consulting in-game books and conversing with other characters. The <e-Adventure> engine includes mechanisms that monitor the learner's activities and then provide adaptation and assessment (Moreno-Ger et al. 2007a; Moreno-Ger et al. 2007b). The <e-Adventure> platform is accessible online (E-Adventure 2015).
Perturbation modeling
VR-ENGAGE: It is a virtual world through which the learner navigates with the goal of finding the hidden book of wisdom. Several obstacles challenge the learner while navigating through the virtual world. A dragon guard (a virtual enemy that closes doors) poses a question to the learner from the domain of geography. Depending on the correctness of the learner's answers, the dragon allows her/him to continue through the door, which leads her/him closer to the "book of wisdom" (Virvou et al. 2002).
Instruction Right Place Game: This game allows learners to use drag-and-drop technology to construct a program (in any programming language) in an amusing way. The game breaks down complex programming tasks and guides learners through a series of small steps to form a program interactively (Khenissi et al. 2013a). This educational game is accessible online (IRPG 2015).
Strategy modeling
80Days: 80Days is an adventure game used to teach geography to a target audience of 12 to 14 year olds, following European geography curricula. The game story is about an alien scout called Feon who kidnaps a boy (the player character) and travels with him around the world in a spaceship to collect relevant geographical information. The player assists the alien in exploring the planet and creating a report about the Earth and its geographical features. In the course of the game, the player discovers the aliens' real intention, which is to prepare for the conquest of the Earth. The player has to save the planet, and the only way to do so is to draw the right conclusions from the treacherous Earth report (Kopeinik et al. 2012; Kickmeier-Rust et al. 2011). The 80Days project is accessible online (The 80Days 2015).