TrueBiters, an educational game to practice the truth tables of propositional logic: Development, evaluation, and lessons learned

Abstract

For years, the logic course in the first year of our Bachelor program in Computer Science, covering propositional and predicate logic, has suffered from poor pass rates. Students perceive the formal and abstract mechanisms of logic as difficult and awkward to deal with. Many do not study the course material on a regular basis, which results in poor mastery of the basic concepts and principles. Consequently, students often fall behind as the course progresses. Previous attempts to remedy this procrastination behavior had little success. Since educational games are commended as an enjoyable way to foster learning, we decided in 2016 to develop an educational game for the course. This game, called TrueBiters, is a two-player competitive game inspired by an existing card game. We adapted this card game to propositional logic and digitized it as an app. The development was done iteratively: each version was evaluated and, using the user feedback and evaluation results, improved and reevaluated. Based on the results of the evaluations we can conclude that the game is well suited for its target audience, i.e., logically-mathematically intelligent people, and is a good supplement to, and even replacement for, some of the traditional face-to-face exercise sessions. However, we also learned that a proper embedding into the course is needed to ensure that all students actually use the game and benefit from playing it. In this paper, we present the game, explain and motivate its evolution, discuss the evaluations performed, and present lessons learned.

Introduction

For many years, the logic course in the first year of the Bachelor program in Computer Science at our university has been a stumbling block for students. Over the years, on average less than 30% of the students passed the exam on their first attempt. We identified several reasons for this. (1) Unfamiliarity/lack of ability: our first-year students do not seem to be familiar with using a formal language with a high level of abstraction; most of them perceive this as difficult. (2) Lack of motivation: although the course starts by explaining and illustrating the relevance and usefulness of logic for Computer Science, students easily lose interest, probably because appealing applications of logic only appear much later in the study program. (3) Procrastination: the course starts quite easily by introducing the basic concepts of logic (propositions and truth values; logical operators, formulas, and truth tables), and although there are weekly mandatory exercise sessions, many students do not feel a need to practice these basic concepts. However, because the rest of the course builds on these concepts, a proper mastery of the basics is essential to benefit from the remaining lectures and exercise sessions. Students who cannot easily deal with the fundamental logical concepts quickly fall behind as the course progresses, resulting in even less motivation and more procrastination. We have tried to remedy this lack of motivation and the procrastination behavior in different ways, e.g., by enriching the teaching material with illustrations, giving more concrete examples, introducing a "paradox of the week", and assigning mandatory homework. However, these efforts were not significantly successful. Since educational games are commended as an enjoyable way to foster learning, and given our research on serious games, we decided to develop an educational game for the course and to investigate whether it could be used as a trigger to engage students with the topic and let them practice more.

The game is inspired by the two-player competitive card game "bOOleO" on Boolean logic ("bOOleO", n.d.). We replaced Boolean logic with propositional logic and made the game digital. Compared to a tabletop game, the digital version has the advantage of allowing automatic verification of the correct use of the rules of logic during gameplay. In addition, it allowed us to provide a single-player version. The game was developed for smartphones because most students have one and playing games on smartphones is a popular activity among youngsters. The development was done in an iterative way: each version was evaluated, improved, and reevaluated.

The paper is organized as follows: we first review other educational games related to teaching logic. Next, we describe the principles of the gameplay, the different iterations in the development, and the evaluations performed. These sections are followed by a discussion of the results and lessons learned. The paper ends with a conclusion.

Related work

One of the first educational games related to logic was Rocky's Boots (Burbules & Reese, 1984). In this game, children are introduced to the basic operations of logic (AND, OR, and NOT), which they can use to construct "machines" using electrical components. According to the authors, the game was perceived as "intrinsically enjoyable and interesting by its players" (Burbules & Reese, 1984).

A prototype of a learning environment for teaching binary arithmetic and logic gates (AND, OR, NOT, and XOR) is described in (Waraich, 2004). The environment uses a fantasy narrative, i.e., a computer on a ship that acts as the tutor and tests the player's understanding through a series of tasks. An evaluation of learning using pre- and post-tests and a control group showed that the game improved the test scores of the players. Furthermore, the players perceived the game as enjoyable.

Schäfer et al. (2013) present a game-based multi-touch table environment for learning and practicing propositional logic. The authors also address the problem of low retention and high dropout rates of computer science students in the early phase of the study, due to topics such as mathematical logic. The focus of this game is the resolution method of propositional logic. The logical concepts are taught in an abstract way, similar to textbooks, and practice takes place at the same level of abstraction. The results of the evaluations indicated that the game was perceived as easy to use, helpful, fun, and motivating. However, the evaluation focused mainly on usability and player experience, not on learning outcomes.

In Øhrstrøm, Sandborg-Petersen, Thorvaldsen, and Ploug (2013), the development of web-based tools, Prolog+CG, for teaching logic is described. These tools concern syllogisms and propositional arguments and use gamified quizzing. For example, for syllogisms, the learner is given an argument and must then evaluate its validity; the user wins after giving enough correct answers in a row. Another tool that uses similar methods is LogicPalet ("LogicPalet", 2019), designed to help learners master the basic concepts of logic. Although these kinds of tools may be valuable, they cannot be classified as games, despite using some gamification techniques.

What distinguishes our game from these works is that our game focuses solely on practicing the truth tables. We opted for this simple and single goal to ensure that students would master these truth tables thoroughly. We also included the IMPLY and EQUIVALENT operators because students generally perceived these operators as difficult to utilize. Furthermore, we tried to reduce the abstraction level of the subject matter by not using the abstract symbols typically used in propositional logic.

With respect to non-digital games, "bOOleO", the game that ours is based on, is a strategy card game based on the principles of Boolean logic. It is a two-player competitive game, and the goal is to reduce a list of bits by building an inverse pyramid using logic gates. Similarly, in (Hicks & Milanese, 2015), a board game, "The Logic Game", was proposed for learning the truth conditions of the logical operators. An experiment with students taking a logic course showed that playing this game had a significant impact on their skills and understanding of logic. Like ours, this game is presented as complementary to the regular teaching. Its principles are very similar to ours, but it uses the regular symbols for the truth values and operators, and thus does not address the issue of logic's formal and abstract notation. Other non-digital games to teach propositional logic are WFF 'N PROOF (Allen, 1965) and the propositional logic card games proposed in (Shiver, 2013), but these games focus on learning inferences or proofs in logic, which differs from our goal.

Principles of the gameplay

The goal of our game, called TrueBiters, is to provide an engaging game for students to practice the truth tables for the basic logical operators of propositional logic: AND, OR, IMPLY, EQUIVALENT, and NOT. TrueBiters is a multiplatform game (Android, iOS, Web) that can be played alone or by two players, either on a shared device or over an Internet connection with two devices.

The game starts by generating six random binary values (bits), representing truth values; 1 represents TRUE and 0 represents FALSE. These six bits are placed at the top of a reverse pyramid of empty tiles (see Fig. 1) that the player must fill in such a way that the bottom tile corresponds to the rightmost value of the initial list of bits. The pyramid is filled turn by turn, each time by applying an available binary operator to two adjacent bits, reducing them to a single bit. For instance, the OR-operator applied to 0 and 1 results in the value 1. In the two-player mode, the opponent has the same goal but for a pyramid starting with all six bits inverted. Each player has a limited number of logical operators at her/his disposal to fill the pyramid. At every turn, the player can perform one reduction. The first player to achieve her/his goal wins. In the single-player mode, the player wins by completing the pyramid correctly with the available operators.

Fig. 1. The reverse pyramid with the starting bits at the top
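To make the reduction rule concrete, the sketch below shows how an operator's truth table turns two adjacent bits of one row into a single bit of the row below. It is a minimal illustration in Python; the names and data layout are our own and are not taken from the game's code base.

```python
# Truth tables for the binary operators, with bits encoded as 0 (FALSE) and 1 (TRUE).
TRUTH = {
    "AND":        lambda a, b: a & b,
    "OR":         lambda a, b: a | b,
    "IMPLY":      lambda a, b: (1 - a) | b,   # a -> b is 0 only when a = 1 and b = 0
    "EQUIVALENT": lambda a, b: int(a == b),
}

def reduce_bits(left: int, right: int, operator: str) -> int:
    """Reduce two adjacent bits to one bit by applying the operator's truth table."""
    return TRUTH[operator](left, right)

# Example from the text: the OR-operator applied to 0 and 1 results in 1.
assert reduce_bits(0, 1, "OR") == 1

# Filling the second row of the pyramid (in the game this takes one turn per tile;
# here every tile uses AND, purely for brevity).
top_row = [1, 0, 1, 1, 0, 0]
second_row = [reduce_bits(top_row[i], top_row[i + 1], "AND") for i in range(len(top_row) - 1)]
print(second_row)  # [0, 0, 1, 0, 0]
```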

To make the goal of reducing bits less abstract, the binary operators are represented by cute monsters (see Fig. 2). A monster representing a binary operator eats two bits and spits out one bit. By using monsters instead of the regular abstract symbols for the logical operators, we aimed to reduce the level of abstraction and make the reduction process more concrete. However, to keep the link with propositional logic, the name of the operator (AND, OR, IMPLY, …) is shown on the monster. For the same reason, we used the values 1 and 0 instead of TRUE and FALSE: 1 and 0 are well known and more concrete than TRUE and FALSE, and this representation is commonly used in Computer Science for Boolean values, which also prepares the students for that use.

Fig. 2. The TrueBiters monsters representing the logical operators

Each type of monster representing a binary operator comes in two variants: one that spits out the 1-value and one that spits out the 0-value. Figure 3 shows the 1-variant and the 0-variant of the monster representing the AND-operator. This reflects the fact that the application of a logical operator can result in TRUE or FALSE. In logic, the value resulting from a logical operator depends on the input values and is defined by the truth table of the operator. As such, when reducing two bits into one, the player must not only know which type of monster to use (i.e., which logical operator), but also which variant. When the wrong monster variant is used, the turn is over. The player might also receive minus points, depending on the game mode. This motivates the players to become very familiar with the truth tables of the logical operators, which is our goal.

Fig. 3. The two variants of the monster representing the AND operator
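A move can then be checked against the truth table: the chosen variant is only correct if the bit it spits out equals the truth-table value for the two bits it eats. The sketch below builds on the TRUTH dictionary of the previous snippet; the function name and parameters are illustrative assumptions, not the game's actual API.

```python
def is_valid_move(left: int, right: int, operator: str, claimed_output: int) -> bool:
    """A monster variant is only valid if its output bit equals the truth-table value."""
    return TRUTH[operator](left, right) == claimed_output

# The 1-variant of the AND-monster is correct on (1, 1) but not on (1, 0):
assert is_valid_move(1, 1, "AND", claimed_output=1)
assert not is_valid_move(1, 0, "AND", claimed_output=1)  # wrong variant: the turn is over
```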

There is also a monster that represents the NOT operator. NOT is a unary operator that turns FALSE into TRUE and vice versa. Therefore, a NOT-monster eats only one bit and spits out the opposite value. A player can use a NOT-monster to counter the progress of the opponent: it can be used to swap the value of one of the bits in the player's initial list of bits (at the top of the pyramid). This operation also inverts the corresponding bit of the opponent. Therefore, if used well, this action may invalidate several of the reductions made by the opponent. For instance, the reduction of 1 and 1 by means of the 1-variant of the AND-monster becomes invalid if one of these input bits is swapped to 0. However, as the application of this monster also affects the player's own initial list of bits, it has to be used with care, because it can also invalidate a number of the player's own reductions; using NOT is only beneficial when the damage is greater for the opponent. This requires the player to reason about the possible consequences for all operations already applied in the game.
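The sketch below, again building on the previous snippets and using an assumed data layout, illustrates why flipping a top bit can ripple down: every placed reduction whose inputs changed must be rechecked against the truth table, and reductions that no longer hold become invalid.

```python
def flip_top_bit(top_bits, index):
    """Apply a NOT-monster: swap one bit in the initial row (the opponent's corresponding
    bit is flipped as well, but only one player's pyramid is modeled here)."""
    flipped = list(top_bits)
    flipped[index] = 1 - flipped[index]
    return flipped

def invalidated_moves(top_bits, placed_rows):
    """Return the (row, tile) positions of placed reductions that no longer match the
    truth tables. placed_rows[r][i] is None for an empty tile, or a pair
    (operator, claimed_output) for a monster that was already placed."""
    current = list(top_bits)
    invalid = []
    for r, row in enumerate(placed_rows):
        next_row = []
        for i, move in enumerate(row):
            left, right = current[i], current[i + 1]
            if move is None or left is None or right is None:
                next_row.append(None)            # empty tile or unknown inputs
                continue
            operator, claimed = move
            if TRUTH[operator](left, right) != claimed:
                invalid.append((r, i))           # this reduction no longer holds
                next_row.append(None)
            else:
                next_row.append(claimed)
        current = next_row
    return invalid

# Example from the text: 1 AND 1 reduced with the 1-variant of the AND-monster
# becomes invalid once one of its input bits is swapped to 0.
top = [1, 1, 0, 1, 0, 1]
placed = [[("AND", 1), None, None, None, None]]
assert invalidated_moves(flip_top_bit(top, 0), placed) == [(0, 0)]
```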

The types of monsters, as well as the total number of monsters that a player has at her/his disposal, depend on the difficulty level of the game. There are three levels: easy, medium, and hard. At most six binary monsters are visible per turn. If no applicable monster is available, the player can use a turn to discard a monster (i.e., throw it in the bin) and, as long as there are still monsters available for the player, another one becomes visible. There is no guarantee that the player can finish the game with the monsters received. For the medium and hard difficulty levels, a timer is also used, meaning that the player must make a reduction within a given time limit. The feature of collecting points can be enabled or disabled.
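As an illustration of how these options could be captured in a configuration, a small sketch follows. The field names and concrete numbers are assumptions chosen for the example; they are not the values used in TrueBiters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DifficultyLevel:
    name: str
    deck_size: int                  # total number of monsters a player gets for the game
    visible_monsters: int           # at most 6 binary monsters are visible per turn
    turn_time_limit: Optional[int]  # seconds per turn; None means no timer

# Example values only; collecting points can be switched on or off independently.
EASY = DifficultyLevel("easy", deck_size=24, visible_monsters=6, turn_time_limit=None)
MEDIUM = DifficultyLevel("medium", deck_size=20, visible_monsters=6, turn_time_limit=30)
HARD = DifficultyLevel("hard", deck_size=16, visible_monsters=6, turn_time_limit=15)
```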

Development

Initial version

The initial version of the game was developed in 2016. It imitated the original card game in a digital way: an Android tablet was used to render the board (i.e., the pyramids), and each player used an Android smartphone that held the player's stack of cards. Each card contained a monster, and only one card was visible at a time. Figure 4 shows this setup. The player could inspect the cards by swiping left and right on the smartphone and could place a card on the board (tablet) by swiping the card up. The devices communicated via Bluetooth.

Fig. 4. Setup of the initial (first) version of the game

The game also allowed for self-training. In the self-training mode (single player) only one pyramid was shown on the tablet and only one smartphone was used.

Second version

Based on the first evaluation (see next section), a timer was added to limit the time taken by a player per turn, requiring the players to be more familiar with the truth tables. We also adapted this version to be usable in a larger-scale experiment. A logging mechanism was added to keep track of winning/losing and of the mistakes made by the players during gameplay.

Third version

The first two versions of the game required several devices to play: one tablet (for the board) and one to two smartphones depending on the number of players. Although using a tablet to hold the board initially seemed a good idea, it made it hard to roll out the game, because lots of devices were needed. Therefore, it was decided to drop the tablet. This required a new design in which the game board and the deck of cards had to be visualized on a relatively small screen. Moreover, the original version was only available for Android devices. In order to make the game available for multiple platforms, it was decided to re-develop the game using technology that would make it easier to also support other platforms/operating systems.

At that time, "cross-platform app development tools" were promoted for supporting the development of apps for different platforms and operating systems. These kinds of development environments allow crafting different builds for different devices from a single code base. We selected CORDOVA ("CORDOVA", 2015) as the development environment and redeveloped TrueBiters from scratch, first creating the single-player mode, with the intention of adding the two-player mode afterwards. The single-player version was available for Android, iOS, and as a Web application (see Fig. 5).

Fig. 5. Version 3 – single-player screen

The concept of cards was dropped to save space: the pyramid and all six monsters available for a turn were shown on the screen at the same time. The available monsters appeared either at the right side or at the bottom of the screen, depending on the screen size. This removed the need for swiping left and right to inspect the available cards; dragging was now used to place a monster on the pyramid.

However, crafting the two-player version using Bluetooth as the communication protocol between the devices proved to be troublesome with CORDOVA.

Fourth version

We chose to replace CORDOVA with Unity3D ("Unity3D", 2019), a widely used game engine that supports a wide array of devices and operating systems out of the box. In autumn 2017, this resulted in a new version with both single- and two-player modes, available for Android and iOS and as a Web application. The single-player mode uses one device, while the two-player mode can be played with one or two devices (see Fig. 6 for an illustration of this version). When only one device is used, the players have to share the screen; in this case, the players can see each other's available monsters, but not at the same time.

Fig. 6. Version 4 – two-player mode using one device

Fifth version

In the earlier versions, we used Bluetooth for the communication between devices because we did not want to rely on the availability of an Internet connection. However, since Internet access has become widely available over the years, either through Wi-Fi or through mobile data, and using Bluetooth was still problematic for communication between devices running different operating systems, we decided to replace Bluetooth with communication over the Internet. Moreover, when only one device is used (either in the single-player mode or the two-player mode), playing without an Internet connection remains possible.

Furthermore, based on the feedback received during the evaluation of the fourth version, the game was further improved: the distribution of operators over the different levels was revised; the look and feel was improved and new graphics specifically designed for the game were used; and difficulty levels were added, with the option to enable or disable collecting points.

In the new setup, a player by default sees her/his own pyramid but can also inspect the pyramid of the opponent by swiping (see Fig. 7).

Fig. 7. Version 5 – two-player mode using one device

TrueBiters is now available for free from the Apple App Store and Google Play. A short explanation and a tutorial are available on its website ("TrueBiters", 2018).

Evaluations

Each version of the game was evaluated for game experience using the Game Experience Questionnaire (GEQ) (IJsselsteijn, De Kort, Poels, Jurgelionis, & Bellotti, 2007). We will not discuss these evaluations in detail, as their main purpose was to improve usability and the game experience. Instead, we focus on the evaluations related to the learning effect of the game and its suitability for the target audience.

The main aim of developing the game was to improve our students' knowledge of the truth tables of propositional logic. However, we also took this opportunity to study the impact of using a player-centered approach in the design of a learning game. Player-centered design is a design paradigm in which different aspects of the game are tailored to suit one or more groups of players who can be clustered based on a certain characteristic (e.g., playing style, personality, type of intelligence) (Sajjadi, 2017). In particular, we wanted to investigate whether taking into consideration individual characteristics of players according to the theory of Multiple Intelligences (MI) (Gardner, 2011) could be beneficial for the game as well as for the learning experience. The theory of MI states that all human beings have eight distinct intellectual capabilities (called intelligence dimensions), but that the strength of these capabilities varies from person to person. For instance, somebody can be strong on the logical-mathematical dimension but weaker on the verbal-linguistic dimension. According to this theory, these intellectual strengths influence the way people learn. Although the theory explicitly deals with individual differences in terms of learning capabilities, research on educational game design that studied its merits for personalization was rather scarce. We recognized in (Sajjadi, Vlieghe, & De Troyer, 2017) that there are controversies about this theory: opponents (e.g., Waterhouse, 2006a, 2006b) criticize the lack of strong empirical evidence for the existence of the dimensions, while proponents (e.g., Chen, 2004) argue that the value of such a theory lies rather in the contributions it could make to the field. We agree with the proponents and consider the theory a possibly useful mechanism for personalization. We therefore investigated whether it could be used to understand players' behavior and motivation for using a game (Sajjadi, Vlieghe, & De Troyer, 2016c) and to adapt games accordingly (Sajjadi et al., 2017). Consequently, in a number of our evaluations we also measured the strengths of the different "intelligences" (as defined by MI) of the participants, using the MIPQ questionnaire (Tirri & Nokelainen, 2011).

Pilot evaluation

Before rolling out the game in our logic course, we first performed a pilot study to investigate the potential learning outcome of the game, as well as the game experience. This was done with the first version of the game. For this evaluation, we invited the students who had failed their logic exam in the first session of the academic year 2015–2016 to participate. Although the students were incentivized through a gift and were assured complete anonymity, only four students (out of 38; all male) volunteered to participate. Before playing the game, we measured the participants' levels of the MI intelligences. To measure learning outcome, we used a pre- and post-test containing questions that required the use of the standard truth tables to solve them; during the course, the students did similar exercises. After the pre-test, the participants were given 10 min of self-training with the game. Next, they played the game in the form of a tournament. Afterwards, the participants did the post-test and filled out the GEQ questionnaire.

In addition to these four participants, seven more students (six male and one female) played the game. These participants were students from the 2nd year of the Bachelor in Computer Science who had already passed the logic exam, so measuring their learning outcome was not considered. For these participants, the focus was purely on evaluating the relationship between their game experience and their dominant intelligences.

The results of this evaluation showed that the players who had the logical-mathematical intelligence as one of their dominant intelligences experienced challenge, more competence, immersion, and flow. However, they also felt slightly more tension, slightly less positive affect, and more negative affect. This could be due to the fact that the interaction modality of this version of the game was inherently kinesthetic, i.e., gesture-based. Although based on small numbers, the fact that players with the bodily-kinesthetic intelligence as one of their dominant intelligences (4 players, 36%) experienced less tension and less negative affect could corroborate this.

The results also revealed that, when considering the dominant intelligences of the players, those who had the logical-mathematical intelligence as one of their dominant intelligences learned most from the game. To understand this, we analyzed the game mechanics used and compared them with the evidence-based mappings between game mechanics and the MI intelligences given in (Sajjadi, Vlieghe, & De Troyer, 2016b). Our analysis showed that the key mechanics of TrueBiters are strategizing and logical thinking, and according to these mappings those game mechanics suit the logical-mathematical intelligence well. More details of this pilot study can be found in (Sajjadi, El Sayed, & De Troyer, 2016).

Second evaluation

The next evaluation was done with the second version of the game. It took place in the academic year 2016–2017. For this evaluation, the students following the logic course were divided into a control and an experimental group. The control group (27 participants) received the classical exercises on truth tables under the supervision of a teaching assistant, while the experimental group (23 participants) played the game after a short briefing session. The groups were uniformly composed based on the results of the mathematics test that our students have to do at the start of the academic year.

Both groups first performed, under the guidance of the teaching assistant, some classical exercises on truth tables, as the goal of the game is not to learn the truth tables but to practice them. At the start of the experiment, they completed a pre-test and at the end a similar post-test to verify the learning effect. We also observed the behavior of the participants of the experimental group while playing the game.

Based on the data from the pre- and post-test, our first hypothesis, "participants who played the game will make fewer mistakes in using the logical operators than those who did not", could not be accepted. We actually saw in the post-test that the students in the experimental group made significantly more mistakes in general (p = .004, mean for control group = 1.30, mean for experimental group = 3.65), as well as with the IMPLY operator (p < .001, mean for control group = 0.22, mean for experimental group = 2.57).

Because we were surprised by this result, we investigated further and reached the following explanation. At each turn during the game, the player has several operators to choose from. Based on our observations during the experiment, almost all participants tended to ignore the IMPLY operator, presumably because students perceive this operator as the most difficult one to understand, remember, and apply. Therefore, the students in the experimental group did not train adequately with this operator, while the exercises (made by the control group) emphasize its use. As the difference between the numbers of mistakes was only significant for this operator, this is likely the cause of the observed difference in the total number of mistakes. One way to remedy this situation is to enforce equal use of all operators in the game, or even to force more practice with the IMPLY operator. For this purpose, we adjusted the distribution of the operators over the different levels in the third version of TrueBiters.

After the final exam (January 2017), the pass rate was compared with those of previous years. It was significantly better (54% versus 30%). However, it was premature to draw conclusions from this, as we could not dismiss the possible presence of other factors, or simply chance.

Third evaluation

As we had already performed an experiment comparing learning outcomes, we decided to set up another type of experiment in the following academic year (2017–2018). With this evaluation, we mainly wanted to investigate the impact on motivation, but the overall learning effect was evaluated as well.

In that year, the students of the logic course were introduced to the game during a class session. They received a 15-min briefing and then played the game for 20 min. Afterwards, students were asked to practice with the game voluntarily while studying for the course. For this, the Android version and the Web version were available (the iOS version was still under development at that time). Furthermore, a (Dutch) manual and a tutorial clip were provided. On the mid-semester trial exam of the course, students were asked whether they had continued playing the game. If so, we asked for how long; if not, we asked why not (using an open question). Students who had played the game were also given additional questions to measure their game experience. This trial exam was a written exam about propositional logic. We explicitly included additional exercises that only tested knowledge of the truth tables, to be able to investigate a possible difference in learning effect between groups. The final exam at the end of the semester was as usual and covered all the subjects of the course, i.e., also predicate logic and lambda calculus. We analyzed the results of the trial exam and the final exam using a t-test, considering the students who continued to play the game as the experimental group and the others as the control group. Slightly less than half of the students (30 out of 63) had played the game after the introduction. They played the game between 15 and 150 min, mostly in the self-training mode. Our quantitative analysis showed that these students obtained clearly higher marks on the final exam, i.e., there was a significant difference between the experimental (Mean = 13.70, SD = 3.4) and control (Mean = 10.81, SD = 4.75) groups (t(59) = 2.724, p < .01). However, the difference was only significant for the final exam, not for the trial exam. A possible explanation could be that a better mastery of the basic concepts of logic, such as the truth tables, only pays off significantly when the subject matter becomes more complicated (as in predicate logic). Only 4 of the 30 students who practiced with the game failed the exam (13.3%), while this was 45.5% (15 out of 33) for those who did not practice with the game. Again, the overall pass rate was better than in the years before we used the game; that year, the overall pass rate was 53.8%.
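For completeness, the sketch below shows how such an independent-samples t-test can be computed (Python with SciPy). The score arrays are simulated placeholders drawn to roughly match the reported group sizes, means, and standard deviations; they are not the actual study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder marks; NOT the real exam data.
experimental = rng.normal(loc=13.70, scale=3.4, size=30)   # students who kept playing
control = rng.normal(loc=10.81, scale=4.75, size=33)       # students who did not

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```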

In general, there was no correlation between the time spent playing the game and the results of the trial exam, except that those who played most (4 students played between 120 and 150 min) failed the trial exam. We will return to this later.

Based on our qualitative analysis, the reasons given for not having played the game were: not enough time (7x); felt no need to practice (6x); no suitable device (5x); preference for the classical exercises (3x); forgot to do so (3x); missed the introduction session about the game (3x); preference to use the time to study (1x); feeling that it was not helpful and boring (1x).

Fourth evaluation

The game was also used in a workshop for pupils in the 3rd grade of secondary education (16 to 18 years old) (November 2017). The main goal of this evaluation was to investigate whether the game would also be usable in secondary education, for teachers willing to take up logic as an elective subject. (Note that logic is not a mandatory subject in our secondary education system.) The participants, 21 pupils, first received an introduction to propositional logic (35 min), then practiced individually with the game for 10 min, and next played against each other. The workshop lasted 90 min. At the end, the pupils were asked to fill out an online questionnaire about their background (age, educational program, game experience, etc.), issues experienced with the game, their perceived learning experience, and their general opinion about the game.

Most pupils were following a STEM-oriented education (Science, Technology, Engineering, and Mathematics). Although they reported some usability issues, they were positive and could see the potential of the game in the context of learning propositional logic. On the open question about what they found difficult, the use of the NOT monster was mentioned most often (3x). Applying the NOT monster indeed requires good reasoning skills and a thorough understanding of the truth tables, which may have been too difficult after the short introduction and the limited practice time given. Suggestions for improvement mainly concerned the look and feel of the game and the interaction speed.

Discussion

The discussion is divided into three parts. First, we discuss how well the game is adapted to the target audience. Next, we discuss the learning potential of the game, and finally, we discuss issues related to the use of the game as a didactical tool.

Adapted to the target audience

The nature of the courses in the Bachelor of Computer Science requires good logical reasoning skills and mathematical abilities. This claim is supported by a good correlation between the results of the mathematics test that our students take at the start of the first year and success in the program, and it is also communicated in this way to prospective students. For students with a weak mathematical background, catch-up courses are provided. Although we do not want to exclude any student in advance, we therefore decided to tailor the game design towards the logical-mathematical intelligence of MI, as the majority of our students will lean towards this intelligence. Based on the recommendations given by the Game Mechanics and Multiple Intelligence Tool (Sajjadi, 2017), mostly game mechanics that are suitable for this dimension were applied, i.e., logical thinking, strategizing, modifier, quick feedback, and points. Although that work recommends that the mechanics timed, disincentives, choosing, and placing should not be used for this MI dimension, we did use them, for the following reasons. Timed was added because several participants in the experiments mentioned that a timer would make the game more challenging; however, to accommodate players who do not like a timer, it is only used in two of the three difficulty levels. Choosing and placing are core mechanics of the game, and it would therefore be difficult to avoid them without changing the principles of the gameplay. We also decided to use disincentives (losing a turn, or points, when applying a wrong operator). In the single-player version, this is mainly used as a feedback mechanism to stimulate the player to perform better next time, while in the two-player version it is intended to introduce competition. The pilot evaluation showed that logically-mathematically intelligent players indeed had a better game experience.

Learning outcome

There was no significant difference in learning outcome regarding the truth tables between the control group and the experimental group in the second evaluation. This allows us to conclude that playing the game had the same learning effect as the classical exercise session under the supervision of the teaching assistant. This justifies the use of the game, because it means that part of the traditional exercise sessions can be replaced by playing the game, freeing up time for other, more complex topics and allowing the students to practice the truth tables autonomously, more intensively, and at their own pace. Also in the third evaluation, we noticed a better pass rate for the students who practiced with the game.

In addition, we have seen a major increase in the pass rates for the course since the introduction of the game. Of course, after 2 years it is still too early to firmly attribute this to the game, but this positive effect cannot be ignored either.

Use as didactical tool

The low participation rate in the first experiment, as well as in the experiment done for the third evaluation, has taught us that merely introducing the game and leaving the students free to use it is not enough to incentivize everyone to play it. The reasons given by the students for not playing the game mostly reflected a lack of motivation. This conflicts with an argument often given for educational games, i.e., that they can stimulate the motivation for learning by exploiting people's intrinsic motivation to play. Apparently, this was not the case for a portion of our students. Possible explanations are:

1) Some students simply do not like to play games and may therefore be reluctant to play. We hope to remedy this by providing them more time in the class to try out the game and in this way convince them of the potential of the game.

2) Some of the students were not convinced that playing the game would have an added value. These students can possibly be convinced of the added value by seeing positive testimonials of other players.

3) Maybe the game was too boring or not appealing enough for some students. The game experience evaluation showed that most participants were positive about the game but felt that it could be more challenging, which means there is room for improvement. For this reason, we intend to organize competitions between the students (e.g., between teams) and reward them in some way. This could also remedy the fact that playing the more challenging version, i.e., the two-player version, requires the students to actively look for a partner. Forming teams during the exercise sessions could already help in this respect, while introducing competition could stimulate students to keep playing for some time.

4) Perhaps, for some students, the aversion to logic is far greater than their intrinsic motivation to play a game. Gently pushing the students to play the game, in combination with competitions and rewards, could be a solution for this.

Furthermore, we noticed that the students who played most (4 students played more than 120 min) actually failed the mid-semester trial exam. So, we see the potential danger that some students keep playing too long and do not spend enough time on studying the content of the course. However, not offering the game anymore will not change this. Students with an intrinsic motivation to play that is far greater than their intrinsic motivation to study will always spend a considerable amount of time on playing.

There is also a danger that students will rely only on what they have learned from the game and no longer practice with the conventional methods. This can be avoided by clearly communicating the complementary role of the game to the students.

Conclusions

This paper presented the development and evaluation of TrueBiters, an educational game to practice the truth tables of propositional logic. The game was tailored to logically-mathematically intelligent students and was developed in an iterative way: each version improved on the previous one based on the results obtained from the evaluations. Different approaches were used to evaluate the game, and two different aspects were considered: the game experience and the learning outcome. Based on the results of the evaluations, we can conclude that the dominant intelligences of the players play a role in the effectiveness of this learning game: the game is most effective for players with the logical-mathematical intelligence as one of their dominant intelligences. In this sense, the game is well suited for its target audience.

From a learning perspective, we can conclude from the evaluations that the game has the same learning effect as the classical exercise session under the supervision of the teaching assistant, which means that it can be used as a replacement for that session.

A striking observation during some of the experiments was the lack of motivation of some students to play the game, specifically the students who could benefit most from more practice. This has taught us that it is not sufficient to simply provide the educational game; the way the game is embedded into the course and communicated to the students is also of crucial importance. This confirms the behavior model of Fogg (Fogg, 2009), which states that three elements must converge at the same moment for a behavior to occur: motivation, ability, and a trigger. In our case, this means that we must pay more attention to creating the right motivation and to providing the right trigger at the right moment (we assume that, for most of the students, their ability is in principle sufficient).

Finally, we also learned that the look and feel of a game is very important for its acceptance. Youngsters are used to great-looking games and expect the same from educational games. For this reason, we eventually involved a graphic designer with game development experience to improve the look and feel of our game.

Availability of data and materials

The anonymized user data used for the analysis of the studies can be made available from the corresponding author on reasonable request.

Abbreviations

GEQ: Game Experience Questionnaire

MI: Multiple Intelligences

MIPQ: Multiple Intelligences Profiling Questionnaire

STEM: Science, Technology, Engineering, and Mathematics

References

  • Allen, L. (1965). Toward autotelic learning of mathematical logic by the WFF ‘N PROOF games. Monographs of the Society for Research in Child Development, 30(1), 29–41. https://doi.org/10.2307/1165706.

  • bOOleO. (n.d.). Retrieved December 16, 2018, from https://boardgamegeek.com/boardgame/40943/booleo

  • Burbules, N. C., & Reese, P. (1984). Teaching logic to children: An exploratory study of "Rocky's Boots". Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

  • Chen, J. (2004). Theory of multiple intelligences: Is it a scientific theory? The Teachers College Record, 106(1), 17–23. https://doi.org/10.1111/j.1467-9620.2004.00313.x.

  • CORDOVA. (2015). Retrieved August 16, 2019, from https://cordova.apache.org/

  • Fogg, B. (2009). A behavior model for persuasive design. In Proceedings of the 4th International Conference on Persuasive Technology – Persuasive '09 (p. 40). ACM. Retrieved from http://portal.acm.org/citation.cfm?doid=1541948.1541999.

  • Gardner, H. (2011). Frames of mind: The theory of multiple intelligences. Basic Books.

  • Hicks, D. J., & Milanese, J. (2015). The logic game: A two-player game of propositional logic. Teaching Philosophy, 38(1), 77–93. https://doi.org/10.5840/teachphil20151731.

  • IJsselsteijn, W., De Kort, Y., Poels, K., Jurgelionis, A., & Bellotti, F. (2007). Characterising and measuring user experiences in digital games. In R. Bernhaupt & M. Tscheligi (Eds.), International conference on advances in computer entertainment technology (Vol. 620, pp. 1–4). ACM.

  • LogicPalet. (2019). Retrieved December 17, 2018, from https://logicpaletwebapp.azurewebsites.net/

  • Øhrstrøm, P., Sandborg-Petersen, U., Thorvaldsen, S., & Ploug, T. (2013). Teaching logic through web-based and gamified quizzing of formal arguments. In D. Hernández-Leo, T. Ley, R. Klamma, & A. Harrer (Eds.), Scaling up learning for sustained impact. EC-TEL 2013. Lecture notes in computer science (Vol. 8095, pp. 410–423). Berlin: Springer.

  • Sajjadi, P. (2017). Individualizing learning games: Incorporating the theory of multiple intelligences in player-centered game design. Vrije Universiteit Brussel.

  • Sajjadi, P. (2017). Game mechanics and multiple intelligence tool. Retrieved from https://wise.vub.ac.be/dpl/

  • Sajjadi, P., El Sayed, E., & De Troyer, O. (2016). On the impact of the dominant intelligences of players on learning outcome and game experience in educational games: The TrueBiters case. In R. Bottino, J. Jeuring, & R. C. Veltkamp (Eds.), Games and learning alliance: 5th international conference, GALA 2016, Utrecht, The Netherlands, December 5–7, 2016, Proceedings (pp. 221–231). Springer International Publishing. https://doi.org/10.1007/978-3-319-50182-6_20.

  • Sajjadi, P., Vlieghe, J., & De Troyer, O. (2016b). Evidence-based mapping between the theory of multiple intelligences and game mechanics for the purpose of player-centered serious game design. In Games and virtual worlds for serious applications (VS-Games), 2016 8th international conference on (pp. 1–8). IEEE. https://doi.org/10.1109/VS-GAMES.2016.7590348.

  • Sajjadi, P., Vlieghe, J., & De Troyer, O. (2016c). Relation between multiple intelligences and game preferences: An evidence-based approach. In Proceedings of the European conference on games-based learning (pp. 565–574). Academic Conference and Publishing International Limited.

  • Sajjadi, P., Vlieghe, J., & De Troyer, O. (2017). Exploring the relation between the theory of multiple intelligences and games for the purpose of player-centred game design. Electronic Journal of E-Learning, 15(4).

  • Schäfer, A., Holz, J., Leonhardt, T., Schroeder, U., Brauner, P., & Ziefle, M. (2013). From boring to scoring – A collaborative serious game for learning and practicing mathematical logic for computer science education. Computer Science Education, 23(2), 87–111. https://doi.org/10.1080/08993408.2013.778040.

  • Shiver, A. (2013). Propositional logic card games. Teaching Philosophy, 36(1), 51–58.

  • Tirri, K., & Nokelainen, P. (2011). Multiple intelligences profiling questionnaire. In Measuring multiple intelligences and moral sensitivities in education. Moral development and citizenship education (pp. 1–13). SensePublishers.

  • TrueBiters. (2018). Retrieved August 19, 2019, from https://wise.vub.ac.be/project/truebiters

  • Unity3D. (2019). Retrieved August 16, 2019, from https://unity3d.com/

  • Waraich, A. (2004). Using narrative as a motivating device to teach binary arithmetic and logic gates. In ACM SIGCSE bulletin (Vol. 36, pp. 97–101). https://doi.org/10.1145/1026487.1008024.

  • Waterhouse, L. (2006a). Inadequate evidence for multiple intelligences, Mozart effect, and emotional intelligence theories. Educational Psychologist, 41(4), 247–255.

  • Waterhouse, L. (2006b). Multiple intelligences, the Mozart effect, and emotional intelligence: A critical review. Educational Psychologist, 41(4), 207–225.


Acknowledgements

We thank all students and pupils involved in the evaluations, as well as Eman El Sayed for the implementation of the first two versions of the game and performing the pilot experiment.

Funding

Part of this work was financially supported by the Vrije Universiteit Brussel as an education innovation project.

Author information

Contributions

PS and ODT developed the first three versions of the game and performed the first two evaluations. RL and ODT developed the fourth and fifth version of the game and performed the third and fourth evaluation; PS did the analysis of the third evaluation. ODT and PS contributed the most to the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Olga De Troyer.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

De Troyer, O., Lindberg, R. & Sajjadi, P. TrueBiters, an educational game to practice the truth tables of propositional logic: Development, evaluation, and lessons learned. Smart Learn. Environ. 6, 27 (2019). https://doi.org/10.1186/s40561-019-0105-2

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s40561-019-0105-2

Keywords