
Technology-enhanced assessment visualization for smart learning environments

Abstract

In Smart Learning Environments, students need to be aware of their academic performance so they can self-regulate their learning process. Likewise, the teaching process can also be improved if instructors are able to supervise the progress of students, both individually and globally, and anticipate proper pedagogical strategies. Thus, effective Student Models, capable of identifying and predicting the level of knowledge of students, are a key requirement in modern educational systems. In this article, we revisit OSM-V, an Open Student Model with Information Visualization capabilities that allow students and instructors to assess performance-related information in educational systems. We detail its architecture and how it was integrated into Classroom eXperience, a Smart Learning Environment with multimedia capture capabilities. We also present extended results from experiments that evaluate both the perception of utility and behavioral changes in students who used OSM-V, showing that it can positively impact students’ learning and positively influence their study habits.

Introduction

Intelligent interfaces can make learning clearer and easier, fostering interaction according to the cognitive abilities of those directly involved in the process (Lindstaedt et al., 2009). A myriad of computational resources can be used to support pedagogical strategies that make teaching and study activities simpler, more dynamic and attractive to students. A major challenge in such scenarios lies in properly identifying the capabilities and limitations of students (Greiff et al., 2016; Nguyen et al., 2017). According to Brusilovsky (2001), the teaching process would be more effective if it were possible to identify the real state of knowledge of each student individually, allowing instructors to address each student’s limitations.

Over the years, Student Models (SMs) have been used to map the cognitive characteristics of students (Self 1990). This approach has proved effective in many situations (Mitrovic and Thomson 2009; Li et al. 2011), allowing automated systems to guide new interventions in the teaching process of each student. However, instructors and students are often unable to view the information produced by this model, which prevents them from contributing to the process of personalizing pedagogical strategies.

In order to assist instructors and students in the process of knowledge identification, Open Student Models (OSMs) have been proposed as tools to “outsource” the information manipulated by SMs (Hartley and Mitrovic 2002; Mitrovic and Martin 2002; Mabbott and Bull 2006). Open models seek to expand the capacity of traditional SMs, so that information previously handled only internally by the system can be made available to all those involved in the teaching process. This allows for more interaction between instructors and students in deciding which new actions should be taken to improve teaching. OSMs have gained popularity due to their strong psycho-pedagogical foundation and also because they present positive results from the educational point of view, such as metacognitive support (Bull and Wasson 2016), persuasion (Ginon et al., 2016) and self-regulated learning (Long and Aleven, 2016).

The integration of Information Visualization tools into the context of educational content adaptation provides new characteristics to SMs, allowing the emergence of open and intelligent models in which data can be inferred and stored to aid the individualized adaptation of educational content. In this scenario, this article revisits OSM-V, a technology-enhanced assessment visualization OSM for Smart Learning Environments. OSM-V acts as an intelligent visualization tool, combining probabilistic information (through Bayesian Networks) with semantic information (through ontologies). We carry out extended experiments with regular classes of face-to-face courses in order to evaluate the impact of the tool on both students’ satisfaction and study behavior.

The remainder of this article is structured as follows: the “Research background” section describes the smart learning environment used as a case study in this project, Classroom eXperience, and presents some considerations regarding the use of information visualization concepts in education; the “Related work” section surveys the state of the art related to this study; the “OSM-V” section describes the OSM-V architecture, detailing its modules, repositories and communication channels, as well as its integration procedure; the “Experiments and results” section presents two experiments regarding the impact of our proposal; and, finally, the “Conclusions” section presents our final remarks and future work.

Research background

Classroom eXperience

Classroom eXperience (CX) is a smart learning environment with content recommendation and personalization capabilities (Araújo et al., 2013; Dorça et al., 2016; Ferreira et al., 2017a), semantic (Ferreira et al., 2016), social and collaborative features (Araújo et al., 2017). It comprises a multimedia capture platform for automatically recording lectures in a classroom equipped with ubiquitous computational devices, such as electronic whiteboards, microphones, video cameras, and multimedia projectors – an infrastructure common today in many schools and universities.

CX was developed for capturing, storing, synchronizing, and making different media available to students by means of hypermedia documents generated in different presentation formats. CX has been in use since 2012 at a few colleges and universities in Brazil. Currently, the environment has approximately 850 registered users, and content from about 75 courses has been captured using the platform (Fig. 1).

Fig. 1 CX access interface: list of courses and captured lectures in the CX Web access interface

Smart Learning Environments, such as CX, have the potential to generate a huge amount of data that is often not fully understandable to human users, thus requiring additional tools to be properly visualized and useful.

Information visualization

The synergy between the areas of Computer Graphics, Human-Computer Interaction and Data Mining leverages research in Information Visualization, which aims to present information graphically so that users can apply their visual perception to better analyze and understand it. The field is characterized by the need to create mechanisms that transform data into information, with representations that express important properties of the data and how different items relate to each other.

Educational environments are one among many of the possible application scenarios for information visualization techniques. In such environments, huge amounts of data are generated. In most cases, information is inferred and stored with the purpose of supporting the individualized adaptation of content. This capacity is often enabled by means of some SM, which is responsible for storing relevant information for the individualized recommendation and content personalization process concerning the student. Information such as knowledge level, interests, preferences and objectives is stored over the whole student learning process (Clemente et al., 2011). A large part of the information stored in the SM is automatically inferred during the interactions between the student and the system (Bull and Wasson, 2016).

Related work

Traditionally, data processed by the SM is “closed” for users, providing information only for the system itself, for personalization purposes. However, several research projects have proposed tools to “outsource” this information, i.e. to leave the data “open” to the users involved in the educational process (Long and Aleven 2013; Bull and Kay 2013). This ability to “open” the information inferred by the SM is a key characteristic of OSMs, which explore the area of Information Visualization to produce tools that can provide mechanisms for students and instructors to visualize, explore and even modify the way knowledge is created and processed by the SM.

Several approaches have been proposed with the purpose of assisting in the personalization, integration and visualization of educational resources. Bull and Kay (2013) present an OSM with the capacity to treat and analyze the process of metacognition. Other studies, in turn, seek to verify the impact of OSMs on student engagement (Hsiao et al., 2013) and on questions of self-regulation and self-assessment (Mitrovic and Martin 2007; Guerra et al. 2016). There are also those that examine learning improvement (Bull and Wasson 2016). Some studies differ in the way the data is presented to the target users, for example: graphs (Jacovina et al. 2015), skillometers (Mitrovic and Martin 2007) or knowledge maps (Lindstaedt et al. 2009). By visualizing (and sometimes interacting with) a representation of their own learning or performance, students have a powerful feedback tool for managing their expertise (Guerra et al. 2016). Ilves et al. (2018) studied how textual and radar visualizations could be used to support students’ self-regulation in online learning. ProTuS (Vesin et al. 2018) comprises an interactive learning analytics component which allows students to compare grades, activities and trajectories of other students enrolled in the same course.

Most of these studies, however, do not exploit intelligent techniques for processing and structuring the collected information, limiting themselves to presenting the content. A key benefit of our approach lies precisely in the fact that it explores intelligent strategies to deal with uncertainties, through joint probability structures using Bayesian Networks, and through semantic and ontological resources for the proper representation and processing of inferences (Ferreira et al. 2016; 2017b). Based on this, our approach presents an important advance over the state of the art, allowing the definition of an architecture based on intelligent tools that explores the capabilities of OSMs. We also explore different types of visualization tools.

Regarding analysis of students’ online behavior, one can find models that rely on data mining techniques to classify, predict, or group information (Harris and Kumar 2018; De Los Reyes et al. 2019). StudentViz, for instance, is a platform for visualizing students’ collaboration patterns (Becheru et al. 2018). An important differential of our approach refers to its ability to use a data mining mechanism (clustering algorithms) applied to students enrolled in face-to-face courses in order to establish a behavior-based guideline. The proposal analyzes whether students with different online behavior patterns (i.e., based on the interactions made by students in the virtual learning environment) also present significant difference in performance. Research available in the literature that addresses such an analysis is rare.

OSM-V: an open student model for assessment visualization

This article extends the approach proposed in Ferreira et al. (2016; 2017a, b, 2019). OSM-V allows students and instructors to assess performance-related information in educational systems. There are several possibilities for interaction. Instructors are able to supervise the evolution of students, both individually and as a group, during a course. The tool also allows instructors to view, in a grouped way, the students who have the best and worst performances. Students, in turn, can observe the content in which they have more difficulties, compare their performance to that of the class, and keep track of their academic evolution during their studies. It is important to note that all forms of visualization can be customized and adapted to the user’s profile.

Our model aims at (Ferreira et al. 2017a, 2019):

  • Allowing instructors to monitor students’ abilities, online behavior and limitations over time;

  • Allowing instructors to detect which students are most likely to succeed or fail;

  • Allowing students to know their main abilities and limitations in relation to a given subject;

  • Allowing students to adjust their studies to prioritize subjects in which they have more difficulty;

  • Allowing students to track their performance over time.

The proposed model is not limited to producing information that guides instructors and students in order to only maximize success and minimize failure, but it is rather an important tool to help to identify the real abilities and limitations of the subjects involved in the learning process. Its underlying OSM provides sufficient resources capable of abstracting the generation of different forms of visualization. For the experiments proposed, three forms of charts were implemented: line, bar and radar. Such views can be made available to different users of the environment. The use of the open model enables better performance for students, since they are always focused on their limitations, thus improving the effectiveness of the knowledge acquisition process.

OSM-V enables students and instructors to interact directly with the information processed and inferred by the model. For students, specific features allow the visualization of their level of performance at different points in the system. The model also provides the means for students to compare their performance to that of other students enrolled in the same course. For instructors, visualization tools assist with the monitoring of students’ learning process. Instructors can supervise individual students and/or the whole class, checking, for example, subjects where the class is presenting difficulties or the students with the best/worst performance on each subject.

Overall architecture

The OSM-V architecture consists of four modules: Probabilistic Module, Semantic Module, Activity Management Module and Visualization Module. These modules communicate through message exchange using a structured protocol based on JSON (https://www.json.org/). Figure 2 presents the OSM-V architecture.

Fig. 2 OSM-V architecture: overview of the four modules (Probabilistic Module, Semantic Module, Activity Management Module and Visualization Module) and their communication channels (source: Ferreira et al. 2019)

The information inference process begins when the Probabilistic Module receives information from the instructor interface (A) and obtains the information about available evaluative instruments in the question repository. After this step, the information handled by this module is forwarded to the Semantic Module (B). The Semantic Module is responsible for handling inferences based on SWRL (Semantic Web Rule Language) rules available in the repository. For these inferences, the Activity Management Module informs the activities carried out by the students (C) from the information about their online behavior when using the system (D). After the entire inference process, this information is sent to the last module (E), responsible for presenting the information, both for students (F) and for instructors (G).
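To make this message-based communication concrete, the snippet below sketches what such a JSON payload could look like when the Probabilistic Module forwards inferred information to the Semantic Module (B). The field names and values are illustrative assumptions, not the actual OSM-V schema.

```python
import json

# Hypothetical payload forwarded from the Probabilistic Module to the
# Semantic Module after a student answers a quiz. Field names are
# illustrative only; the real OSM-V protocol is not reproduced here.
message = {
    "student_id": "s-0042",
    "course_id": "hci-2017-1",
    "quiz_id": "q-17",
    "bn_topics": ["usability", "heuristic_evaluation"],
    "knowledge_probability": 0.6,   # K value inferred from the attempt number
    "timestamp": "2017-05-10T14:32:00Z",
}

print(json.dumps(message, indent=2))  # serialized form exchanged between modules
```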

Integration into Classroom eXperience

OSM-V was integrated into the previously presented CX platform in order to improve student engagement and provide a more interactive environment, so that both students and instructors could follow more closely the evolution of the learning process.

Initially, a mechanism was created to allow instructors to associate a Bayesian Network (BN) with a subject. The model is not limited to a particular domain of knowledge: it is possible to add as many disciplines as needed, each with a specific network. For this case study, we implemented a mechanism that interprets BNs in the GeNIe Network format (.xdsl) (see Footnote 1). To work with BNs, we used the SMILE Engine (see Footnote 2), the main library available in the literature for handling BNs. It provides a platform for inference over graphical models, influence diagrams and structural equation models.

An evaluation tool based on multiple-choice quizzes was implemented; quizzes are registered by instructors at chosen points of the lecture. Figure 3 shows the screenshot for registering a quiz in the CX environment. During this process, the instructor defines the quiz text (a), the associated BN topics (b), and the answer alternatives (c), including selecting the correct one.

Fig. 3 Screenshot for registering a quiz in the CX environment: the instructor defines the quiz text (a), the associated BN topics (b), and the answer alternatives (c), including selecting the correct one

When reviewing the lecture content for study, students can try to answer the quizzes registered by the instructor. Equation 1 quantifies the score of the student in each quiz based on how many attempts he/she needed to reach the right answer, which defines his/her level of knowledge on the subjects associated with the BN.

$$ K = \frac{M}{Q - 1}\,(Q - N) $$
(1)

Where:

  • K is the probability of knowledge;

  • M is the maximum probability of knowledge;

  • Q is the number of alternatives to the quiz;

  • N is the attempt in which the student succeeded.

It can be noted, in Eq. 1, that the probability of knowledge (K) is directly related to the number of attempts the student needed to answer the question correctly (N). For instance, the evaluative instrument presented in Fig. 3 has four alternatives (Q), and the M value for K, defined by the model, is 0.9. So, if the student selects the right alternative on the first try, the K value will be 0.9. On the second try, the K value will be 0.6. On the third, 0.3. And, on the fourth try, K will be 0.1. In this last case, the student does not receive 0.0 because, by model design, the value of K always ranges from 0.1 to 0.9.
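This rule can be summarized in a short function; this is a minimal sketch in which the function and parameter names are ours, and the clamping to the [0.1, 0.9] range follows the worked example rather than OSM-V’s actual code.

```python
def knowledge_probability(q, n, m=0.9, k_min=0.1):
    """Knowledge probability K for a quiz with q alternatives answered
    correctly on attempt n, following Eq. 1: K = M / (Q - 1) * (Q - N),
    clamped to the [k_min, m] range described in the text."""
    k = m / (q - 1) * (q - n)
    return max(k_min, min(m, k))

# Worked example from the text: four alternatives (Q = 4), M = 0.9
for attempt in range(1, 5):
    print(attempt, knowledge_probability(q=4, n=attempt))  # 0.9, 0.6, 0.3, 0.1
```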

It is noteworthy that the proposed approach does not restrict the type of evaluation instrument to be used. In this case, quizzes were used, but other instruments could be implemented. It is also important to note that the calculation of the probability of knowledge can be redefined in the implementation.

With the use of BNs and evaluative instruments, it is already possible to make inferences about the student’s level of knowledge. From the moment students enroll in the course, an abstract network is created for each student, which represents his/her overall knowledge. This network is updated with each interaction with the evaluation instruments. To know the probability of knowledge of a particular student, one can simply check the value represented in the corresponding node of his/her abstract network. Thus, these visualization tools incorporate the OSM characteristics into the system. Figure 4 presents the visualization interface available for (a) students and (b) instructors.
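As an illustration of how a per-student network can be updated and queried, the sketch below uses the open-source pgmpy library as a stand-in for the SMILE Engine; the network structure, probabilities and node names are invented for the example.

```python
# Illustrative only: OSM-V uses the SMILE Engine with GeNIe (.xdsl) networks;
# here pgmpy plays the same role on a toy two-quiz network.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Topic", "Quiz1"), ("Topic", "Quiz2")])

# State 0 = "known"/"correct", state 1 = "not known"/"wrong".
model.add_cpds(
    TabularCPD("Topic", 2, [[0.5], [0.5]]),                 # prior knowledge of the topic
    TabularCPD("Quiz1", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Topic"], evidence_card=[2]),      # P(correct | Topic)
    TabularCPD("Quiz2", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Topic"], evidence_card=[2]),
)

# After the student answers Quiz1 correctly, read the updated knowledge
# probability from the Topic node, as OSM-V does with the abstract network.
print(VariableElimination(model).query(["Topic"], evidence={"Quiz1": 0}))
```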

Fig. 4 Visualization interface: OSM-V integrated into Classroom eXperience, with views available for (a) students and (b) instructors (source: Ferreira et al. 2019)

Since our approach relies on low-complexity algorithms, these visualizations are updated every time new interactions are gathered. So, after answering a quiz, for example, students can go to the visualization component and find the chart already updated.

In addition to individual visualization, the instructor can compare the performance of a particular student with that of the whole group, identify the students with the best and worst performance, the subjects causing greater difficulty in the class, and other information concerning the educational development of students.

Experiments and results

In order to analyze the impact of OSM-V on students’ satisfaction and behavior when using the system, two experiments were conducted to verify whether the ability to follow their own development influenced their behavior (motivation, competitiveness, interest in studies, among other factors) positively or negatively.

Experiment 1

Over two semesters of 2017, we applied questionnaires to students from four classes of the Information Systems major at Federal University of Uberlândia, Brazil — two classes of Human-Computer Interaction and two classes of Computer Architecture and Organization. From those, we collected a total of 139 responses, suitable for the statistical analysis.

At the beginning of the course, the instructor presented the tool that would be used to support educational activities, demonstrating its functionalities and explaining how it would be used. The students, in turn, made use of the platform during the semester and, at the end, answered the questionnaire, which presented, in most questions, a seven-point Likert scale (strongly disagree to strongly agree). The questions were classified into three groups: the first related to the perception of utility of the OSM-related functionalities; the second assessing whether or not there was a change in the way the students studied; and the third verifying satisfaction with the visualization tool and which charts provided a better visualization for different situations.

It was possible to identify that, in all analyzed questions regarding perception of utility, responses were concentrated on the agreement values, as can be seen in Fig. 5. For Question 1.1, which considers the level of satisfaction with the performance visualization functionality, there is a higher concentration of positive responses. The same interpretation can be drawn from Questions 1.2 and 1.3, which verify the usefulness of the quiz and gamification functions, respectively. When asked if they would like the platform to be used in other courses (Question 1.4), students’ responses were very positive (more than 80% agreement).

Fig. 5 Perception of utility: answers to the questions about perception of utility when using the tool (source: Ferreira et al. 2019)

In the graph of Fig. 6, we find a balance in motivation for study (Question 2.1), a higher concentration of students who agree that performance charts can influence their behavior change from the point of view of performance improvement (Question 2.2), and a greater tendency of disagreement for competitiveness (Question 2.3).

Fig. 6 Change in online study behavior: degree of agreement on the change in online study behavior (source: Ferreira et al. 2019)

In general, students perceive the performance visualization functionality more as an aid to identify their strengths and weaknesses in each course, helping them to significantly change their study behavior, and not necessarily as a competition tool that explores motivational issues.

Figure 7 displays the results of Question 3, which evaluates user satisfaction with the visualization tool: 56% of the respondents replied that they liked the functionality; only 4% responded that they did not like it; and 40% responded that they did not notice the existence of the functionality integrated into the educational platform – which can be interpreted as an indicator of transparency, since the performance visualization features were presented as a small notification icon.

Fig. 7 User satisfaction: answers to the question about user satisfaction when using the visualization tool (source: Ferreira et al. 2019)

Another objective of the experiment was to evaluate which forms of visualization were preferred by the students (Questions 4 and 5). In this case, the preference was for the bar graphs, both for the individual and comparison views, as can be observed in Fig. 8.

Fig. 8 Preferred forms of visualization: answers to the question about preferred forms of visualization, considering line, bar and radar charts (source: Ferreira et al. 2019)

The reliability of the questionnaire responses was verified with Cronbach’s Alpha. This test analyzes the internal consistency of the answers based on the correlation between different items on the same scale. For interpreting Cronbach’s Alpha values, we adopted the adjectives proposed by Landis and Koch (1977), which define the following ranges: α>0.80 = near-perfect internal consistency; 0.61<α<0.80 = substantial internal consistency; 0.41<α<0.60 = moderate internal consistency; 0.21<α<0.40 = reasonable internal consistency; α<0.21 = small internal consistency.
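For reference, the statistic can be computed directly from the (respondents × items) score matrix; the sketch below uses made-up Likert scores rather than the study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's Alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical answers of five students to three items of one category (1-7 scale)
scores = np.array([[6, 7, 6], [5, 5, 6], [7, 7, 7], [4, 5, 4], [6, 6, 7]])
print(round(cronbach_alpha(scores), 3))
```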

The same three groups of questions previously described were used: CAT1 for the perception of utility of the learning functionalities, CAT2 for the change in study behavior, and CAT3 for visualization preferences. Table 1 records the Cronbach’s Alpha values obtained. It can be seen that, for categories CAT1 and CAT2, near-perfect internal consistencies were obtained, which indicates that the students’ responses were very consistent and followed a reliable pattern. For CAT3, moderate internal consistency was obtained, which indicates that there were some inconsistencies among the responses.

Table 1 Internal consistency measured by Cronbach’s Alpha for questionnaire replies (based on Ferreira et al. (2019))

In an individualized analysis of the classes (Table 2), it was possible to verify that the inconsistency is related only to the T3 class (T1=0.545, T2=0.532, T3=-0.207 and T4=0.657). We believe that this was due to some dispersion or lack of attention of the class during the application of the questionnaires and these individualized results do not invalidate the whole.

Table 2 Internal consistency measured by Cronbach’s Alpha for individual classes, T1 to T4 (based on Ferreira et al. (2019))

Experiment 2

The second experiment was attended by 119 students, divided into six classes of Human-Computer Interaction and Computer Architecture and Organization over three semesters (2016/2, 2017/1 and 2017/2).

All interactions of the students with the system were logged. Thus, it was possible to know how each student behaved online and what activities were performed during his/her studies. This logging occurred throughout all semesters, with the consent of students, and interactions were captured and stored as access logs. Interactions with the system were then quantified according to their duration: short interactions (pA), medium interactions (pB) and long interactions (pC). Each login session was analyzed and the proportion of short, medium and long interactions of each student was computed.
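A possible way of deriving these proportions from the session logs is sketched below; the duration cut-offs (5 and 20 minutes) are assumptions for illustration, since the exact thresholds are not reported here.

```python
from collections import Counter

def interaction_profile(session_durations_min):
    """Proportion of short (pA), medium (pB) and long (pC) sessions for one student.
    The 5- and 20-minute thresholds are illustrative assumptions."""
    def label(duration):
        if duration < 5:
            return "pA"   # short interaction
        if duration < 20:
            return "pB"   # medium interaction
        return "pC"       # long interaction

    counts = Counter(label(d) for d in session_durations_min)
    total = sum(counts.values())
    return {key: counts.get(key, 0) / total for key in ("pA", "pB", "pC")}

print(interaction_profile([2, 3, 12, 45, 60, 38]))  # e.g. {'pA': 0.33..., 'pB': 0.16..., 'pC': 0.5}
```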

After quantifying interactions per user, data clustering was performed to group students with similar characteristics. In general, the clustering technique classifies entities so that each object is similar to the others in the same cluster with respect to a set of characteristics (in this case, the different levels of interaction). The resulting clusters should have high internal homogeneity (within clusters) and high external heterogeneity (between clusters) (Hair et al. 2009). Clustering was performed using the K-Means algorithm, one of the most objective and popular clustering algorithms in the literature (Jain 2010). Its principle is to find K clusters in the given data: the algorithm works iteratively to assign each instance to one of the K clusters based on the features provided, grouping instances by similarity of characteristics. The number of clusters is one of the main decisions related to K-Means, and the literature offers several suggestions on how to make the grouping process successful (Fraley and Raftery 1998). In this sense, the quantity (K=2 and K=3) was chosen mainly due to the nature of the samples and based on a study that analyzed the main works on cluster analysis for performance measurement, always observing the number of clusters chosen in those works.
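The clustering step itself is straightforward with scikit-learn; in the sketch below the feature matrix is hypothetical, with one row of (pA, pB, pC) proportions per student.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-student proportions of short (pA), medium (pB) and long (pC) interactions
X = np.array([
    [0.33, 0.20, 0.47],
    [0.15, 0.05, 0.80],
    [0.25, 0.10, 0.65],
    [0.40, 0.25, 0.35],
    [0.12, 0.08, 0.80],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(kmeans.labels_)           # cluster assigned to each student
print(kmeans.cluster_centers_)  # average pA/pB/pC per cluster (cf. Table 3)
```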

Table 3 presents data obtained from the clustering process. Note the distribution of instances in each cluster: for example, of all instances classified in cluster0 (CL0), 24.03% refer to pA, 11.23% refer to pB and 64.46% refer to pC. It can be seen that cluster2 (CL2) has higher values for pA (33.17) and pB (19.73) and, in turn, a lower value for pC (46.67), while cluster1 (CL1) has the highest value for pC (79.37) and the lowest for pA (15.18) and pB (5.06). Finally, cluster0 (CL0) has intermediate values for pA (24.03), pB (11.23) and pC (64.46).

Table 3 Average distribution of interactions in each cluster

After the clustering process, statistical tests were performed to verify whether there are significant differences in student performance in each cluster (CL0 with 49 students, CL1 with 52 students and CL2 with 18 students). Figure 9 presents the scatter plots of this strategy, in which it is possible to see the distribution of students in each group. In Fig. 9a, the X axis represents values of pC and the Y axis represents values of pB. In Fig. 9b, the X axis represents values of pC and the Y axis represents values of pA.

Fig. 9 Cluster distribution: scatter plots for cluster distribution

The Levene test was used to verify variance homogeneity, and the Shapiro-Wilk test was applied to check whether the scores followed a normal distribution. With these tests, it was concluded that the samples are homogeneous (p-value = 0.118); however, the Shapiro-Wilk test showed that the analyzed samples did not present residual normality (W(p) = 0.908 (0.00)). This guided the choice of the next test to verify the difference between the means: the non-parametric Kruskal-Wallis test, which showed that student behavior interferes with their performance (H(2) = 7.063; p < 0.05). In this case, it was possible to identify a statistically significant difference between the grades of the students grouped in the different clusters. In Table 4, one can see that students of CL1 have the highest rank averages and students of CL2 have the lowest.
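This sequence of tests can be reproduced with SciPy; the grade lists below are invented, since the per-student grades are not published.

```python
from scipy import stats

# Hypothetical grades per cluster (illustrative values only)
cl0 = [65, 70, 58, 72, 61, 68]
cl1 = [80, 85, 78, 90, 83, 88]
cl2 = [55, 60, 52, 58, 57, 54]

print(stats.levene(cl0, cl1, cl2))     # homogeneity of variances
print(stats.shapiro(cl0 + cl1 + cl2))  # normality check on the pooled scores
print(stats.kruskal(cl0, cl1, cl2))    # non-parametric comparison of the three clusters
```

Note that the study applies the Shapiro-Wilk test to the residuals, whereas this sketch simply pools the scores for brevity.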

Table 4 Performance averages and Kruskal-Wallis test results in each cluster (N is the number of students in the given cluster)

Students classified in CL1 had the highest grades. This is precisely the group with longer access sessions (pC) and fewer short access sessions (pA). Students classified in CL2 had the lowest grades. These students have an average profile, exhibiting neither too long nor too short behavior (Table 3). Students classified in CL0 show more dynamic behavior, making use of the environment less frequently and with many fast accesses.

It is also important to identify where the statistically significant difference really lies. For this, the Dunn-Bonferroni approach was used for pairwise comparisons. In Table 5, one can see the differences among the three clusters. There is a significant difference between CL2 and CL1 (p = 0.024), that is, between the group with the highest average (students who access the environment primarily for long periods) and the group with the lowest average (students who usually do not access the system for long periods). The box-plot graph in Fig. 10 presents a clearer view of these differences between means. It can be seen that the difference between CL0 and CL1 and between CL0 and CL2 was not as significant as the difference between CL1 and CL2.
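Pairwise Dunn tests with Bonferroni adjustment are available, for instance, in the scikit-posthocs package; the long-format data frame below is illustrative only.

```python
import pandas as pd
import scikit_posthocs as sp

# Hypothetical grades in long format: one row per student with its cluster label
df = pd.DataFrame({
    "grade":   [65, 70, 58, 72, 80, 85, 78, 90, 55, 60, 52, 58],
    "cluster": ["CL0"] * 4 + ["CL1"] * 4 + ["CL2"] * 4,
})

# Matrix of adjusted p-values for every pair of clusters (cf. Table 5)
print(sp.posthoc_dunn(df, val_col="grade", group_col="cluster", p_adjust="bonferroni"))
```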

Fig. 10 Comparison among the three clusters: the difference between CL0 and CL1 and between CL0 and CL2 was not as significant as the difference between CL1 and CL2

Table 5 Pairwise comparison test results among clusters (an Adj. Sig. value <0.05 indicates significant difference between the analyzed groups)

By analyzing the quartiles and medians of each cluster, it is possible to notice a greater influence of the variable that represents long accesses (pC) on the final student performance. This is the variable that most influenced the performance of each cluster, i.e., the higher the occurrence of pC, the higher the average performance in the cluster; the lower the occurrence of pC, the lower the average performance. There is also an influence of the short access variable (pA), not as significant as pC, but still noticeable in the clusters. In the case of pA, the smaller the number of short accesses, the higher the average student performance. Thus, the variable pC influences student performance positively, while the variable pA influences it negatively, although to a lesser extent.

It was possible to show statistically that there is a relationship between students’ online behavior during the use of the platform and their performance. The Kruskal-Wallis test showed higher performance for the group of students who access the platform for longer periods and lower performance for those who do not. We can divide the students who made use of the platform into three profiles: Profile A comprises students with more long accesses and fewer short and medium accesses; Profile B comprises students with fewer long accesses and more short and medium accesses; and Profile C comprises students with the most average behavior, with intermediate proportions of short, medium and long accesses. It can be statistically concluded that students of Profile A have the highest grades, while students of Profile B have the lowest grades. The students of Profile C fall into a region of greater uncertainty, in which it is not possible to make statements with strong statistical support.

Conclusions

The use of technology as a mechanism to aid the teaching-learning process is currently a trend. Computational techniques can assist in the personalization, integration and visualization of new pedagogical strategies to support educational activities, making the teaching-learning process simpler, more dynamic and attractive to students. This article revisits OSM-V, a model for student assessment visualization in educational systems. OSM-V explores Open Student Modeling to create an efficient and intelligent approach, based on probabilistic and semantic fundamentals, that allows students to track their entire knowledge acquisition process, while instructors are able to supervise their progress and anticipate proper pedagogical strategies.

Two experiments were carried out to analyze the impact of OSM-V on students’ satisfaction and online behavior when using our model. The first experiment was carried out with responses from 139 undergraduate students. Results showed that the tool positively influenced their perception of utility, indicating that, in several situations, students believed that the proposal can positively impact learning. It was also noticed that, for a large number of students, the tool also influenced their study behavior in the virtual environment. In the second experiment, a clustering algorithm was used to help define different groups from the perspective of how students use the environment, that is, their interaction profile. Clustering algorithms are interesting for this type of research because they aim to group instances (students) with similar characteristics into the same group, thus allowing the identification of strategic profiles for a more consistent and reliable analysis (Hair et al. 2009). From a statistical perspective, the Kruskal-Wallis test was used to verify whether there are significant differences among the analyzed groups.

The results of this study can help both instructors and students: the former, when deciding on the use of a ubiquitous platform to assist the classroom teaching process, by gaining a sense of which online behavior profiles are most likely to lead to positive performance results when using the platform; the latter, by figuring out how best to behave in order to achieve better performance during their studies.

Limitations

Some limitations were noted throughout the development of our approach, either related to the scope of the proposal or to the implementation of the approach.

First, our model was designed to work best when applied to facts, concepts and simple procedures rather than to complex, dynamic and ill-structured problems.

A second limitation is related to the groups participating in the experiments, since all of them came from technology courses (Computer Science or Information Systems) taught by Computer Science instructors. Even though this is a limitation, the fact that students have knowledge of computational subjects does not invalidate the proposal; on the contrary, in the case of Experiment 1, they even contributed positively by identifying some interface design problems.

It is important to highlight that these limitations are intrinsic to the development process of the proposed approach. Such limitations do not compromise the quality of the work, as most of them are related to implementation issues and the availability of experiments, not to structural issues of the proposed model.

Future work

Potential future work includes: a study on the feasibility and effectiveness of new forms of data visualization, such as skillmeters, knowledge maps, area charts and scatter plots; the creation of mechanisms for automatic recommendation of content, for students, and of pedagogical approaches, for instructors; the extension of the use of the proposed model to other virtual learning environments and, consequently, to other classes and courses; and, finally, the exploration of OSM-V for self-regulated learning.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. © 2019 IEEE. Reprinted, with permission, from Ferreira, H.N.M., de Oliveira, G.P., Araújo, R.D., Dorça, F.A., Cattelan, R.G. (2019). An open model for student assessment visualization. In Proceedings of the 19th IEEE International Conference on Advanced Learning Technologies (pp. 375-379).

Notes

  1. https://www.bayesfusion.com/genie-modeler

  2. https://www.bayesfusion.com/smile-engine

References

  • Araújo, R.D., Brant-Ribeiro, T., Cattelan, R.G., de Amo, S.A., Ferreira, H.N.M. (2013). Personalization of interactive digital media in ubiquitous educational environments. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. https://doi.org/10.1109/SMC.2013.675, (pp. 3955–3960).

  • Araújo, R.D., Ribeiro, T.B., Mendonça, I.E., Mendes, M.M., Dorça, F.A., Cattelan, R.G. (2017). Social and collaborative interactions for educational content enrichment in ULEs. Journal of Educational Technology & Society, 20, 133–144.


  • Becheru, A., Calota, A., Popescu, E. (2018). Analyzing students’ collaboration patterns in a social learning environment using StudentViz platform. Smart Learning Environments, 18(5).

  • Brusilovsky, P. (2001). Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11(1-2), 87–110.


  • Bull, S., & Kay, J. (2013) In Azevedo, R., & Aleven, V. (Eds.), Open Learner Models as Drivers for Metacognitive Processes, (pp. 349–365). New York: Springer.


  • Bull, S., & Wasson, B. (2016). Competence visualisation: Making sense of data from 21st-century technologies in language learning. ReCALL, 28(02), 147–165.


  • Clemente, J., Ramírez, J., de Antonio, A. (2011). A proposal for student modeling based on ontologies and diagnosis rules. Expert Systems with Application, 38(7), 8066–8078.


  • De Los Reyes, D.A.G., Thomas, E.A., da Rosa, L.L., Neto, W.P.G. (2019). Student success prediction: An analysis of the demand for a transfer learning approach. Brazilian Journal of Computers in Education - RBIE, 27(01), 01.


  • Dorça, F.A., Araújo, R.D., Carvalho, V., Resende, D., Cattelan, R.G. (2016). An automatic and dynamic approach for personalized recommendation of learning objects considering students learning styles: an experimental analysis. Informatics in Education, 15(1), 45–62.


  • Ferreira, H.N.M., Ribeiro, T.B., Araújo, R.D., Dorça, F.A., Cattelan, R.G. (2016). An automatic and dynamic student modeling approach for adaptive and intelligent educational systems using ontologies and bayesian networks. In Proceedings of the 28th International Conference on Tools with Artificial Intelligence. https://doi.org/10.1109/ICTAI.2016.0116, (pp. 738–745).

  • Ferreira, H.N.M., Araújo, R.D., Dorça, F.A., Cattelan, R.G. (2017a). Open student modeling for academic performance visualization in ubiquitous learning environments. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. https://doi.org/10.1109/SMC.2017.8122679, (pp. 641–646).

  • Ferreira, H.N.M., de Oliveira, G.P., Araújo, R.D., Dorça, F.A., Cattelan, R.G. (2019). An open model for student assessment visualization. In Proceedings of the 19th IEEE International Conference on Advanced Learning Technologies. https://doi.org/10.1109/ICALT.2019.00117, (pp. 375–379).

  • Ferreira, H.N.M., Ribeiro, T.B., Araújo, R.D., Dorça, F.A., Cattelan, R.G. (2017b). An automatic and dynamic knowledge assessment module for adaptive educational systems. In Proceedings of the 17th International Conference on Advanced Learning Technologies. https://doi.org/10.1109/ICALT.2017.86, (pp. 517–521).

  • Fraley, C., & Raftery, A.E. (1998). How many clusters? which clustering method? answers via model-based cluster analysis. The Computer Journal, 41(8), 578–588.


  • Ginon, B., Boscolo, C., Johnson, M.D., Bull, S. (2016). Persuading an open learner model in the context of a university course: An exploratory study. In Proceedings of the 13th International Conference on Intelligent Tutoring Systems. https://doi.org/10.1007/978-3-319-39583-8_34, (pp. 307–313).


  • Greiff, S., Niepel, C., Scherer, R., Martin, R. (2016). Understanding students’ performance in a computer-based assessment of complex problem solving: An analysis of behavioral data from computer-generated log files. Computers in Human Behavior, 61, 36–46.


  • Guerra, J., Hosseini, R., Somyurek, S., Brusilovsky, P. (2016). An intelligent interface for learning content: Combining an open learner model and social comparison to support self-regulated learning and engagement. In Proceedings of the 21st International Conference on Intelligent User Interfaces. https://doi.org/10.1145/2856767.2856784, (pp. 152–163).

  • Hair, J., Anderson, R., Babin, B. (2009). Multivariate Data Analysis, 7th edn: Prentice Hall.

  • Harris, S.C., & Kumar, V. (2018). Identifying student difficulty in a digital learning environment. In Proceedings of the IEEE 18th International Conference on Advanced Learning Technologies. https://doi.org/10.1109/icalt.2018.00054, (pp. 199–201).

  • Hartley, D., & Mitrovic, A. (2002). Supporting learning by opening the student model. In Proceedings of the 6th International Conference on Intelligent Tutoring Systems. https://doi.org/10.1007/3-540-47987-2_48, (pp. 453–462).


  • Hsiao, I.-H., Bakalov, F., Brusilovsky, P., König-Ries, B. (2013). Progressor: social navigation support through open social student modeling. New Review of Hypermedia and Multimedia, 19(2), 112–131.


  • Ilves, K., Leinonen, J., Hellas, A. (2018). Supporting self-regulated learning with visualizations in online learning environments. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education. SIGCSE ’18. https://doi.org/10.1145/3159450.3159509, (pp. 257–262).

  • Jacovina, M.E., Snow, E.L., Allen, L.K., Roscoe, R.D., Weston, J.L., Dai, J., McNamara, D.S. (2015). How to visualize success: Presenting complex data in a writing strategy tutor. In Proceedings of the 8th International Conference on Educational Data Mining, (pp. 594–595).

  • Jain, A.K. (2010). Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8), 651–666.


  • Landis, J.R., & Koch, G.G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.


  • Li, N., Cohen, W.W., Koedinger, K.R., Matsuda, N. (2011). A machine learning approach for automatic student model discovery. In Proceedings of the 4th International Conference on Educational Data Mining, (pp. 31–40).

  • Lindstaedt, S.N., Beham, G., Kump, B., Ley, T. (2009). Getting to know your user – unobtrusive user model maintenance within work-integrated learning environments. In Proceedings of the 4th European Conference on Technology Enhanced Learning. https://doi.org/10.1007/978-3-642-04636-0_9, (pp. 73–87).


  • Long, Y., & Aleven, V. (2013). Supporting students’ self-regulated learning with an open learner model in a linear equation tutor. In International Conference on Artificial Intelligence in Education. https://doi.org/10.1007/978-3-642-39112-5_23, (pp. 219–228).


  • Long, Y., & Aleven, V. (2016). Enhancing learning outcomes through self-regulated learning support with an open learner model. User Modeling and User-Adapted Interaction, 1–34. https://doi.org/10.1007/s11257-016-9186-6.


  • Mabbott, A., & Bull, S. (2006). Student preferences for editing, persuading, and negotiating the open learner model. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems. https://doi.org/10.1007/11774303_48, (pp. 481–490).


  • Mitrovic, A., & Martin, B. (2002). Evaluating the effects of open student models on learning. In Proceedings of the 2nd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems. Springer, Berlin, (pp. 296–305).


  • Mitrovic, A., & Martin, B. (2007). Evaluating the effect of open student models on self-assessment. International Journal of Artificial Intelligence in Education, 17(2), 121–144.


  • Mitrovic, A., & Thomson, D. (2009). Towards a negotiable student model for constraint-based ITSs. In Proceedings of the 17th International Conference on Computers in Education. https://doi.org/10.1142/s1793206810000797, (pp. 83–90).


  • Nguyen, Q., Rienties, B., Toetenel, L., Ferguson, R., Whitelock, D. (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior, 76, 703–714.


  • Self, J.A. (1990). Bypassing the intractable problem of student modelling. Intelligent tutoring systems: At the crossroads of artificial intelligence and education, 41, 1–26.


  • Vesin, B., Mangaroska, K., Giannakos, M. (2018). Learning in smart environments: user-centered design and analytics of an adaptive learning system. Smart Learning Environments, 24(5).


Acknowledgments

The authors are grateful for the support of FAPEMIG, CNPq, IFSULDEMINAS, FACOM/PPGCO/PROAP/UFU, PROPP/UFU and PET/MEC/SESu. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

Author information


Contributions

HF designed and developed OSM-V as part of his doctoral thesis and performed all experiments and analysis. GPO implemented the graphical Web interfaces and integrated them into CX. RA was a key contributor to OSM-V architecture and its integration to the CX platform. FD and RC were the project’s advisors and were major contributors in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Renan Cattelan.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ferreira, H., de Oliveira, G.P., Araújo, R. et al. Technology-enhanced assessment visualization for smart learning environments. Smart Learn. Environ. 6, 14 (2019). https://doi.org/10.1186/s40561-019-0096-z

