- Research
- Open access
- Published:
Multi-agent system-based framework for an intelligent management of competency building
Smart Learning Environments volume 11, Article number: 41 (2024)
Abstract
To measure the effectiveness of learning activities, intensive research works have focused on the process of competency building through the identification of learning stages as well as the setup of related key performance indicators to measure the attainment of specific learning objectives. To organize the learning activities as per the background and skills of each learner, individual learning styles have been identified and measured by several researchers. Despite their importance in personalizing the learning activities, these styles are difficult to implement for large groups of learners. They have also rarely been correlated with each specific learning stage. New approaches are, therefore, needed to intelligently coordinate all the learning activities while self-adapting to the ongoing progress of learning as well as to the specific requirements and backgrounds of learners. To address these issues, we propose in this paper a new framework for the intelligent management of the competency building process during learning. Our framework is based on a recursive spiral Assess-Predict-Oversee-Transit model that is orchestrated by a multi-agent system. This system is particularly responsible for enabling smart transitions between learning stages. It is also responsible for assessing and predicting the process of competency building of the learner and, then, making the right decisions about the learning progress accordingly. Results of our solution were demonstrated via an Augmented Reality app that we created using the Unity3D engine to train learners on Air Conditioner maintenance.
Introduction
Recent advances in digital transformation tools and approaches are resulting in tremendous shifts in the educational system. In fact, from the learner perspective, we are witnessing drastic changes in terms of commitments, behaviours, engagement, motivations, and learning. These changes are challenging education leaders, faculty, and researchers to fundamentally rethink the nature and the purpose of education and, thus, how it should be delivered. Within this perspective, an important ongoing trend is the shift from a teacher-focused paradigm to a student-focused one (Olivos et al., 2016). More precisely, we are moving from theory-focused to heavily technology-oriented practical classes, where educators need advanced concepts, theories, and strategies to make appropriate pedagogical transitions (Olivos et al., 2016). Despite the use of a wide range of emergent technologies (e.g., Virtual Reality, Internet of Things, etc.) that have proven their effectiveness in improving the learning process, it remains of paramount importance to thoroughly assess the learners' competency building process.
Competency is a fuzzy concept that focuses on a person’s behaviours and attributes (Wong, 2020). It can be defined as the set of skills, knowledge, and capabilities that a learner should have acquired when carrying out assigned tasks or accomplishing the expected goals (Chung & Lo, 2007). It can also be defined through several perspectives, such as the quality of the training, its outputs, as well as its impact on the skills and knowledge of the learners (Bowden & Masters, 1993; Rutherford, 1995). Several classifications have been proposed for competency (e.g., Dreyfus & Dreyfus, 1986; Murray & Donegan, 2003; Oates, 2003). In these classifications, numerous stages of learning have been identified, including novice, beginner, competent, and expert. In each of these stages, the learner is expected to fulfil related learning objectives with expected skills and knowledge. Several methods have been proposed to measure the learning attainment of learners. Some approaches (e.g., Bodea & Toader, 2013; Paquette, 2010) have defined models for the identification and calculation of Key Performance Indicators (KPIs), which represent quantitative indexes for the assessment of competency. Other approaches (e.g., Harrow, 1972; Lahtinen & Ahoniemi, 2005) have been based on the Bloom taxonomy to identify competency levels.
To improve the impact of learning on competency, several research works (e.g., Al Shaikh et al., 2019; Bifano, 2023; Espinoza-Poves et al., 2019) have focused on the identification of the learning style of each learner. A learning style can be defined here as the process by which a learner understands and retains information, thus earning knowledge or competencies (Adesunloye et al., 2008). To identify the appropriate learning styles of learners, Learning Style Inventories (LSIs) have been designed and widely used. An LSI typically takes the form of a questionnaire where respondents select the answers that most closely resemble their own preferences. Many models and measures of learning styles have been described in the literature (Romanelli et al., 2009), including Gardner's Multiple Intelligence Theory (Windsor et al., 2008), the Kirkpatrick model (Kirkpatrick, 1996), and Kolb's Learning Inventory (Kolb & Kolb, 2019). Despite criticism over the lack of empirical evidence (Riener & Willingham, 2010), learning style theories remain widely used (Allcoat & Mühlenen, 2018). In fact, these theories attracted additional attention during the COVID-19 pandemic (e.g., Albeta et al., 2021; Malik et al., 2021), where the proposed studies mainly focused on identifying the appropriate learning approaches to follow during online training. The impact of learning styles on e-learning was particularly debated (Truong, 2016), for example regarding how best to design adaptive virtual learning environments whilst considering learning styles (El-Sabagh, 2021). Furthermore, since competency is more likely to grow during the learning process, the learning style of the learner may change over time, depending on his learning stage. Few research works have investigated the interdependency between learning stages and learning styles (e.g., İlçin et al., 2018).
For example, debates have arisen about the effectiveness of engaging students with materials that require critical thinking but are less suited to learning more concrete information, as in physics or chemistry (Camp & Schnader, 2010).
Based on the results outlined in the literature, we argue that further investigations are still needed to identify the right learning style for the right learner, during the right learning stage. We also argue that new solutions are still needed to autonomously manage transitions from one learning stage to another, without the need for long, hectic manual interventions. To achieve these objectives, we propose in this paper a new framework for the intelligent management of competency building and learning progress. Our framework aims to identify the learning style of the learner in each learning stage, carry out the necessary analytics to predict and monitor the learning outcomes, and then make appropriate recommendations about the progression to the next learning stage, with the option of carrying over some unachieved KPIs to the next level. Our paper includes the following contributions: (1) A new framework for the intelligent management of learning progress; (2) An iterative spiral Assess-Predict-Oversee-Transit model for the management of learning progress; and (3) An agent-based solution for the management of competency building.
In the remainder of this paper, section "Related work" will outline the fundamental concepts and main works related to competency building and learning management. Section "Proposed solution" will shed light on our proposed solution for the intelligent management of learning progress. Section "Implementation and discussions" will describe the Augmented Reality prototype that we created for the management of the learning progress during Air Conditioner maintenance trainings.
Related work
The assessment of the effectiveness of learning has been addressed by intensive research works (e.g., Alsalamah & Callinan, 2021; Duignan, 2001; Farjad, 2012; Khandker et al., 2010). This effectiveness, which is commonly investigated with respect to the fulfilment of desired learning targets (Devi & Shaik, 2012), can be partially examined from the feedback to learners. Feedback can represent here any attempt to collect information on the impact of a training and to assess the value of the learning process considering that information (Topno, 2012). It also represents a fundamental part of a continuous quality assurance cycle (McNamara et al., 2010) that ultimately aims to analyse and monitor the process of competency building. Competency is basically meant to upskill the knowledge and know-how of the learners. To understand this concept, it is important to explore its relations with learning stages, learning styles, and learning progress.
According to the literature, the concept of competency is often defined through three perspectives: (i) The output (based on the observable performance of the learning processes) (Bowden & Masters, 1993); (ii) The quality of the training (refers to the lowest satisfactory level of performance) (Rutherford, 1995); and (iii) The traits (denote the fundamental attributes of an individual such as skills, knowledge, or abilities) (Kolligian & Sternberg, 1990). The competency may involve several elements, including (Rothwell & Kazanas, 1992): (i) The learning situation (the requirement's origin that will put the competency into action); (ii) The required attributes (e.g., attitudes, knowledge, and skills) that each learner must exhibit according to the current learning situation; (iii) The response (i.e. the action); and (iv) The outcomes or the consequences (i.e. the action's final results and their related performance with respect to quality standards). To delimit the features of competency, several classifications have been proposed. For example, the authors in Murray and Donegan (2003) have proposed broad competency subsystems, including management, technical learning, and operational. In addition, the authors in Oates (2003) have categorized competency into generic (related to the components of skills), occupational (concerns the level of an occupation in terms of activities at a high level of generality), task-specific (specific competences that are not related to explicit jobs), job-specific (describes the execution way of the task within a particular work system), and person-specific (related to the way a task is executed by an individual within a designated work system).
Based on the classifications proposed for competency, several related models have been suggested, including the widely used Skills Acquisition Model (Dreyfus & Dreyfus, 1986). This model is commonly deployed to explain the development of competency of a given individual. It includes the following five learning stages that delimit the process of skill acquisition (Fig. 1) (Dreyfus & Dreyfus, 1986):
- Novice: The individual is highly dependent on the mentor and needs well-defined directions. He abides by the rules and applies them with devotion. He does not consider himself responsible for the outcomes and does not identify any learning pattern, particularly since everything in his environment is new to him.
- Advanced beginner: The individual is more familiar with trends and patterns. He can recognize any new situation where the rules can be applied. However, he does not feel accountable for the outcomes of these rules; indeed, he is still dependent on the mentor to make the right decisions while solving problems.
- Competent: The individual knows the common problems and their solutions. He follows the rules and applies them with conscious reasoning. He feels responsible for the outcomes of his actions.
- Proficient: Based on his extended experience, the individual deploys analytical skills and pattern recognition to identify the problems and the rules to apply as well as to formulate the expected solutions.
- Expert: The individual has advanced experience. He can immediately see what is occurring and how to address the situation. He grasps new material faster than he forgets old material. He recognizes patterns without effort. He is very confident and knows his abilities and limits perfectly.
Several methods have been proposed to measure, monitor, and enhance the performance of learners. In this regard, the authors in Rothwell and Kazanas (1992) have used a set of quantitative indicators, related quantitative requirements, the Delphi survey technique (i.e. series of questionnaires to create theories and opinions about the learning process), and Fuzzy Set Theory to assess the attainment of learning objectives and display progress using graphical indicators. The authors in Yeung et al. (2009) have defined a model for the identification and calculation of learning KPIs. This model was adopted in Masron et al. (2011), where the authors have defined three categories of competency: Methodical, personal-social, and strategic-organizational. The authors in Paquette (2010) have proposed several factors to measure competency, including the context of usage, frequency, autonomy, practicability, and complexity of tasks. They have then mapped them with respect to a set of levels of performance (Table 1). The authors in Toader and Brad (2015) have investigated competency levels with the use of the Bloom taxonomy. To this end, they have defined six stages of the cognitive domain (i.e. knowledge, comprehension, application, analysis, synthesis, and evaluation) to assess learning criteria, depending on the envisioned learning activities. Furthermore, the Kirkpatrick model (Kirkpatrick, 1996) is widely used as an important approach to measure the impact of any training program on competency building. It includes four levels: Reaction, learning, behaviour, and results. The reaction level typically measures whether the learner found the training useful, engaging, and appropriate. The learning level measures whether the learner acquired the skills, knowledge, confidence, commitment, and attitude addressed by the training. The behaviour level measures the changes in the behaviour of the learner after completing the training. More precisely, it measures whether the learner is applying what was learnt.
The results level measures whether the expected outcomes of the training were achieved, including any financial or morale impact of the training.
To build the right competency, it is of paramount necessity to investigate the learning styles of each individual learner. To model and measure these styles, several solutions have been proposed in the literature (e.g., Romanelli et al., 2009; Windsor et al., 2008). The widely adopted Kolb Learning Cycle model (Kolb & Kolb, 2019) relies mainly on the statement that the more an individual reflects on a given task, the more opportunities he has to refine his efforts. This model (Fig. 2) allows for small and incremental improvements. It includes four learning stages: Experiencing (i.e. being immersed in the doing), reflection (i.e. reviewing what was learned), conceptualization (i.e. interpreting the task), and planning (i.e. predicting consequent actions). To measure the learning styles of learners, the model uses a Learning Style Inventory (LSI). The LSI is basically a short survey that aims to classify the learning preferences of a learner as diverging, assimilating, converging, or accommodating. More precisely, a learner who prefers a diverging learning style demonstrates a strong ability in imagination. He achieves well in brainstorming situations. An individual who prefers an assimilating learning style likes to create theoretical models. On the one hand, he has a high interest in theory and logic as well as high concern for abstract concepts. On the other hand, he has less interest in people. A learner who prefers a converging learning style likes the practical application of ideas and follows hypothetical-deductive reasoning. He remains comparatively impassive and prefers to handle things instead of people. A learner who prefers an accommodating learning style likes performing things, taking risks, and having new experiences. He can adapt immediately to specific circumstances. However, he may be seen as pushy and impatient.
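For illustration, the four-style classification described above can be sketched in code. The scoring below is a simplified, hypothetical scheme built on Kolb's two axes (abstract conceptualization vs. concrete experience, and active experimentation vs. reflective observation); the axis scores, thresholds, and function name are assumptions, not Kolb's official LSI scoring.

```python
# Simplified sketch of a Kolb-style learning-style classification.
# The two axis scores and the zero thresholds are illustrative assumptions.

def classify_kolb_style(ac_minus_ce: float, ae_minus_ro: float) -> str:
    """Map two Kolb-like axis scores to one of the four styles.

    ac_minus_ce: Abstract Conceptualization minus Concrete Experience.
    ae_minus_ro: Active Experimentation minus Reflective Observation.
    """
    abstract = ac_minus_ce >= 0   # prefers thinking over feeling
    active = ae_minus_ro >= 0     # prefers doing over watching
    if abstract and active:
        return "converging"       # practical application of ideas
    if abstract and not active:
        return "assimilating"     # theoretical models, logic
    if not abstract and active:
        return "accommodating"    # doing, risk-taking
    return "diverging"            # imagination, brainstorming

print(classify_kolb_style(-3.0, -2.0))  # diverging
print(classify_kolb_style(4.0, 5.0))    # converging
```

A real inventory would aggregate many questionnaire items into the two axis scores before this final mapping step.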
Our thorough investigation of the current literature shows an untapped research avenue concerning the systematic management of learning progress. More precisely, we argue that there is a critical need for solutions where: (1) The process of competency building is flexibly adapted to the appropriate learning style of the learner for the current learning stage; and (2) The learning process and progress are intelligently assessed and triggered, accordingly. This would particularly create self-adapting learning mechanisms toward more personalized learning and better competency building.
Proposed solution
The iterative assess-predict-oversee-transit model
Without loss of generality, we assume that the learning process of any given learner includes n Stages of Learning (SoL) (i.e. \(Learning = \{SoL_1, SoL_2, \ldots, SoL_n\}\), where \(n\) is the number of SoLs). These stages may refer to the learning steps in the model of Zull (Zull, 2002) or to the levels of learning in the model of Kirkpatrick (Kirkpatrick, 1996). In the specific model of Dreyfus and Dreyfus (Dreyfus & Dreyfus, 1986), the stages are represented as a ladder from novice to expert (see Fig. 1). For any given SoL \(m\), we assume that the learner must demonstrate a set of skills, which will be assessed according to several KPIs (Fig. 3a). We assume in this paper that the SoLs are ordered (i.e. \(SoL_i\) is the pre-requisite of \(SoL_{i+1}\); in other words, to pass a given \(SoL_i\), the learner must pass all the KPIs of this stage. This condition may be relaxed by allowing a learner to carry over some KPIs that he failed to the next SoL). To manage the learning progress, we propose in this paper an iterative process called APOT, comprising the Assess, Predict, Oversee, and Transit steps (Fig. 3b). The APOT model is designed to ensure a continuous and adaptive learning experience, tailored to the needs and performance of each learner. Its steps are as follows:
- Assess step: The assess step includes a survey based, for example, on Kolb's theory. It will allow for the identification of the learning style of the learner. It will be particularly used to identify the best learning approach for the learner to follow and to anticipate its impact on the learner's achievements. As such, this can ultimately be used to enhance the learner's meta-cognition and self-awareness of his own weaknesses and strengths, particularly since most learners are unaware of their learning styles (Kirkpatrick & Kirkpatrick, 2009). In this step, various assessment tools and techniques (e.g., self-assessment questionnaires, observation checklists, interviews, and diagnostic tests) are employed to gather data on the learner's current knowledge, skills, and preferences. The outcome is a comprehensive understanding of the learner's profile, which serves as a foundation for personalized learning strategies.
- Predict step: Based on the results of the assess step, a subsequent prediction step is executed to foresee the performance of the trainee. The importance of this step lies in identifying opportunities to provide trainees with better personalized learning in the upcoming steps. This step involves using predictive analytics and modeling (e.g., regression analysis, machine learning algorithms, decision trees, and neural networks) to forecast the learner's potential performance in future tasks or stages. By leveraging historical data and assessment results, educators can anticipate areas where the learner might excel or struggle, allowing for proactive adjustments to the learning plan.
- Oversee step: The third step is about monitoring the training progress and collecting related data on the performance of the trainee. Continuous monitoring and oversight ensure that the learner stays on track. This step involves regular check-ins, feedback sessions, and performance tracking. Data collected during this stage helps in identifying trends, spotting potential issues early, and making necessary interventions to support the learner's progress.
- Transit step: The Transit step is about revising the performance of the trainee. It concerns the identification of the aspects of the learning that the trainee has passed (i.e. learning outcomes successfully completed) and those that he has failed. In this step, the learner's performance is thoroughly reviewed to determine readiness for progression to the next stage of learning. This involves evaluating which KPIs have been met and addressing any areas of deficiency. The transition decision is made based on whether the learner has sufficiently demonstrated the required competencies for the current stage. The APOT cycle is then applied to the next SoL, ensuring a structured yet flexible progression through the learning process.
The decision-making process within the APOT model is data-driven and learner-centric. At each step, decisions are based on quantitative and qualitative data collected during the assessment, prediction, oversight, and transit steps. This ensures that each learner receives a personalized and effective learning experience, with adjustments made in real-time to address their unique needs and challenges.
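As a rough illustration, the APOT cycle over ordered SoLs can be sketched as follows. The assess/predict/oversee stubs, the data shapes, and the pass criterion (every KPI meets its target, with failed KPIs carried over to the next stage) are simplifying assumptions for this sketch, not the actual implementation.

```python
# Minimal runnable sketch of the Assess-Predict-Oversee-Transit (APOT) cycle.
# The three step functions are stubs; in a real system they would run the
# learning-style survey, predictive analytics, and live performance tracking.

def assess(learner):
    """Assess step: identify the learning style (stubbed)."""
    return learner.get("style", "diverging")

def predict(learner, sol, style):
    """Predict step: forecast per-KPI performance (stubbed as last scores)."""
    return {kpi: learner["scores"].get(kpi, 0.0) for kpi in sol["kpis"]}

def oversee(learner, sol, forecast):
    """Oversee step: collect actual performance (stubbed from learner data)."""
    return {kpi: learner["scores"].get(kpi, 0.0) for kpi in sol["kpis"]}

def apot(stages, learner):
    """Run the APOT cycle across the ordered Stages of Learning."""
    carried = []   # KPIs failed in the previous SoL, carried over
    history = []
    for sol in stages:
        style = assess(learner)                  # style may change per stage
        forecast = predict(learner, sol, style)
        results = oversee(learner, sol, forecast)
        failed = [k for k, target in sol["kpis"].items() if results[k] < target]
        history.append({"sol": sol["name"], "failed": failed + carried})
        carried = failed                         # Transit: carry failures over
    return history

stages = [
    {"name": "Novice", "kpis": {"safety": 0.7, "tools": 0.6}},
    {"name": "Beginner", "kpis": {"diagnosis": 0.7}},
]
learner = {"style": "converging",
           "scores": {"safety": 0.8, "tools": 0.5, "diagnosis": 0.9}}
print(apot(stages, learner))
```

In this toy run, the learner fails the hypothetical "tools" KPI in the Novice stage, so it is carried over into the Beginner stage alongside that stage's own results.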
Agent-based management of competency building
Several research works have proposed solutions to manage the process of competency building based on learning scales or stages (e.g., Menaka & Nandhini, 2019; Russo, 2016; Simsek et al., 2021). In these works, the transition from one stage to another is commonly performed based on a set of skills or learning outcomes that the learner must achieve. This transition lacks flexibility as it does not appropriately adapt to the individual learning progress of each trainee. To address this issue, we propose to enable smart transitions between learning stages via the use of a Multi-Agent System (MAS) approach (Fig. 4). The use of a MAS provides a decentralized and adaptive approach, ensuring that each learner's progress is managed efficiently and tailored to their individual needs. The intelligent agents in our MAS are responsible for assessing the process of competency building of the learner according to our APOT model and, then, for making the right decisions about the learning progress.
Our MAS solution includes several types of agents (Figs. 5 and 6), including Learning Style Assessment Agent (LSAA), SoL Prediction Agent (SoLPA), SoL Overseeing Agent (SoLOA), and SoL Transition Agent (SoLTA). These agents are assigned to the steps of our APOT model (i.e. assessment, prediction, overseeing, and transition, respectively).
- Learning Style Assessment Agent (LSAA): This agent is responsible for conducting the initial assessment of the learner's preferred learning styles and strengths. Using surveys and assessment tools based on theories such as Kolb's, the LSAA gathers data that helps tailor the learning experience to the individual.
- SoL Prediction Agent (SoLPA): Based on the data provided by the LSAA, the SoLPA predicts the learner's future performance. This agent uses predictive analytics to identify potential areas of success and difficulty, enabling proactive adjustments to the learning plan.
- SoL Overseeing Agent (SoLOA): The SoLOA continuously monitors the learner's progress, collecting performance data and providing ongoing feedback. This agent ensures that the learning process is on track and makes real-time adjustments as needed to support the learner.
- SoL Transition Agent (SoLTA): When the learner reaches the end of a Stage of Learning (SoL), the SoLTA reviews their performance and decides if they are ready to move on to the next stage. This agent evaluates which KPIs have been met and identifies any areas that need further development.
Furthermore, each KPI in our solution is managed by a dedicated KPI Agent (KPIA). Each KPIA uses the assessments received from the LSAA. It also uses the previous performance of the trainee to predict his future performance. To summarize the performance of the trainee during a given SoL as well as prepare the transition to the next SoL, we assign a dedicated software agent to each SoL (Fig. 5). This agent, called SoL Agent (SoLA), receives frequent updates from the LSAA, SoLPA, SoLOA, and SoLTA agents of its SoL. The SoLA acts as a coordinator, integrating data and recommendations from all the specialized agents to provide a holistic view of the learner's progress. It ensures that the learner's journey through each SoL is coherent and aligned with their personal learning needs. The SoLA is particularly responsible for advising the next SoLA on the KPIs that the trainee failed as well as on any recommendations received from the agents of its SoL. This inter-agent communication facilitates smooth transitions and continuous improvement in the learner's competency building process.
The interaction among these agents is pivotal for ensuring seamless transitions and effective management of the learning process. Coordination is facilitated through a shared communication protocol that allows agents to exchange information about the learner's status, performance metrics, and recommendations. This decentralized approach ensures that decisions are made collaboratively and based on comprehensive, real-time data.
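A minimal sketch of this coordination could look as follows. The message bus, class names, and message format are hypothetical simplifications; a real MAS platform (e.g., JADE for Java) would provide richer agent lifecycles and communication protocols.

```python
# Hypothetical sketch of per-SoL agent coordination: specialized agents
# (LSAA, SoLPA, SoLOA, SoLTA) post updates to a coordinating SoLA through
# a shared message queue.

from collections import defaultdict

class MessageBus:
    """Minimal shared communication channel between agents."""
    def __init__(self):
        self.inbox = defaultdict(list)

    def send(self, recipient, message):
        self.inbox[recipient].append(message)

    def receive(self, recipient):
        # Hand over and clear the recipient's pending messages.
        msgs, self.inbox[recipient] = self.inbox[recipient], []
        return msgs

class SoLAgent:
    """Coordinator: aggregates updates from the stage's specialized agents."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        self.state = {}

    def step(self):
        for msg in self.bus.receive(self.name):
            self.state[msg["from"]] = msg["payload"]
        return self.state

bus = MessageBus()
sola = SoLAgent("SoL1", bus)
bus.send("SoL1", {"from": "LSAA", "payload": {"style": "converging"}})
bus.send("SoL1", {"from": "SoLOA", "payload": {"kpi_safety": 0.8}})
print(sola.step())
```

The same bus could carry the SoLA-to-next-SoLA handover messages (failed KPIs and recommendations) described above.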
Performance assessment
To assess the performance of a given trainee, we propose to calculate three main values that we call here Learning Index (\(LI\)), Stage of Learning Index (\(SoLI\)), and Competency Index (\(CI\)). The \(LI\) represents the performance progress of the trainee during the attainment of a given KPI. It is calculated for a specific KPI \(j\) as follows:
where \({s}_{new}\), \({s}_{old}\), and \({s}_{max}\) refer to the new score, the previous score, and the maximum score that can be obtained by the trainee, respectively (we assume here that the initial scores are set to \(0\)). The parameter \(w\) refers to a specific weight linking the importance of the KPI to the knowledge level of the trainee. This weight can be defined by the owner of the training. It can also be inspired by several existing works (e.g., Abdel-Maksoud & Saknidy, 2016; Borich, 1980; Misanchuk, 1984) that have focused on estimating and ranking training needs. These works have mainly relied on quantitative methods to calculate the average level of knowledge, the average degree of importance, and the incongruity between knowledge and importance (Abdel-Maksoud & Saknidy, 2016). In this regard, Borich (1980) proposed the first equation to assess training needs by linking importance and knowledge as follows:
where \(N\), \(I\), \(K\), and \(\overline{K}\) refer to the training need, importance, knowledge, and mean importance, respectively. Misanchuk (1984) relied on multivariate analysis to review data across any number of learners and educational components. To this end, the author proposed the Delta N method, which suggests error weights for the cell distribution and assumes some probability distribution of the marginal values. The method ultimately aims to calculate one numerical index of training needs per skill, across all trainees. This index is calculated as follows:
where \(R\) refers to the maximum value in the scale of importance, \(C\) refers to the maximum value in the scale of knowledge, \({W}_{ij}\) refers to the error weight for cell \((i,j)\), \({P}_{ij}\) refers to the probability of a randomly sampled observation falling into cell \((i,j)\), and \({P}_{i}\) and \({P}_{j}\) refer to the expected marginal probabilities for row \({R}_{i}\) and column \({C}_{j}\).
The works above have used a matrix that specifies training weights based on dedicated scales for the importance of the training topic as well as for the related knowledge level of the trainee (Fig. 7a). In order to illustrate our approach, let us assume that the importance and knowledge scales have \(n\) and \(m\) levels, respectively. For any given knowledge level \(i\) and importance level \(j\), there will be a weight \({w}_{ij}\). This weight will be applied to the corresponding KPI, if any.
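A lookup of such a weight could be sketched as follows. The matrix values below are hypothetical; in practice they would be set by the training owner or derived from a needs-assessment method such as Borich's.

```python
# Illustrative lookup of the KPI weight w_ij from an importance-by-knowledge
# matrix, in the spirit of Fig. 7a. All numeric values are hypothetical.

# Rows: knowledge level i (low -> high); columns: importance level j (low -> high).
WEIGHTS = [
    [0.20, 0.50, 0.90],  # low knowledge: important topics weigh the most
    [0.10, 0.30, 0.60],
    [0.05, 0.10, 0.20],  # high knowledge: little remaining training need
]

def kpi_weight(knowledge_level: int, importance_level: int) -> float:
    """Return w_ij for a KPI, given the trainee's knowledge level and
    the importance level of the training topic."""
    return WEIGHTS[knowledge_level][importance_level]

print(kpi_weight(0, 2))  # low knowledge, high importance -> 0.9
```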
To illustrate the concept of learning index, let us consider the specific example of Table 2, where we assume that the current SoL includes 5 KPIs. We also assume specific values for previous learning indexes. We use our Eq. (1) to calculate the new learning indexes. For example, the learning index for KPI 1 is calculated as follows:
The \(SoLI\) represents the overall performance of the trainee during a given Stage of Learning (SoL). This index is calculated for a specific SoL \(k\) as follows:
where \(p\) refers to the number of KPIs in the SoL.
Using the example of Fig. 7 and Table 2, the \(SoLI\) during the SoL \(k\) will be calculated using Eq. (4) as follows:
The Competency Index (\(CI\)) represents the overall progress of the trainee. This index is calculated as follows:
where \(q\) refers to the number of learning stages.
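The three indices can be illustrated in code. The exact formulas used below (\(LI\) as a weighted, normalized score gain; \(SoLI\) and \(CI\) as plain averages over KPIs and stages) are one plausible reading of the definitions above and should be treated as assumptions for this sketch.

```python
# Illustrative computation of the three indices. The formulas are assumed:
# LI as a weighted, normalized score gain, and SoLI / CI as averages.

def learning_index(s_new, s_old, s_max, w):
    """LI for one KPI: weighted, normalized progress from s_old to s_new."""
    return w * (s_new - s_old) / s_max

def sol_index(learning_indexes):
    """SoLI: average LI over the p KPIs of the stage (assumed formula)."""
    return sum(learning_indexes) / len(learning_indexes)

def competency_index(sol_indexes):
    """CI: average SoLI over the q stages of learning (assumed formula)."""
    return sum(sol_indexes) / len(sol_indexes)

# Hypothetical stage with two KPIs: scores on a 0-10 scale, weights from Fig. 7a.
lis = [learning_index(8, 5, 10, 0.9), learning_index(6, 6, 10, 0.5)]
print(round(sol_index(lis), 3))
```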
Implementation and discussions
Motivating scenario
We consider in this paper the case study of a training on Air Conditioner (AC) maintenance. Experts commonly train learners during face-to-face sessions, where a one-size-fits-all approach that does not take into consideration the specific learning styles of learners remains of limited performance. Furthermore, following a one-to-one coaching approach would be time-consuming and cost-ineffective. To overcome these shortcomings, we created an Augmented Reality (AR) app to improve competency building during AC maintenance trainings. The use of AR is mainly motivated by its proven capability to bring new means of immersive professional training. Indeed, by extending the actual working environment with virtual instructional content, AR can support situational cognition and practical learning (Bower et al., 2014; Kansal & Singhal, 2018) as well as promote technical learning in professional training through a variety of visual digital hints (e.g., symbols, 3D objects, animation). AR can also accommodate group learning and shared learning experiences (Abhari et al., 2015). Furthermore, by allowing the easy creation, modification, and replication of digital artifacts, AR can significantly decrease the cost of trial-and-error repetitions.
Prototype
The prototype of our AR app is created using the Unity3D engine. It includes a 3D model of an AC that was customized with the Blender software. The animations in the prototype are created using C# scripts. Training performance is handled using a Java-based multi-agent system. In our AR app prototype, trainees are requested to follow a set of steps to solve the specific problem of a gas leak. Cues about these steps are provided to the trainees based on the scores obtained during their learning progress. We summarize these operations in Fig. 8, where the SoLA agent of a given stage of learning receives the predictions from the KPI agents and then analyses the performance of the trainee accordingly. Based on this analysis, as well as on the scores received during previous attempts, a set of questions is proposed to the trainee.
The assessment of the performance of the trainee in our prototype is currently decided based on the number of attempts, the correct actions, as well as the time spent on each task. For the sake of illustration, the learning levels considered in this paper are Novice, Beginner, Proficient, and Expert (Fig. 9). As depicted in Fig. 9a–d, the hints given to the trainee change based on the progress of his competency.
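As an illustration of how such an assessment could drive the hints, the sketch below combines the three measures (attempts, correct actions, time) into a single score and maps it to a level. The weighting, thresholds, and function names are hypothetical, not the prototype's actual rule.

```python
# Hypothetical scoring rule combining the three measures collected by the
# prototype and mapping them to the hint levels of Fig. 9. The weights
# (0.5/0.3/0.2) and level thresholds are illustrative assumptions.

def performance_score(attempts, correct_actions, total_actions,
                      time_s, time_limit_s):
    """Combine the three measures into a single score in [0, 1]."""
    accuracy = correct_actions / total_actions
    attempt_penalty = 1.0 / attempts                 # fewer attempts -> higher
    time_factor = max(0.0, 1.0 - time_s / time_limit_s)
    return 0.5 * accuracy + 0.3 * attempt_penalty + 0.2 * time_factor

def hint_level(score):
    """Map the score to the levels used in the prototype (Fig. 9)."""
    if score < 0.4:
        return "Novice"       # detailed, step-by-step hints
    if score < 0.6:
        return "Beginner"
    if score < 0.8:
        return "Proficient"
    return "Expert"           # minimal hints

s = performance_score(attempts=2, correct_actions=8, total_actions=10,
                      time_s=120, time_limit_s=300)
print(hint_level(s))
```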
Limitations and future directions
Despite the promising benefits of our proposed APOT and MAS approach, several limitations must be acknowledged. More specifically, the complexity of implementing and maintaining a MAS can be high, requiring significant computational resources and expertise in both Artificial Intelligence (AI) and educational technologies. In addition, the success of the system relies heavily on the quality and accuracy of the input data. Indeed, poor data can lead to incorrect assessments and predictions. Furthermore, the effectiveness of the AR app is still limited: the current prototype must be extended with more comprehensive training content and improved feedback to users. Hardware capabilities and user familiarity with AR technology may also be major hindrances to the adoption of our AR solution.
To overcome the abovementioned limitations, we aim to integrate more advanced machine learning algorithms to improve the accuracy of predictions and assessments. We also aim to expand the AR app to support a wider range of learning scenarios and technical skills beyond AC maintenance. Moreover, developing a more user-friendly interface and providing comprehensive training for educators on using these technologies would facilitate the adoption and use of the app. Investigating the scalability of the proposed solution to larger groups of learners and diverse educational contexts will be crucial for improving the performance of the implemented AI mechanisms and the personalization of learning activities.
Ethical considerations and privacy
The use of MAS in educational settings may raise some ethical considerations and privacy concerns. One major issue is data privacy. The system may, indeed, collect and process large amounts of personal data about learners, which must be safeguarded against unauthorized access and use. Ensuring compliance with data protection regulations (e.g., GDPR) is of paramount importance. There is also a need to address the ethical implications of using AI for decision-making in education. It is important to ensure that the algorithms used are transparent and free from biases that could unfairly disadvantage certain groups of learners.
Furthermore, the reliance on MAS and AI in general should not undermine the role of human educators. It should rather complement and enhance their capabilities. There should be clear guidelines on how the data is used, and learners should be informed and consent to the data collection processes. Learners should also be provided with mechanisms to review and contest any decisions made by the AI system that they perceive as unfair or incorrect.
Conclusion
The efficiency of learning has been addressed by intensive research works, where numerous learning stages have been defined and assessed. To improve this efficiency, efforts have particularly focused on identifying and monitoring learners’ learning styles. More specifically, several solutions have addressed the process of competency building by customizing the learning progress as per the capabilities and skills of each learner. However, in spite of their proven results, these solutions remain inflexible and slow, especially when adapting the competency building process to individual learners. To address these issues, we proposed in this paper a new framework for an intelligent management of the competency building process. Our framework is based on a recursive spiral Assess-Predict-Oversee-Transit model which is managed by a multi-agent system. This system includes several types of agents that predict, assess, coordinate, and monitor the learning activities depending on the learning styles and learning stages of learners. The transitions to the right learning stages are then decided along with appropriate recommendations.
To showcase some of our ideas, we created an AR app prototype for the management of training activities addressing the maintenance of Air Conditioners (AC). Our prototype outlined some of the main concepts included in our framework. Nevertheless, several issues still require additional investigation and implementation. More precisely, we plan to focus our future research on implementing extended intelligent mechanisms for the assessment of the training performance. We also plan to test our solution by involving several trainees with different backgrounds and knowledge.
Availability of data and materials
The AR app and its related data/code are available from the corresponding author on reasonable request.
Abbreviations
- APOT: Assess-Predict-Oversee-Transit
- AC: Air Conditioner
- KPI: Key Performance Indicator
- KPIA: KPI Agent
- LSAA: Learning Style Assessment Agent
- SoL: Stage of Learning
- SoLI: Stage of Learning Index
- SoLA: SoL Agent
- SoLPA: SoL Prediction Agent
- SoLOA: SoL Overseeing Agent
- SoLTA: SoL Transition Agent
- LSI: Learning Style Inventory
- LI: Learning Index
- CI: Competency Index
- AR: Augmented Reality
References
Abdel-Maksoud, B. M., & Saknidy, S. (2016). A new approach for training needs assessment. Journal of Human Resource and Sustainability Studies, 4, 102–109. https://doi.org/10.4236/jhrss.2016.42012
Abhari, K., Baxter, J. S. H., Chen, E. C. S., Khan, A. R., Petters, T. M., de Ribaupierre, S., & Eagleson, R. (2015). Training for planning tumour resection: Augmented reality and human factors. IEEE Transactions on Biomedical Engineering, 62, 1466–1477.
Adesunloye, B. A., Aladesanmi, O., Henriques-Forsythe, M., & Ivonye, C. (2008). The preferred learning style among residents and faculty members of an internal medicine residency program. Journal of the National Medical Association, 2008(100), 172–175.
Al Shaikh, A., Aldarmahi, A. A., AL-Sanie, E., Subahi, A., Ahmed, M. E., Hydrie, M. Z., & Al-Jifree, H. (2019). Learning styles and satisfaction with educational activities of Saudi Health Science University students. Journal of Taibah University Medical Sciences, 14(5), 418–424. https://doi.org/10.1016/j.jtumed.2019.07.002
Albeta, S. W., Haryati, S., Futra, D., Aisyah, R., & Siregar, A. D. (2021). The effect of learning style on students’ learning performance during the COVID-19 pandemic. JTK (Jurnal Tadris Kimiya), 6, 115–123. https://doi.org/10.15575/jtk.v6i1.12603
Allcoat, D., & Mühlenen, A. V. (2018). Learning in virtual reality: Effects on performance, emotion and engagement. Research in Learning Technology.
Alsalamah, A., & Callinan, C. (2021). Adaptation of Kirkpatrick’s four-level model of training criteria to evaluate training programmes for head teachers. Education Sciences, 11(3), 116. https://doi.org/10.3390/educsci11030116
Bifano, L. J. (2023). Exploring learning styles of university students involved in entrepreneurial activities, PhD Dissertation, Auburn University, Available at: https://etd.auburn.edu/handle/10415/8571 (last accessed in 18 Feb 2023).
Bodea, C. N., Toader, E.-A. (2013). Development of the PM competency model for IT professionals, base for HR management in software organizations. In 12th International conference on informatics in economy (IE 2013), education, research and business Technologies, Bucharest, April 2013.
Borich, C. D. (1980). A needs assessment model for conducting follow-up studies. Journal of Teacher Education, 31, 39–42. https://doi.org/10.1177/002248718003100310
Bowden, J. A., & Masters, G. N. (1993). Implications for higher education of a competency-based approach to education and training. Australian Government Publishing Service.
Bower, M., Howe, C., McCredie, N., Robinson, A., & Grover, D. (2014). Augmented reality in education: Cases, places and potentials. Educational Media International, 51, 1–15.
Camp, J., & Schnader, A. (2010). Using debate to enhance critical thinking in the accounting classroom: The Sarbanes-Oxley act and US Tax Policy. Issues in Accounting Education, 25(4), 655–675. https://doi.org/10.2308/iace.2010.25.4.655
Chung, R. G., & Lo, C. L. (2007). The development of teamwork competence questionnaire: Using students of business administration department as an example. International Journal of Technology and Engineering Education, 55–57.
Devi, V. R., & Shaik, N. (2012). Training & development: A jump starter for employee performance and organizational effectiveness. International Journal of Social Science & Interdisciplinary Research, 1, 202–207.
Dreyfus H. L., & Dreyfus S. E. (1986). Mind over machine. The power of human intuition and expertise in the era of the computer, Cambridge, England: Basil Blackwell Ltd.
Duignan, P. (2001). Introduction to strategic evaluation: Section on evaluation approaches, purposes, methods and designs. Available online: http://www.parkerduignan.com/documents/104.htm. Accessed on 10 Jan 2023.
El-Sabagh, H. A. (2021). Adaptive e-learning environment based on learning styles and its impact on development students’ engagement. International Journal of Educational Technology in Higher Education, 18, 53. https://doi.org/10.1186/s41239-021-00289-4
Espinoza-Poves, J. L., Miranda-Vilchez, W. A., & Chafloque-Céspedes, R. (2019). The VARK learning styles among university students of business schools. Journal of Educational Psychology - Propositos y Representaciones, 7(2), 401–415.
Farjad, S. (2012). The evaluation effectiveness of training courses in university by Kirkpatrick model case study: Islamshahr University. Procedia - Social and Behavioral Sciences, 46, 2837–2841.
Harrow, A. (1972). A taxonomy of psychomotor domain: A guide for developing behavioral objectives. David McKay.
İlçin, N., Tomruk, M., Yeşilyaprak, S. S., et al. (2018). The relationship between learning styles and academic performance in TURKISH physiotherapy students. BMC Medical Education, 18, 291. https://doi.org/10.1186/s12909-018-1400-2
Kansal, J., & Singhal, S. (2018). Development of a competency model for enhancing the organisational effectiveness in a knowledge-based organisation. International Journal of Indian Culture and Business Management., 16, 287. https://doi.org/10.1504/IJICBM.2018.090909
Khandker, S. R., Koolwal, G. B., & Samad, H. A. (2010). Handbook on impact evaluation: Quantitative methods and practices (1st ed.). The International Bank for Reconstruction and Development/The World Bank. ISBN 9780821380284.
Kirkpatrick, D. L. (1996). Great ideas revisited: Revisiting Kirkpatrick’s four-level model. Training and Development, 50(1), 54–57.
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2009). Evaluating: Part of a ten-step process. In Evaluating training programs (pp. 3–20). Berrett-Koehler Publishers. ISBN 9781576757963.
Kolb, A. Y., & Kolb, D. A. (2019). The Kolb learning style inventory version 3.2. Hay Group, Boston, MA. https://infokf.kornferry.com/US-PS-Talent-NUR-2015-12-Catalog-lead-nurtures-N-America-LANG-EN-X1Y3_CATALOG_US_LTSITE_LP_LSI32.html
Kolligian, J., & Sternberg, R. J. (1990). Competence considered. Yale University Press.
Lahtinen, E., & Ahoniemi, T. (2005). Visualizations to support programming on different levels of cognitive development. In Proceedings of the fifth Koli calling conference on computer science education (pp. 87–94).
Lajis, A., Nasir, H. M., & Aziz, N. A. (2018). Proposed assessment framework based on Bloom taxonomy cognitive competency: Introduction to programming (pp. 97–101). https://doi.org/10.1145/3185089.3185149
Malik, M., Amjed, S., & Hasani, S. (2021). COVID-19 and learning styles: GCET as case study. Computers, Materials & Continua., 680, 103–115. https://doi.org/10.32604/cmc.2021.014562
Maslow, A. (1943). A theory of human motivation. Psychological Review, 50(4), 370–396.
Masron, T. A., Ahmad, Z., & Marimuthu, M. (2011). Positioning KPIs in the performance of universities. Universiti Sains Malaysia.
McNamara, G., Joyce, P., O’Hara, J. (2010). Evaluation of adult education and training programs. International Encyclopedia of Education, 548–554.
Menaka, P., & Nandhini, K. (2019). Performance of data mining classifiers on kolb’s learning style inventory (KLSI). Indian Journal of Science and Technology, 12, 23.
Misanchuk, E. R. (1984). Analysis of multi-component educational and training needs. Journal of Instructional Development, 7, 28–33. https://doi.org/10.1007/BF02905590
Murray, P. & Donegan, K. (2003). Empirical linkages between firm competencies and organizational learning. The Learning Organization, 10 (1), MCB:51–62.
Oates, T. (2003). Key skills/key competencies: Avoiding the pitfalls of current initiatives. In D. S. Rychen, L. H. Salganik, & M. E. McLaughlin (Eds.), Selected contributions to the 2nd DeSeCo symposium. Neuchatel, Switzerland: Swiss Federal Statistical Office.
Olivos, P., Santos, A., Martin, S., Cañas, M., Lazaro, E., & May, Y. (2016). The relationship between learning styles and motivation to transfer of learning in a vocational training programme. Suma Psicologica, 23(1), 25–32. https://doi.org/10.1016/j.sumpsi.2016.02.001
Paquette, G. (2010). Visual knowledge modeling for semantic web technologies. models and ontologies, IGI Global, USA, 2010, pp. 93–175.
Riener, C. & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning, 42(5), 32–35. https://doi.org/10.1080/00091383.2010.503139
Romanelli, F., Bird, E., & Ryan, M. (2009). Learning styles: A review of theory, application, and best practices. American Journal of Pharmaceutical Education, 2009(73), 1–5.
Rothwell, W., & Kazanas, H. (1992). Mastering the instructional design process. San Francisco, CA: Jossey-Bass.
Russo, D. (2016). Competency measurement model.
Rutherford, P. D. (1995). Competency based assessment: A guide to implementation. Pitman Publishing.
Simsek, I., Kucuk, S., Biber, S. K., & Can, T. (2021). Development of an online teaching competency scale for university instructors. Open Praxis, 13(2), 201–212. https://doi.org/10.5944/openpraxis.13.2.137
Toader, E. A., & Brad, L. (2015). Competency assessment using key performance indicators. The International Journal of Academic Research in Business and Social Sciences, 5(2015), 75–86.
Topno, H. (2012). Evaluation of training and development: An analysis of various models. IOSR Journal of Business and Management, 5, 16–22.
Truong, H. (2016). Integrating learning styles and adaptive e-learning system: Current developments, problems and opportunities. Computers in Human Behavior, 55, 1185–1193. https://doi.org/10.1016/j.chb.2015.02.014
Windsor, J. A., Diener, S., & Zoha, F. (2008). Learning style and laparoscopic experience in psychomotor skill performance using a virtual reality surgical simulator. The American Journal of Surgery, 2008(195), 837–842.
Wong, S.-C. (2020). Competency definitions, development and assessment: A brief review. International Journal of Academic Research in Progressive Education and Development, 9(3), 95–11.
Yeung, J. F. Y., Chan, A. P. C., & Chan, D. W. M. (2009). A computerized model for measuring and benchmarking the partnering performance of construction projects. Automation in Construction, 18, 1099–1113.
Zull, J. E. (2002). The Art of Changing The Brain: Enriching The Practice of Teaching by Exploring The Biology of Learning. SCHOLE: A Journal of Leisure Studies and Recreation Education, 24(1), 181.
Acknowledgements
Not applicable.
Funding
This work was supported and funded by the Research Cluster # R17075 of the Zayed University, United Arab Emirates as well as by the Ministry of Higher Education, Research, and Innovation (MoHERI) BFP/RGP/ICT/19/162.
Author information
Authors and Affiliations
Contributions
F. Outay: Investigation, Methodology, Writing – N. Jabeur: Conceptualization, AI, Writing – F. Bellalouna: AR app supervision, Revision – T. Al Hamzi: AR app development.
Corresponding authors
Ethics declarations
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Outay, F., Jabeur, N., Bellalouna, F. et al. Multi-agent system-based framework for an intelligent management of competency building. Smart Learn. Environ. 11, 41 (2024). https://doi.org/10.1186/s40561-024-00328-3