In this section we present a learning scenario model that provides the operational context for recommendation and adaptation. This model is implemented using our graphical scenario editor, G-MOT, and our compatible scenario manager and execution engine, TELOS. Essentially, the scenario model aggregates actors, the activities they perform, and the resources they use and produce during the flow of activities.
Scenario models for learning contexts
Figure 1 shows a simple scenario model. There are four activities or tasks (ovals), two actors (a professor and a student) and some resources (linked by I/P links) that are input to the activities or produced by the actor responsible for the activity (identified by an R link). Each activity is decomposed into sub-models (not shown in the figure), which describe it more precisely on one or more levels. This scenario will serve to illustrate the concepts presented in this paper.
In the first activity (Start), the student reads the general assignment for the scenario and the list of target competencies he/she is supposed to acquire. In the second activity, using the information in a document called “Planet Properties”, the student builds a table of planet properties that is validated by the professor (in a MOOC, this would be done automatically by a software agent). In the third activity, using a version of this “Validated table”, the student compares properties of planets to identify relations between them, and writes a text, “Validated relations”, to present his/her findings. In the last activity, the student is asked to order planets according to their distance from the Sun and to write his/her ideas on planets that can sustain life.
On the right side of the figure, three agents (i.e. recommenders or advisor agents) have been inserted to assist each student in executing the corresponding activity, by providing personalized advice. These agents, further explained in section “Recommendations based on competency comparison”, are also used to update each learner’s competency annotations with newly acquired competencies.
Semantic referencing of scenario components
As we have pointed out in (Paquette and Marino 2011), educational modeling languages and standards such as IMS-LD (2003) need to be improved with a structured knowledge and competency representation, in order to add useful semantic annotations to scenario components.
For semantic annotation, two main methods are generally used: annotation with concepts belonging to a taxonomy or annotation with natural language statements defining prerequisites and learning objectives.
We proposed a third approach based on competency referencing (Paquette 2007), where a competency is understood as a skill applied to a knowledge element, with a certain level of performance; the knowledge element in this definition is taken from a domain ontology.
A domain ontology needed for semantic annotation can be downloaded from ontology repositories available on the Web for various subjects; only if the teacher/designer cannot find a suitable one must he/she build it. Ontologies are shared conceptualizations of a knowledge domain and are thus largely independent from a particular teacher’s view. What is specific to a scenario are the skill and performance levels added to ontology components to define target competencies.
Unlike other approaches based on ontologies, such as OWL-OLM (Denaux et al. 2005) or Personal Reader (Dolog et al. 2004a), our model generalizes taxonomy-based annotations with OWL-DL ontology annotations and adds mastery levels (generic skills and performance levels) to ontology references.
Furthermore, stating only that a person has to “know” a concept is ambiguous: it does not say what exactly the person is able to do with the concept. For example, stating that someone “knows a certain device” may mean competencies ranging from “being able to describe its structure” to “being able to recognize its malfunction” to “being able to repair it”. It also matters greatly whether a diagnosis is to be made in a familiar or a novel situation, or with or without help; these are examples of performance indicators or criteria that add precision to the statement of a generic skill.
We thus define each competency as a triple (K, S, P), where K is a knowledge element (a class, a property or an individual from a domain ontology), S is a generic skill (a verb) from a taxonomy of skills, and P is the result of combining performance criteria values. Domain ontologies follow the W3C OWL-DL standard. The taxonomy of skills is simplified to a 10-level scale (0-PayAttention, 1-Memorize, 2-Explicitate, 3-Transpose, 4-Apply, 5-Analyze, 6-Repair, 7-Synthesize, 8-Evaluate, 9-Self-Control). The performance part combines performance criteria values into four performance levels (0.2-aware, 0.4-familiar, 0.6-productive, 0.8-expert) that are added to the skill level.
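The (K, S, P) triple and the combined mastery value can be sketched as a small data structure. This is an illustrative sketch only; the class and function names are our assumptions, not part of the TELOS implementation, and the skill scale and performance levels are taken from the definition above.

```python
from dataclasses import dataclass

# 10-level skill scale from the taxonomy above (index = skill level S)
SKILLS = ["PayAttention", "Memorize", "Explicitate", "Transpose", "Apply",
          "Analyze", "Repair", "Synthesize", "Evaluate", "Self-Control"]

# Performance levels P, added to the skill level to obtain a mastery value
PERFORMANCE = {"aware": 0.2, "familiar": 0.4, "productive": 0.6, "expert": 0.8}

@dataclass(frozen=True)
class Competency:
    knowledge: str      # K: class, property or individual from the domain ontology
    skill: int          # S: 0..9, index into SKILLS
    performance: float  # P: 0.2, 0.4, 0.6 or 0.8

    def mastery(self) -> float:
        """Combined mastery level: skill level plus performance value."""
        return self.skill + self.performance

# e.g. "Analyze" (5) the concept "Planet" at the "familiar" (0.4) level
c = Competency("Planet", SKILLS.index("Analyze"), PERFORMANCE["familiar"])
print(round(c.mastery(), 1))  # 5.4
```

Encoding mastery as a single number makes two competencies on the same knowledge element directly comparable, which is what the recommendation mechanism below relies on.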
We have developed a Competency editor inside the TELOS tool set to create and manipulate this kind of competency model. To connect competencies from a domain-specific competency model with a learning scenario, we developed another TELOS tool, called the Competency referencer. Using this tool, one can specify learners’ actual competencies, as well as the entry and target competencies of activities and resources.
For example, using a domain ontology for solar system planets such as the one in Figure 2 and a competency model based on this ontology, competencies can be associated with resources from the scenario (as shown in Figure 3). The entry and target competencies describing such a resource (“Planet Properties”) can be compared with the actual competencies of a user to verify whether his/her competency model contains all of them, some of them, or none, and to offer a recommendation, such as to skip the resource (too easy for this user) or to study a preliminary document first (the user lacks the resource’s entry competencies).
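The comparison just described can be sketched as a simple function. This is a hypothetical sketch under assumed names: it represents each competency set as a mapping from an ontology reference to a mastery level (skill plus performance, as defined earlier), and the three recommendation outcomes paraphrase the cases in the paragraph above.

```python
def recommend(user_masteries: dict, entry_competencies: dict) -> str:
    """Compare a user's actual competencies with a resource's entry
    competencies; both map a knowledge element (ontology reference)
    to a mastery level."""
    met = [k for k, level in entry_competencies.items()
           if user_masteries.get(k, 0.0) >= level]
    if len(met) == len(entry_competencies):
        # User already masters all entry competencies: resource is too easy
        return "skip resource"
    if not met:
        # User has none of the entry competencies
        return "study a preliminary document first"
    # User has some, but not all, of the entry competencies
    return "proceed with advice on the missing competencies"

user = {"Planet": 5.4, "Orbit": 2.2}        # actual competencies
entry = {"Planet": 4.4, "Orbit": 4.2}       # resource entry competencies
print(recommend(user, entry))  # proceed with advice on the missing competencies
```

A fuller treatment would also compare the user’s model against the resource’s target competencies after the activity, to decide which newly acquired competencies to record.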
Activities, resources and user competency models
Selected components of a scenario are thus referenced using comparable competencies, based on the same domain ontologies and competency model. Resources and activities in a scenario are referenced by two sets of competencies: one for prerequisite (entry) competencies and the other for target competencies (i.e. learning objectives).
In (Paquette and Marino 2011), we provided a multi-actor ontology-based assistance model supported by a user competency model. This user model is composed of three main parts (Moulet et al. 2008):
- A list of the user’s actual competencies, selected from one or more competency referentials. As mentioned above, each of the user’s competencies (C) is described by its knowledge (K), skill (S) and performance (P) components.
- Documents (texts, exam results, videos, images, applications, etc.) structured into an e-portfolio that presents evidence for the achievement of related competencies.
- The context in which a competency has been achieved. It includes the date of achievement, the activities that led to it, the link to the evidence in the e-portfolio, and the evaluator of this evidence.
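The three-part user model above can be summarized as a small data-structure sketch. All names here are our own illustrative assumptions rather than the actual TELOS schema; the fields mirror the list just given.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Evidence:
    title: str     # e-portfolio document (text, exam result, video, ...)
    location: str  # link to the document in the e-portfolio

@dataclass
class AchievementContext:
    date: str                 # date of achievement
    activities: List[str]     # activities that led to the competency
    evidence: List[Evidence]  # links to evidence in the e-portfolio
    evaluator: str            # who evaluated the evidence

@dataclass
class UserCompetency:
    knowledge: str      # K: ontology reference
    skill: int          # S: 0..9 skill level
    performance: float  # P: performance value
    context: AchievementContext

# Example: a competency achieved by building the validated table
ctx = AchievementContext(
    date="2013-05-01",
    activities=["Build table of planet properties"],
    evidence=[Evidence("Validated table", "portfolio/validated-table")],
    evaluator="professor",
)
uc = UserCompetency("Planet", skill=5, performance=0.4, context=ctx)
```

Keeping the achievement context with each competency is what allows an evaluator (or a recommender agent) to trace a claimed competency back to its supporting evidence.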
Figure 4 shows an example of a user portfolio that contains a list of competencies from two domains, instructional design and solar system planets. On the right side of the figure, evidence for the selected competency is shown, as well as any annotations or comments from an evaluator.