  • Research
  • Open Access

Utilizing a smart cognitive support system for K-8 education

Smart Learning Environments 2018, 5:17

https://doi.org/10.1186/s40561-018-0066-x

  • Received: 1 June 2018
  • Accepted: 7 September 2018
  • Published:

Abstract

This research aims to determine how the cognitive apprenticeship framework can be utilized in designing and implementing an ontology-based cognitive support system (OBCSS). It also aims to evaluate the use of OBCSS in educational settings, to determine how much mental effort students need to use it, and to determine whether the use of OBCSS decreases disorientation and supports navigation, as stated in the literature. Scales were used to obtain perceived behavioral data from the participants; this data was used to investigate the experiences of users of different genders and levels of computer experience. A personal information form, a mental effort evaluation scale, and a perceived disorientation scale were used to collect data, and log data was collected from learners' interaction with the system. The sample of this study consists of fifth and sixth grade students at three state schools and one private school in Central Turkey. In the first phase, data was collected from 83 students and used to establish the reliability and validity of the disorientation scale. In the second phase, data was collected from 89 students and used for the analysis of OBCSS. The system was evaluated by collecting data from real-time users. According to the results of the statistical analyses, learners' mental effort did not differ across gender, daily average computer and internet use, or frequency of internet use for studying. Similarly, perceived disorientation and perceived ease of use did not differ across these variables. According to the log data, OBCSS can provide different personalized learning paths based on learners' current knowledge.

Keywords

  • Cognitive skill
  • Ontology
  • Personalization
  • Navigation support
  • Mental effort
  • Disorientation
  • Ease of use
  • Cognitive apprenticeship
  • Learning object

Introduction

As data on the web is constantly growing, it is a challenge for instructors and learners to find resources appropriate to their needs, tasks, or goals. Even when a resource is found, it is quite a challenge to determine whether it actually meets the requirements of teachers or students. These problems have been tackled with semantic web technologies in the teaching and learning domain through the use of ontologies in educational environments (Aroyo & Dicheva, 2004; Cristea, 2004).

In the early 2000s, some researchers claimed that using ontologies in artificial intelligence and education would help solve the problems faced in the search process (Mizoguchi & Bourdeau, 2000). Later, researchers explained the use of ontologies in educational environments and how ontologies could be utilized in the teaching and learning process (Stojanovic et al., 2001; Brase & Nejdl, 2004; Lytras et al., 2003; Aşkar et al., 2007; Monachesi et al., 2008). Accordingly, the use of ontologies in educational environments can be classified into three groups:
  • Content access (Aroyo et al., 2002; Mitrovic and Devedzic, 2004; Lemnitzer et al., 2008; Lama et al., 2012),

  • Content creation (Simon et al., 2004; Castello and Gauthier, 2006; Boyce and Pahl, 2007; Oprea, 2011; Manganello et al., 2013)

  • Personalization (Henze et al., 2004; Karampiperis and Sampson, 2005; Fok, 2006; Vargas-Vera and Lytras, 2008; Chen et al., 2011)

Although research indicates that ontologies contribute to the teaching and learning process, some deficiencies remain to be addressed. First, the contributions of ontologies to this process have not been evaluated with experimental research in K-12 schools. Second, learners' cognitive processes have scarcely been considered in the evaluation and use of ontologies. Considering learners' cognitive profiles and extending the research to K-12 educational settings will contribute to the use of ontologies in the teaching and learning process. Moreover, how ontologies could guide learners in their cognitive skill development remains unexplored. Providing an ontology-based teaching model and testing it in a K-12 setting is the contribution of this study to the existing literature.

Therefore, the purpose of this research is to propose an ontology-based teaching framework to be used in educational settings. To this end, this research aims to determine how the cognitive apprenticeship framework can be utilized in designing and implementing an Ontology Based Cognitive Support System (OBCSS). It also aims to evaluate the use of OBCSS in educational settings, to determine how much mental effort students need to use it, and to determine whether the use of OBCSS decreases disorientation and supports navigation, as stated in the literature. More specifically, this research sought answers to the following questions:
  1. How can OBCSS be designed and developed to be used in cognitive skill instruction?

  2. What kind of a learning path do learners build by using OBCSS?

  3. By using OBCSS, do ease of use and perceived disorientation differ across gender and computer usage?

Theoretical framework

Cognitive apprenticeship model

Cognitive apprenticeship is a teaching model that builds on traditional apprenticeship but also integrates elements of school education (Brown et al., 1989). The purpose of cognitive apprenticeship is to make the thinking process of a learning activity visible, from either the student's or the teacher's perspective (Dennen, 2004). Cognitive apprenticeship integrates the professional development of learners and supports them in constructing their own thinking processes.

According to Ghefaili (2003), the cognitive apprenticeship model is based on the following models and theories:
  • Socio-cultural Theory of Learning

  • Vygotsky’s Zone of Proximal Development

  • Situated Cognition

  • Traditional apprenticeship

Learning environments that use the cognitive apprenticeship model should have the following items and functions (Brown et al., 1989; Collins et al., 1991):
  1. Content: Indicates the type of information that is necessary for expertise.
     a. Domain Knowledge: Consists of concepts, procedures, and facts related to a specific subject matter. Sources for this type of information are textbooks or lecture notes.
     b. Heuristic Strategies: Consist of the techniques and approaches necessary to complete a task. Experts acquire these strategies by solving problems.
     c. Control Strategies: Control the process of completing a task. These strategies are related to managing difficulties that are met during the process.
     d. Learning Strategies: Used to learn domain knowledge, heuristic strategies, and control strategies. General strategies to learn a new domain or specific strategies to learn complex tasks are examples of these strategies.

  2. Method: Represents the methodologies to foster the development of expertise.
     a. Modelling: As the expert performs a task, the learner observes the process of completing the task and creates a conceptual model of it.
     b. Coaching: While the learner performs a task, he/she is supported with clues and reminders in order to increase his/her performance.
     c. Scaffolding: The teacher helps the student during a task.
     d. Articulation: The teacher encourages the student to articulate his/her ideas and knowledge related to the subject matter.
     e. Reflection: Students compare their problem-solving process to that of an expert or of their peers.
     f. Exploration: The teacher guides students to solve problems by themselves.

  3. Sequencing: Represents the sequencing of learning activities.
     a. Increasing Complexity: Tasks are ordered so that they require more skills and more concepts at every step.
     b. Increasing Diversity: Tasks are ordered so that they require a greater diversity of strategies and skills.
     c. Global before Local Skills: Before parts of the task are completed, a general conceptual model of the task is provided.

  4. Sociology: Related to the social attributes of the learning environment.
     a. Situated Learning: The environment in which the student performs tasks or solves problems should be identical to the real-world environment for those tasks and/or problems.
     b. Community of Practice: Students should communicate with each other constantly in the learning environment.
     c. Intrinsic Motivation: Students should construct their own personal goals to achieve skills and solutions.
     d. Exploiting Cooperation: Students should study together to support their cooperative problem-solving skills.

In the cognitive apprenticeship literature, this model has been successfully used in e-learning environments since the 1990s (Murray et al., 2003; Liu, 2005; Dickey, 2008; Lee, 2011; Chandra and Watters, 2012; Khousa et al., 2015; Tawfik et al., 2018). San Chee (1995) used cognitive apprenticeship for teaching the SMALLTALK programming language and stated that the application was described as user friendly and supportive by students. In a more recent study, Kuo et al. (2012) created a web-based environment to investigate whether the cognitive apprenticeship model improved cooperative learning skills, and indicated that cooperative learning with the cognitive apprenticeship model was very useful for field-dependent students. In another study, Tsai et al. (2012) created a cognitive apprenticeship based system to improve the cognitive skill of argumentation. The results of that study show that students' argumentation performance improved through using this web-based cognitive apprenticeship system.

Adaptive hypermedia systems

An adaptive hypermedia system maintains a user model and utilizes that model to adapt the system to the user (Brusilovsky, 1996). In an adaptive hypermedia environment, students are expected to access information as quickly as possible without cognitive overload. Brusilovsky (1996) stated that adaptive learning systems must meet three criteria to provide this functionality:

  • The adaptive learning system shall be a hypertext or hypermedia system.

  • The system shall have a user model.

  • The system shall adapt content according to the user model that it contains.

Figure 1 shows the classic structure of an adaptive learning system.
Fig. 1

The “user modeling – adaptation” loop in adaptive systems (Brusilovsky, 1996)

Adaptation is constructed in two phases. In the first phase, a user model is created: as the user interacts with the system, user information is gathered and stored in the user model, and adaptation is made according to this information. As the user continues to interact with the system, the user model is updated accordingly. Adaptation can be driven by information about users' goals, preferences, or needs (Martins et al., 2008; Ouf, Abd Ellatif, Salama, & Helmy, 2017; Wan & Niu, 2018).
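The "user modeling – adaptation" loop described above can be sketched in a few lines. This is an illustrative sketch only (OBCSS itself is implemented in Java, as described later); the class name, the update rule, and the concept names below are hypothetical, not taken from the paper.

```python
# Minimal sketch of the "user modeling - adaptation" loop: each interaction
# updates the user model, and adaptation is derived from the model's state.

class UserModel:
    def __init__(self):
        self.knowledge = {}  # concept -> estimated mastery in [0, 1]

    def record_event(self, concept, correct):
        """Update the model from one observed interaction."""
        old = self.knowledge.get(concept, 0.5)
        # simple exponential update toward the observed outcome
        self.knowledge[concept] = 0.7 * old + 0.3 * (1.0 if correct else 0.0)

def adapt(user_model, concepts):
    """Adaptation step: order concepts so the least-known comes first."""
    return sorted(concepts, key=lambda c: user_model.knowledge.get(c, 0.5))

model = UserModel()
model.record_event("fractions", correct=False)
model.record_event("ratios", correct=True)
study_order = adapt(model, ["fractions", "ratios", "percentages"])
print(study_order)  # the weakest concept is suggested first
```

As the loop in Fig. 1 suggests, every pass through `record_event` changes what `adapt` returns on the next pass.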

In the second phase, adaptation is realized: content/presentation and/or navigation is adapted (Brusilovsky, 1996). Content/presentation can be adapted in five ways (Brusilovsky, 1996):
  • Additional explanations: Explanations are given to the user according to his/her level of knowledge. The system hides information that is too complicated or too simple for the user.

  • Prerequisite explanations: The system gives prior information about a concept before presenting the main information.

  • Comparative explanations: The system emphasizes concepts related and/or unrelated to the concept that will be taught.

  • Explanation variants: Explanations are varied according to the user model by detailing page or section content.

  • Sorting: The information presented to the user is sorted according to the user's history or level of knowledge.

Navigation can be adapted in four ways (Brusilovsky, 1996; De Bra and Calvi, 1998):
  • Direct guidance: The presentation that should follow the current one is linked with a “Next” button.

  • Sorting: As in content adaptation, links are sorted based on the user's level of knowledge.

  • Hiding: Links that are not suitable for the user are hidden on the current page.

  • Map adaptation: Content that has already been studied by the user is marked graphically among the others.
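As an illustration of two of these navigation adaptation techniques (sorting and hiding), the sketch below filters and orders links against a user's knowledge level. The link structure and the 1–5 level scale are assumptions made for the example, not part of Brusilovsky's model.

```python
# Illustrative sketch of link hiding and link sorting for adaptive navigation.
# links: list of (title, difficulty) pairs; knowledge_level: 1..5 (assumed).

def adapt_links(links, knowledge_level):
    # Hiding: drop links more than one level above the user's knowledge
    visible = [l for l in links if l[1] <= knowledge_level + 1]
    # Sorting: links closest to the user's level come first
    return sorted(visible, key=lambda l: abs(l[1] - knowledge_level))

links = [("Advanced proofs", 5), ("Basic sets", 1), ("Functions", 2)]
print(adapt_links(links, knowledge_level=1))  # "Advanced proofs" is hidden
```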

AHAM reference model

In this research, OBCSS is based on the AHAM (Adaptive Hypermedia Application Model) reference model, one of the most widely accepted models in the literature (Knutov et al., 2009; Papadimitriou, 2017). The AHAM reference model emerged as a base model in adaptive hypermedia design (De Bra et al., 1999). In this model, the basic components of adaptive hypermedia are defined as the domain model, the user model, and the adaptation model (Fig. 2).
Fig. 2

The AHAM reference model (De Bra et al., 1999)

In the AHAM reference model, the Runtime Layer manages the system's working pages and the interaction with the user, so that the user's progress can be monitored. The Presentation Specifications layer updates the interface model according to the user's status. The Within-Component layer keeps the materials that will be presented to the user, and it uses the Anchoring layer to communicate with the Storage layer.

In AHAM, it is important to use users' actions to update the model, so that the adaptation and presentation of the content can be made appropriate to user needs. For this purpose, adaptation rules are used; these rules also use the information stored in the user model to find the relevant user-model updates. The domain model cooperates with the user model to specify the user's knowledge about the current subject matter and to perform the adaptation of the content. Here, pedagogical rules support the content adaptation.

Mental effort

Salomon (1984) defined mental effort as the number of non-automatic elaborations needed to solve a problem or to learn a topic. By this definition, people can easily complete familiar tasks with little mental effort, while they use more mental effort for new or complicated tasks (Clark et al., 2006; Kalyuga, 2009).

In some studies, perceived total cognitive load is measured with a Likert-scale tool, which was simply named cognitive load (Van Merriënboer et al., 2006). It is assumed that mental effort reflects total cognitive load, as it refers to cognitive capacity as a dimension of cognitive load (Paas et al., 2003). Mental effort is measured with self-report scoring tools and physiological measures (such as heart rate, brain activity, and eye activity – blink rate, pupil dilation), but self-report tools are most commonly used because they are easier to administer (Paas et al., 2003; Van Merriënboer et al., 2006; De Jong, 2010; Lin & Kao, 2018). Eye tracking (Wang, Tsai, & Tsai, 2016) and self-report tools are used because it is assumed that people can observe their own cognitive processes and score their mental effort (Paas et al., 2003).

Paas et al. (2005) investigated the relation between mental effort and learner motivation and developed a new methodology to calculate and visualize the effect of teaching conditions on learner motivation. As a result, the learner group with higher prior knowledge obtained higher involvement scores (higher performance and lower mental effort). In another study, Poehnl and Bogner (2013) used “alternative conceptions” and focused on the effect of mental effort on the conceptual change process. The researchers observed no gain in learning in the group taught with the alternative conceptions method.

Disorientation

Disorientation is defined as the feeling of being lost during the navigation process in hypertext environments (Cangöz and Altun, 2012; Bhatti, Ismaili, & Dhomeja, 2017). According to Conklin (1987), the source of disorientation is a design problem that occurs during the creation of paths in a hypertext environment. People feel disoriented while navigating if they do not know where to go next; if they know where to go next but do not know how to get there; or if they cannot make any connection between their current location and the overall document structure (Edwards and Hardman, 1989). As a result, people feel frustrated and lose interest in keeping up with the content in the environment (McDonald and Stevenson, 1998).

Most disorientation research states that users experience such disorientation in web-based environments, and further suggests that users employ several strategies to prevent themselves from getting lost, among which are using linear navigation (McDonald and Stevenson, 1996) and utilizing navigational support such as bookmarks, history lists, or maps (Dias et al., 1999). In addition, content designers are advised to employ paged navigation (Lee, 2005); navigational support based on users' prior knowledge (Amadieu et al., 2009); and content suggestions utilizing users' computer experience (Shih et al., 2012).

Ease of use

Ease of use is the perception that an innovation can be used without excessive effort (Davis, 1989; Rogers, 2003). As a construct, ease of use is an important factor in making users adopt an application (Davis, 1989) and is commonly used in evaluating the designs of web-based teaching/learning environments (Revythi & Tselios, 2017). Liaw (2008) stated that satisfaction and perceived ease of use have a positive effect on the use of e-learning environments. Sun et al. (2008) pointed out that perceived ease of use is a critical factor that affects satisfaction. Ali et al. (2013) empirically validated their adoption model and stated that ease of use and perceived usefulness have a significant effect on adoption.

Methodology

This section is divided into two parts. In the first part, the design process of OBCSS is explained. In the second part, the research model, participants, instruments, and statistical analyses used in the evaluation of OBCSS effectiveness are described.

Design of ontology based cognitive support system

The design of OBCSS is based on CogSkillNet, a cognitive skill ontology developed by Aşkar and Altun (2009). CogSkillNet is a domain ontology that includes the cognitive skills in the K-12 curriculum. CogSkillNet also aims to increase the use of learning objects by representing cognitive skills in an ontology based on the relationship between cognitive skills and learning objects (Aşkar and Altun, 2009). CogSkillNet classes and instances are given in Fig. 3.
Fig. 3

CogSkillNet classes and instances (Aşkar and Altun, 2009)

Design of adaptive hypermedia

OBCSS is designed according to AHAM: it has a domain model, a user model, an adaptation model, a presentation specifications layer, and a within-component layer. The domain model is based on the CogSkillNet cognitive skill ontology (Aşkar and Altun, 2009). Cognitive skills, the relations among skills, and the relationships between learning objects and cognitive skills are defined in the ontology, so these relationships can be queried within CogSkillNet. As a result of such a query, the cognitive skills to be presented to learners are determined, and the learning objects related to these skills are presented to the user.
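OBCSS runs such queries with SPARQL over CogSkillNet (via Jena, as described later). As a language-neutral illustration, the same idea can be sketched over a plain set of subject-predicate-object triples; the skill names, predicate names, and learning-object IDs below are invented for the example, not taken from CogSkillNet.

```python
# Toy triple store standing in for an ontology query over skill relations
# and skill-to-learning-object links. All names here are illustrative.

triples = {
    ("to_classify", "prerequisiteOf", "to_compare"),
    ("to_compare", "prerequisiteOf", "to_analyze"),
    ("lo_17", "teaches", "to_compare"),
    ("lo_23", "teaches", "to_analyze"),
}

def related_skills(skill):
    """Skills directly linked to `skill` in either direction."""
    out = set()
    for s, p, o in triples:
        if p == "prerequisiteOf":
            if s == skill:
                out.add(o)
            if o == skill:
                out.add(s)
    return out

def learning_objects_for(skill):
    """Learning objects attached to a skill, as OBCSS does via the ontology."""
    return sorted(s for s, p, o in triples if p == "teaches" and o == skill)

print(related_skills("to_compare"))        # both neighbour skills
print(learning_objects_for("to_compare"))  # the attached learning object(s)
```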

The user model in OBCSS is referred to as the learner model and is based on the learner model suggested by Kaya and Altun (2011). According to this model, in order to determine learners' prior knowledge, OBCSS first applies a pre-test to users. According to the pre-test results, the learner model is updated, and the rest of the interaction between the system and the learner is added to the learner model.

The adaptation model, as the name suggests, adapts the content for learners based on the pre-test results, using the content adaptation techniques suggested by Brusilovsky (1996). The system checks for wrong answers in the pre-test and suggests related content for learners to study according to those wrong answers. The adaptation model also uses all the techniques presented in the section “Adaptive Hypermedia Systems”.
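A minimal sketch of this adaptation step follows. The question-to-skill and skill-relation mappings are invented for illustration; in OBCSS the real mappings live in the pre-test items and in CogSkillNet.

```python
# Sketch: wrong pre-test answers are mapped to cognitive skills, and the
# mandatory study list additionally pulls in ontologically related skills.

question_skill = {1: "to_classify", 2: "to_compare", 3: "to_order"}
related = {"to_compare": ["to_contrast"], "to_order": ["to_sequence"]}

def mandatory_study(wrong_questions):
    """Build the ordered study list from the questions answered incorrectly."""
    skills = []
    for q in wrong_questions:
        skill = question_skill[q]
        if skill not in skills:
            skills.append(skill)
        for r in related.get(skill, []):  # ontologically related skills
            if r not in skills:
                skills.append(r)
    return skills

print(mandatory_study([2, 3]))
```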

Modelling OBCSS with cognitive apprenticeship framework

Previous research indicated that the cognitive apprenticeship model can be used as a framework in web-based learning environments (Brown et al., 1989; Dennen, 2004; Wang and Bonk, 2001). OBCSS utilizes the cognitive apprenticeship model as a framework and applies the related components of this framework. The elements used in OBCSS are summarized in Table 1.
Table 1

Use of cognitive apprenticeship items

Cognitive Apprenticeship Framework Item / Representation in OBCSS

Content
  • Domain knowledge is modelled with the concepts and the relations between concepts defined in CogSkillNet.
  • Content is defined by learning objects and their relation to cognitive skills in CogSkillNet.
  • Heuristic strategies are used to find cognitive skills that are related to other cognitive skills in CogSkillNet.
  • Control strategies are used to check learners' errors and make them practice to fix these errors.

Method
  • The modelling technique is used to make learners practice cognitive skills by using learning objects that have an application presentation strategy.
  • Coaching and scaffolding techniques are used to form learners' thinking processes by making them study the cognitive skills for which they gave wrong answers in the pre-test. Learning objects that have a lecturing presentation strategy are used for this process.
  • Reflection and articulation techniques are used to let learners comment on anything about the environment or the learning process during their study.

Sequencing
  • The sequencing technique is used to order cognitive skills in the mandatory study or practice phase by increasing complexity and increasing diversity.

Design of OBCSS software

The OBCSS software design started with defining use cases and modules. A total of six modules were designed for OBCSS: the authentication module checks whether a user is registered; the learner module is responsible for keeping and updating the learner model; the adaptation module adapts content after the pre-test and practice phases for each user; the navigation module is responsible for displaying modules and pages to users after login; the database module is an interface between all modules and the database; and the ontology module is the interface between CogSkillNet and all modules.

The OBCSS software is designed using UML (Unified Modeling Language) (UML, 2013). UML activity diagrams and sequence diagrams are used to represent the interaction between modules and the workflow. The Java programming language and the Eclipse software development environment were chosen for software development. OBCSS is designed as a web-based application to serve multiple users at once (Fig. 4). The Vaadin Java framework is used for development, and Apache Tomcat 7.0 is used as the web server.
Fig. 4

General OBCSS Design

The Jena semantic web framework is used to run SPARQL queries on CogSkillNet, and a MySQL (MySQL, 2013) database server is used for the OBCSS database.

The first version of CogSkillNet was developed using OWL 1. During the development of OBCSS, the CogSkillNet ontology was upgraded to OWL 2 to obtain missing functionality such as property chains, extended data types, and reflexive properties.
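A property chain lets an OWL 2 ontology infer a new relation from the composition of two others; for example, if skill A has sub-skill B and B is related to C, then A can be inferred to be related to C. A hand-rolled version of that inference, with illustrative skill and property names (the real axioms live in CogSkillNet and are evaluated by the reasoner, not by code like this):

```python
# Toy property-chain inference: (a, p1, b) and (b, p2, c) imply (a, implied, c).

def apply_chain(triples, p1, p2, implied):
    inferred = set(triples)
    for a, p, b in triples:
        if p != p1:
            continue
        for b2, q, c in triples:
            if q == p2 and b2 == b:
                inferred.add((a, implied, c))
    return inferred

triples = {("analysis", "hasSubSkill", "comparing"),
           ("comparing", "relatedTo", "classifying")}
out = apply_chain(triples, "hasSubSkill", "relatedTo", "relatedTo")
print(("analysis", "relatedTo", "classifying") in out)  # True
```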

Evaluation of ontology based cognitive support system effectiveness

Design

This research is designed as causal-comparative research. Causal-comparative research aims to investigate the cause and effect of differences between groups, or to define the causes of a situation or the factors that affect it (Johnson and Christensen, 2008). Causal-comparative research tries to define cause-and-effect relations; from this point of view, it resembles experimental research, but unlike experimental research, researchers do not intervene in the situation under investigation.

Participants and data collection process

The participants in this study were 89 middle school students, voluntarily invited from three state schools and one private school in a metropolitan city in central Anatolia. Once consent forms were obtained, 83 middle school students were asked to join the first phase of the research, which was designed to establish the validity and reliability of the perceived disorientation scale. In the second phase, 89 students were invited to use the system in their mathematics, biology, and Turkish lessons. All of the participants agreed to answer the surveys and all of them completed them, for an effective response rate of 100%. The content presented with this tool mainly provided instruction on cognitive skills, so that the students could explore each cognitive skill and apply it in their courses. This part of the data collection process was completed in 2 weeks, and the data obtained from OBCSS use was analyzed to explore the research questions.

Instruments

In this research, four data collection tools were used: a demographic information form, a mental effort evaluation scale, a perceived disorientation scale, and user logs saved by OBCSS. The demographic information form was used to obtain data such as age and frequency of computer use. The Mental Effort Evaluation Scale was developed by Zijlstra (1993) and adapted to Turkish by Çevik (2012); it is a nine-point scale on which 1 stands for “extreme mental effort” and 9 stands for “no mental effort” when completing the task. The perceived disorientation scale was developed by Ahuja and Webster (2001) and adapted to Turkish by Cangöz and Altun (2012); it is a five-point Likert scale from “Strongly agree” to “Strongly disagree”. There are 10 statements in the scale: the first seven measure disorientation and the last three measure ease of use. User logs included navigation patterns (visited learning objects, related cognitive skills, etc.) as well as screen behavior data (i.e., time spent on each page, visits, revisits, etc.).
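Given the item split described above (items 1-7 for disorientation, items 8-10 for ease of use, each on a 5-point Likert scale), subscale scores can be computed as item means. The example responses, and the assumption that no items are reverse-coded, are illustrative only; the paper does not report its exact scoring procedure.

```python
# Subscale scoring sketch for the 10-item perceived disorientation scale.

def score_scale(answers):
    """answers: list of 10 Likert responses (1-5); returns the two subscales."""
    assert len(answers) == 10 and all(1 <= a <= 5 for a in answers)
    disorientation = sum(answers[:7]) / 7  # items 1-7
    ease_of_use = sum(answers[7:]) / 3     # items 8-10
    return disorientation, ease_of_use

d, e = score_scale([2, 3, 2, 2, 3, 2, 2, 4, 4, 4])
print(round(d, 2), round(e, 2))  # 2.29 4.0
```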

Analysis

Data was analyzed in two phases. First, analyses were run on the scale data, using t-tests and ANOVA (Analysis of Variance) for gender, daily average computer use, and daily average internet use. Second, the log data was evaluated by content analysis based on students' comments.

The perceived disorientation scale was originally developed for graduate students. Therefore, a confirmatory factor analysis was applied to validate the tool for use with secondary school students. LISREL 8.7 was used, and the fit indexes were: [χ2 (34, N = 83) = 37.57, p = .22, RMSEA = 0.046, S-RMR = 0.062, GFI = 0.92, AGFI = 0.86, CFI = 0.98, NNFI = 0.97, IFI = 0.98]. These indexes show that the data has acceptable to excellent fit. To assess the consistency of answers to the scale, reliability was checked with SPSS 17.0; Cronbach's alpha was found to be 0.86 for disorientation and 0.87 for ease of use, showing that the scale is highly reliable. Lastly, log data was collected through the students' interaction with OBCSS.
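The Cronbach's alpha values reported above follow the standard formula alpha = k/(k-1) * (1 - sum(item variances)/variance(totals)). A small self-contained computation of that formula; the response data is made up for demonstration, not the study's data.

```python
# Cronbach's alpha from per-item response lists (population variances).

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[2, 3, 4, 5], [2, 3, 4, 4], [1, 3, 5, 5]]
print(round(cronbach_alpha(items), 2))  # 0.95
```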

Findings

Implementation of ontology based cognitive support system

OBCSS consists of six modules: the authentication module, learner module, adaptation module, navigation module, database module, and ontology module. The algorithm and the flowchart representing the interactions between these modules are presented as follows (Fig. 5):
  1. First, the student logs in using a user name and password.
     a. If the user name and/or password is wrong, return to step 1.
  2. The navigation module decides which page will be displayed to the user.
  3. If the pre-test has not been applied, the adaptation module displays the pre-test.
  4. If the pre-test has been applied and mandatory study is not completed, the cognitive skills related to wrong answers are queried from CogSkillNet and mandatory study is prepared.
  5. If the pre-test and mandatory study are completed, practice is applied.
  6. If practice is completed, the cognitive skills in CogSkillNet are queried and independent study is prepared.
  7. All user interaction data is recorded to the database.
  8. The learner module serves as an interface for database and ontology related queries.
Fig. 5

Interaction of OBCSS Modules
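The numbered flow above amounts to a single page-dispatch decision. A sketch of that decision follows; the state flags and page names are illustrative, not the actual OBCSS Java code.

```python
# Page dispatch mirroring the numbered flow: login -> pre-test ->
# mandatory study -> practice -> independent study.

def next_page(state):
    if not state.get("authenticated"):
        return "login"
    if not state.get("pretest_done"):
        return "pretest"
    if not state.get("mandatory_done"):
        return "mandatory_study"  # skills for wrong answers, via CogSkillNet
    if not state.get("practice_done"):
        return "practice"
    return "independent_study"

state = {"authenticated": True, "pretest_done": True}
print(next_page(state))  # mandatory_study
```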

As the OBCSS language is Turkish, the following screenshots (Figs. 6, 7, 8 and 9) are provided in the original language of the system, with translations. A pre-test question is shown in Fig. 6. As seen in this screenshot, students are asked about questioning as a cognitive skill. The question concerns a study to understand blood circulation in the human body, with research questions such as: what is blood circulation, where does it start, where does it end, and how is blood purified in the body? Students are asked to identify the cognitive skills related to these questions. The button at the top left is used to add comments, and the button at the top right is used to log out. The buttons at the bottom right are used to navigate to the previous and next questions.
Fig. 6

Pre-test question related to query skill

After the pre-test is administered, the results are displayed to the user (Fig. 7). In this screenshot, wrong answers are listed along with the related cognitive skills. From here, OBCSS proposes that students study both the cognitive skills answered incorrectly and other skills that are ontologically related to them.
Fig. 7

Pre-test results

A practice item is presented in Fig. 8. In this figure, a practice learning object related to the skill “to divide” is presented to students; in this learning object, the whole-piece relation is presented. The title of this page is “Task 2”, and the button is used to navigate to the next task.
Fig. 8

Practice learning object related to divide skill

The independent study interface is shown in Fig. 9. The title of this page is “Study at your own pace”. The system lists all cognitive skills to students: on the left side of the page, cognitive skills and sub-skills are presented; on the right side, the learning object related to the selected skill is presented. In this screenshot, the skill “to separate” is selected, and the learning object content is about this skill and the ontologically related ones.
Fig. 9

Free study interface

The navigation process is represented in Fig. 10.
Fig. 10

Whole navigation process of learner in OBCSS

Testing of ontology based cognitive support system

The OBCSS software was verified through unit tests (Patton, 2005; Myers et al., 2011), an acceptance test (Hsia et al., 1997; Myers et al., 2011), and a beta test (Patton, 2005). Unit tests were created and run during the development process. After development finished, the acceptance test was run, and the beta test of OBCSS was run with eight sixth grade students of a state school in Turkey.

Data analysis and interpretation

In this section, t-test and ANOVA results are listed for mental effort, disorientation, and ease of use across the variables “gender”, “daily average computer use”, “daily average internet use”, and “frequency of studying using the internet”. Daily average internet use is a student's internet usage for general purposes (e.g., online games, online video, social media, online music, chat, etc.), while frequency of “studying using the internet” refers to a student's internet usage for study purposes only.

Mental effort, perceived disorientation, and gender

To investigate the effect of gender on mental effort and perceived disorientation, t-tests were applied (Tables 2 and 3, respectively). To avoid Type I error, a Bonferroni correction was made.
Table 2

T-test results for the effect of gender on mental effort

                 Gender   N    Mean   Standard Deviation   t       p
Mental Effort    Girl     39   4.79   2.45                 0.651   0.517
                 Boy      50   4.42   2.87

Table 3 T test results for the effect of gender on perceived disorientation and ease of use

                           Gender   N    Mean   SD     t        p
Perceived disorientation   Girl     39   2.59   0.83   1.625    0.108
                           Boy      50   2.27   0.96
Ease of use                Girl     39   3.47   0.94   −1.120   0.266
                           Boy      50   3.71   1.07

According to the results presented in Tables 2 and 3, no significant difference between genders was found in mental effort (t = 0.651, p > 0.05), perceived disorientation (t = 1.625, p > 0.05), or ease of use (t = −1.120, p > 0.05) in the use of OBCSS.
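The t values above can be reproduced from the reported summary statistics. A minimal Python sketch (illustrative only, not part of OBCSS) of the pooled-variance independent-samples t statistic, assuming the Bonferroni correction spans the three outcome measures:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t statistic with pooled variance."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Bonferroni-adjusted alpha for three comparisons
# (mental effort, disorientation, ease of use) -- an assumption here.
alpha = 0.05 / 3

# Mental effort by gender, summary statistics from Table 2.
t = pooled_t(4.79, 2.45, 39, 4.42, 2.87, 50)
print(round(t, 2))  # close to the reported t = 0.651; gaps are rounding of the inputs
```

The small difference from the reported value comes from the means and SDs being rounded to two decimals in the table.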

Mental effort and daily average computer use

To investigate the effect of daily average computer use on mental effort, an ANOVA was applied (Table 4).
Table 4 ANOVA results for the effect of daily average computer use on mental effort

Daily average computer use   N    Mean   SD     F       p
Less than 1 h                43   4.56   2.49   4.547   0.424
Between 1 and 3 h            30   4.10   2.61
Between 3 and 5 h            9    3.78   2.68
More than 5 h                7    7.86   2.26
Total                        89   4.58   2.68

According to the results in Table 4, no significant difference in mental effort was found between daily average computer use groups, F(3, 85) = 4.547, p = 0.424.

Mental effort and daily average internet use

To investigate the effect of daily average internet use on mental effort, an ANOVA was applied (Table 5).
Table 5 ANOVA results for the effect of daily average internet use on mental effort

Daily average internet use   N    Mean   SD     F       p
Less than 1 h                48   4.19   2.39   2.227   0.091
Between 1 and 3 h            25   4.52   2.74
Between 3 and 5 h            9    5.11   3.58
More than 5 h                7    6.86   2.47
Total                        89   4.58   2.68

According to the results in Table 5, no significant difference in mental effort was found between daily average internet use groups, F(3, 85) = 2.227, p = 0.091.
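The group summaries reported in these tables are sufficient to reproduce the F statistics. A minimal Python sketch (illustrative only, not part of OBCSS) computing a one-way ANOVA F value from (N, mean, SD) triples, shown here on the Table 5 data:

```python
def one_way_f(groups):
    """One-way ANOVA F statistic from (n, mean, sd) summaries per group."""
    n_total = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)
    df_between = len(groups) - 1          # 3 for four groups
    df_within = n_total - len(groups)     # 85 for N = 89
    return (ss_between / df_between) / (ss_within / df_within)

# Daily average internet use vs. mental effort (Table 5 summaries).
f_stat = one_way_f([(48, 4.19, 2.39), (25, 4.52, 2.74),
                    (9, 5.11, 3.58), (7, 6.86, 2.47)])
print(round(f_stat, 2))  # close to the reported F(3, 85) = 2.227
```

The same function reproduces the F values of the other ANOVA tables from their summaries; computing the exact p value additionally requires the F distribution with the matching degrees of freedom.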

Mental effort and frequency of studying using internet

To investigate the effect of frequency of studying using internet on mental effort, an ANOVA was applied (Table 6).
Table 6 ANOVA results for the effect of frequency of studying using internet on mental effort

Frequency of studying using internet   N    Mean   SD     F       p
Once a week                            22   4.91   2.77   1.229   0.305
Once in 5 days                         6    2.50   2.07
Once in 3 days                         17   4.88   2.47
Once in 2 days                         23   4.26   2.37
Every day                              21   4.95   3.12
Total                                  89   4.58   2.68

According to the results in Table 6, no significant difference in mental effort was found between frequency of studying using internet groups, F(4, 84) = 1.229, p = 0.305.

Perceived disorientation and daily average computer use

To investigate the effect of daily average computer use on perceived disorientation, an ANOVA was applied (Table 7).
Table 7 ANOVA results for the effect of daily average computer use on perceived disorientation and ease of use

                           Daily average computer use   N    Mean   SD     F       p
Perceived disorientation   Less than 1 h                43   2.45   0.96   1.261   0.293
                           Between 1 and 3 h            30   2.50   0.92
                           Between 3 and 5 h            9    1.85   0.84
                           More than 5 h                7    2.46   0.52
                           Total                        89   2.41   0.91
Ease of use                Less than 1 h                43   3.53   1.05   0.316   0.814
                           Between 1 and 3 h            30   3.60   0.85
                           Between 3 and 5 h            9    3.74   1.58
                           More than 5 h                7    3.90   0.59
                           Total                        89   3.60   1.01

According to the results in Table 7, no significant difference was found between daily average computer use groups in perceived disorientation (F(3, 85) = 1.261, p = 0.293) or ease of use (F(3, 85) = 0.316, p = 0.814).

Perceived disorientation and daily average internet use

To investigate the effect of daily average internet use on perceived disorientation, an ANOVA was applied (Table 8).
Table 8 ANOVA results for the effect of daily average internet use on perceived disorientation and ease of use

                           Daily average internet use   N    Mean   SD     F       p
Perceived disorientation   Less than 1 h                48   2.46   0.89   0.848   0.471
                           Between 1 and 3 h            25   2.48   1.02
                           Between 3 and 5 h            9    1.95   0.90
                           More than 5 h                7    2.42   0.67
                           Total                        89   2.41   0.09
Ease of use                Less than 1 h                48   3.70   0.87   1.952   0.127
                           Between 1 and 3 h            25   3.29   1.07
                           Between 3 and 5 h            9    4.14   1.24
                           More than 5 h                7    3.38   1.22
                           Total                        89   3.60   1.01

According to the results in Table 8, no significant difference was found between daily average internet use groups in perceived disorientation (F(3, 85) = 0.848, p = 0.471) or ease of use (F(3, 85) = 1.952, p = 0.127).

Perceived disorientation and frequency of studying using internet

To investigate the effect of frequency of studying using internet on perceived disorientation, an ANOVA was applied (Table 9).
Table 9 ANOVA results for the effect of frequency of studying using internet on perceived disorientation and ease of use

                           Frequency of studying using internet   N    Mean   SD     F       p
Perceived disorientation   Once a week                            22   2.29   0.69   0.767   0.550
                           Once in 5 days                         6    2.09   0.68
                           Once in 3 days                         17   2.26   0.79
                           Once in 2 days                         23   2.54   0.90
                           Every day                              21   2.61   1.24
                           Total                                  89   2.41   0.09
Ease of use                Once a week                            22   3.60   1.11   0.745   0.564
                           Once in 5 days                         6    3.94   0.49
                           Once in 3 days                         17   3.70   0.95
                           Once in 2 days                         23   3.72   0.86
                           Every day                              21   3.30   1.22
                           Total                                  89   3.60   1.01

According to the results in Table 9, no significant difference was found between frequency of studying using internet groups in perceived disorientation (F(4, 84) = 0.767, p = 0.550) or ease of use (F(4, 84) = 0.745, p = 0.564).

Interaction with OBCSS and navigation processes

Navigation paths

Learners are provided with different learning paths in OBCSS based on their pre-test results. Cognitive skills are queried from CogSkillNet to create a personalized learning experience for each learner. In this section, a learner's navigation path is described with a working example.

The navigation path is personalized for each user based on pre-test scores through queries to the ontology. The system analyzes the correct and wrong answers and maps each wrong answer to the related cognitive skill in the ontology. The query yields the related concepts from the cognitive skill ontology and presents them to the learner as a navigation path along with the related learning objects. The system first offers mandatory study and practice phases. Learners can then continue with the "study at your own pace" section by listing, recording, and selecting the other cognitive skills (Fig. 11).
Fig. 11 A learner’s navigation path with faulty answers in the pre-test
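The query-and-expand process described above can be sketched with a simplified, dictionary-based stand-in for CogSkillNet. The skill names, the relation structure, and the depth limit below are all hypothetical, chosen only to illustrate how missed pre-test skills expand into an ordered, personalized path:

```python
from collections import deque

# Hypothetical stand-in for CogSkillNet: each skill maps to its
# ontologically related skills. Names and relations are illustrative.
RELATED = {
    "to separate": ["to divide", "to classify"],
    "to divide": ["to group"],
    "to classify": [],
    "to group": [],
    "to compare": ["to classify"],
}

def learning_path(wrong_answer_skills, max_depth=2):
    """Breadth-first expansion of missed skills into an ordered learning path."""
    path, seen = [], set()
    queue = deque((skill, 0) for skill in wrong_answer_skills)
    while queue:
        skill, depth = queue.popleft()
        if skill in seen or depth > max_depth:
            continue
        seen.add(skill)
        path.append(skill)
        queue.extend((rel, depth + 1) for rel in RELATED.get(skill, []))
    return path

# A learner who missed the pre-test question mapped to "to separate"
# receives that skill plus its ontologically related skills, in order.
print(learning_path(["to separate"]))
```

In the real system the expansion is driven by ontology queries against CogSkillNet rather than a hard-coded dictionary, and each skill on the path is paired with its learning objects.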

User comments

While studying in OBCSS, students are encouraged to comment on any phase of their study; this process is also part of the cognitive apprenticeship model. In this research, 38 students added a total of 38 comments during their study. Twenty-four of these comments are related to the system, the pre-test questions, and the learning environment: 5 concern practice, 7 concern the difficulty and/or relevance of questions, 3 concern a lack of hardware, and 9 are suggestions about the system. The remaining 14 comments consist of meaningless and/or unrelated words and sentences.

Discussion

In this research, an ontology-based e-learning environment was created for teaching cognitive skills by utilizing the cognitive apprenticeship framework. In the following sections, the results of using OBCSS are discussed. Students who used OBCSS indicated that they did not get disoriented and did not invest too much mental effort while using the system. In the literature, the cognitive apprenticeship model has been suggested by researchers with the claim that it makes content more effective for students to learn (Seel and Schenk, 2003; Wang and Bonk, 2001; Dickey, 2008). Dickey (2008) also stated that the modeling, coaching, scaffolding, and exploration steps are essential to foster knowledge acquisition. In this research, it was found that cognitive apprenticeship methodologies embedded in the web environment fostered the integration of technology for teaching and learning. According to the statistical results, no significant difference in mental effort was found across gender or computer experience. This means that OBCSS can offer a personalized learning environment according to learners' needs without requiring them to invest too much mental effort. In other words, even when learners encounter a new system and new tasks, most students can finish the tasks without sustaining too much mental effort.

Most of the students did not perceive disorientation while using OBCSS. Previous research indicated that learners may get disoriented while studying on the web, regardless of their experience in computer use (Altun, 2000; Altun, 2003). Therefore, it is important to make appropriate design choices so that learners do not become disoriented when studying cognitive skills. According to the statistical results, no significant difference in perceived disorientation was found across gender or computer experience. This means that OBCSS can offer a personalized learning environment according to learners' needs without making them disoriented.

In this research, the system's ease of use was also measured. According to the findings, most of the students considered the system easy to use. According to the statistical results, no significant difference in ease of use was found across gender or computer experience. This means that OBCSS is an easy-to-use system and can offer a personalized learning environment according to learners' needs regardless of their gender or previous computer experience.

The findings of this study also indicated that learners can be guided through different learning paths based on their pre-test results. These paths are calculated from their prior knowledge and learning preferences. The complexity of a learning path is directly related to the complexity of the cognitive skills that construct it.

CogSkillNet, a cognitive skill ontology, can be used effectively when creating online courses to support and/or assess learners, and in the management of learning environments, by integrating semantic web technologies and utilizing ontologies in learning environments (Anderson and Whitelock, 2004; Koper, 2004). Ontology use also makes it possible to run relational searches and query new materials (Gasevic and Hatala, 2006; Brooks and McCalla, 2006; Shafrir and Etkind, 2006; Kaya and Altun, 2009; Yessad et al., 2011), and ontologies enable the interoperation of different databases or repositories so that several sources can be searched at once (Dicheva and Aroyo, 2006). Ontologies also contribute to the learning domain through learner profile generation and the suggestion of proper materials to learners (Devedzic, 2006; Gaeta et al., 2009; Yessad et al., 2011), and through the generation of user/learner models (Denaux et al., 2005; Sosnovsky and Dicheva, 2010; Kaya and Altun, 2011; Poulovassilis et al., 2012). As the primary purpose of this research was to assess the design and development of OBCSS and to validate its usage in a school environment, student success in learning cognitive skills using OBCSS is not evaluated in this paper. Further research could explore this issue with an experimental design.

To conclude, it is essential for ontology-based web applications to have access to learning object repositories in order to provide learners with personalized and richer learning experiences. Metadata standards and their unique labels are an essential part of ontology-learning integration. If not done automatically with appropriate algorithms, matching the concepts in an ontology with learning objects imposes a serious workload. Further research is needed to validate automatic ontology mapping and matching.

In this research, OBCSS was used in teaching cognitive skills. To use OBCSS as a support tool in courses, concept and curriculum ontologies, and the integration of these ontologies, are required; but first, the challenges and difficulties of developing educational ontologies must be addressed and solved (Kaya and Altun, 2018). Domain-specific ontologies in the field of education must grow in number, and such ontologies would be possible via automatic ontology generation with contributions from the data mining and automatic text extraction domains. With the growth of learning objects and of domain and curriculum ontologies, the entire K-12 curriculum could be included in OBCSS.

Abbreviations

AHAM: Adaptive Hypermedia Application Model

ANOVA: Analysis of Variance

OBCSS: Ontology Based Cognitive Support System

UML: Unified Modeling Language

Declarations

Acknowledgements

This study is supported in part by the Scientific and Technological Research Council of Turkey (TUBITAK) with the project number “SOBAG 110 K 602”.

Funding

This study is supported in part by the Scientific and Technological Research Council of Turkey (TUBITAK) with the project “SOBAG 110 K 602”.

Availability of data and materials

Analyzed data is provided with the article.

Authors’ contributions

This study is part of the doctoral dissertation of GK. AA is the supervisor of the doctoral dissertation and this article. Both authors read and approved the final manuscript.

Authors’ information

Galip Kaya, Ph.D.

Dr. Kaya is a software development team leader at HAVELSAN Inc., and has a BA in Computer Engineering and MA and PhD in Computer Education and Instructional Technologies. He has been working on developing education related ontologies and working as a software engineer since 2004. His main research areas include development, implementation, and evaluation of ontologies for educational settings. He has taken part in various research projects and published research in national and international journals, conferences, and in edited books.

Arif ALTUN, Ed.D.

Arif Altun is a professor of computer education and instructional technologies at the College of Education at Hacettepe University, Ankara Turkey. He has completed his doctorate study at the University of Cincinnati. He has been a visiting scholar at the psychology department at University of Wisconsin-Eau Claire in 2011 and at the Mind, Brain, and Education program at Harvard University during 2014–2015. In the last 10 years, he has been conducting research exploring the cognitive processes in digital learning environments at OntoLAB, to develop user modeling for ontology driven personalized learning environments.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
HAVELSAN Inc., Mustafa Kemal Mahallesi 2120 Cad. No:39 P.K.: 06510, Çankaya, Ankara, Turkey
(2)
College of Education, Hacettepe University, Ankara, Turkey

References

  1. J.S. Ahuja, J. Webster, Perceived disorientation: An examination of a new measure to assess web design effectiveness. Interact. Comput. 14(1), 15–29 (2001)
  2. L. Ali, M. Asadi, D. Gašević, J. Jovanović, M. Hatala, Factors influencing beliefs for adoption of a learning analytics tool: An empirical study. Comput. Educ. 62, 130–148 (2013)
  3. A. Altun, Patterns in cognitive processes and strategies in hypertext reading: A case study of two experienced computer users. Journal of Educational Multimedia and Hypermedia 9(1), 35–55 (2000)
  4. A. Altun, Understanding hypertext in the context of reading on the web: Language learners' experience. Current Issues in Education 6(12) (2003)
  5. F. Amadieu, A. Tricot, C. Mariné, Prior knowledge in learning from a non-linear electronic document: Disorientation and coherence of the reading sequences. Comput. Hum. Behav. 25(2), 381–388 (2009)
  6. T. Anderson, D. Whitelock, The educational semantic web: Visioning and practicing the future of education. J. Interact. Media Educ. 2004(1) (2004)
  7. L. Aroyo, D. Dicheva, The new challenges for E-learning: The educational semantic web. Educ. Technol. Soc. 7(4), 59–69 (2004)
  8. L. Aroyo, D. Dicheva, A. Cristea, in Intelligent Tutoring Systems, ed. by S. Cerri, G. Gouardères, F. Paraguaçu. Ontological support for web courseware authoring, vol 2363 (Springer, Berlin Heidelberg, 2002), pp. 270–280
  9. P. Aşkar, A. Altun, CogSkillnet: An ontology-based representation of cognitive skills. Educational Technology & Society 12(2), 240–253 (2009)
  10. P. Aşkar, K. Kalinyazgan, A. Altun, S.S. Pekince, in Handbook of Research on Instructional Systems and Technology. An ontology driven model for e-learning in K-12 education (2007), pp. 105–114
  11. Bhatti, N., Ismaili, I., & Dhomeja, L. (2017). Identifying and resolving disorientation in e-learning systems. 2017 International Conference on Communication, Computing and Digital Systems (C-CODE), pp. 169–174
  12. S. Boyce, C. Pahl, Developing domain ontologies for course content. J. Educ. Technol. Soc. 10(3) (2007)
  13. De Bra, P., Houben, G.-J., & Wu, H. (1999). AHAM: A Dexter-based reference model for adaptive hypermedia. Paper presented at the Proceedings of the Tenth ACM Conference on Hypertext and Hypermedia: Returning to Our Diverse Roots
  14. P.D. Bra, L. Calvi, AHA! An open adaptive hypermedia architecture. New Review of Hypermedia and Multimedia 4(1), 115–139 (1998)
  15. J. Brase, W. Nejdl, in Handbook on Ontologies. Ontologies and metadata for eLearning (Springer, Berlin Heidelberg, 2004), pp. 555–573
  16. C. Brooks, G. McCalla, Towards flexible learning object metadata. International Journal of Continuing Engineering Education and Life Long Learning 16(1), 50–63 (2006)
  17. Brown, J. S., Collins, A., & Newman, S. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. Knowing, learning, and instruction: Essays in honor of Robert Glaser, 487
  18. P. Brusilovsky, Methods and techniques of adaptive hypermedia. User Model. User-Adap. Inter. 6(2–3), 87–129 (1996)
  19. B. Cangöz, A. Altun, The effects of hypertext structure, presentation, and instruction types on perceived disorientation and recall performances. Contemp. Educ. Technol. 3(2) (2012)
  20. Castello, W., Gauthier, F. (2006). Sharing and Reusing Information on Web-Based Learning. International Workshop on Applications of Semantic Web Technologies for E-Learning
  21. Çevik, V. (2012). Karmaşık Bilişsel Görev Performansında Çalışma Belleği Kapasitesinin ve Öğretimsel Stratejinin Rolü. (Unpublished doctoral dissertation), Hacettepe Üniversitesi
  22. V. Chandra, J.J. Watters, Re-thinking physics teaching with web-based learning. Comput. Educ. 58(1), 631–640 (2012)
  23. B. Chen, C.-Y. Lee, I. Tsai, in International Proceedings of Economics Development & Research, 18. Ontology-Based E-Learning System for Personalized Learning (2011)
  24. R.E. Clark, K. Howard, S. Early, in Handling Complexity in Learning Environments: Theory and Research. Motivational challenges experienced in highly complex learning environments (2006), pp. 27–41
  25. A. Collins, J.S. Brown, A. Holum, Cognitive apprenticeship: Making thinking visible. Am. Educ. 6(11), 38–46 (1991)
  26. J. Conklin, Hypertext: An introduction and survey. Computer 20(9), 17–41 (1987). https://doi.org/10.1109/mc.1987.1663693
  27. A.I. Cristea, What can the semantic web do for adaptive educational hypermedia? Educ. Technol. Soc. 7(4), 40–58 (2004)
  28. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q., 319–340
  29. T. De Jong, Cognitive load theory, educational research, and instructional design: Some food for thought. Instr. Sci. 38(2), 105–134 (2010)
  30. R. Denaux, V. Dimitrova, L. Aroyo, Integrating open user modeling and learning content management for the semantic web, in User Modeling 2005. UM 2005. Lecture Notes in Computer Science, vol 3538, ed. by L. Ardissono, P. Brna, A. Mitrovic (Springer, Berlin, Heidelberg, 2005), pp. 9–18
  31. V.P. Dennen, in Handbook of Research on Educational Communications and Technology (2nd ed.). Cognitive apprenticeship in educational practice: Research on scaffolding, modeling, mentoring, and coaching as instructional strategies (Lawrence Erlbaum Associates, Mahwah, 2004), pp. 813–828
  32. V. Devedzic, Semantic Web and Education, vol 12 (Springer, USA, 2006). https://doi.org/10.1007/978-0-387-35417-0
  33. P. Dias, M.J. Gomes, A.P. Correia, Disorientation in hypermedia environments: Mechanisms to support navigation. J. Educ. Comput. Res. 20(2), 93–117 (1999)
  34. D. Dicheva, L. Aroyo, An approach to interoperability of ontology-based educational repositories. International Journal of Continuing Engineering Education and Life Long Learning 16(1), 92–109 (2006)
  35. M.D. Dickey, Integrating cognitive apprenticeship methods in a web-based educational technology course for P-12 teacher education. Comput. Educ. 51(2), 506–518 (2008)
  36. Edwards, D. M., & Hardman, L. (1989). 'Lost in Hyperspace': Cognitive mapping and navigation in a hypertext environment. Hypertext: Theory into practice, 105
  37. Fok, A. W. (2006). Peonto-integration of multiple ontologies for personalized learning. Paper presented at the Proceedings of the 5th IASTED International Conference on Web-Based Education (Puerto Vallarta, Mexico, 2006), ACTA Press
  38. M. Gaeta, F. Orciuoli, P. Ritrovato, Advanced ontology management system for personalised e-learning. Knowl.-Based Syst. 22(4), 292–301 (2009)
  39. D. Gašević, M. Hatala, Ontology mappings to improve learning resource search. Br. J. Educ. Technol. 37(3), 375–389 (2006)
  40. A. Ghefaili, Cognitive apprenticeship, technology, and the contextualization of learning environments. Journal of Educational Computing, Design & Online Learning 4(Winter), 1–27 (2003)
  41. N. Henze, P. Dolog, W. Nejdl, Reasoning and ontologies for personalized E-learning in the semantic web. Educ. Technol. Soc. 7(4), 82–97 (2004)
  42. P. Hsia, D. Kung, C. Sell, Software requirements and acceptance testing. Ann. Softw. Eng. 3(1), 291–317 (1997)
  43. B. Johnson, L. Christensen, Educational Research: Quantitative, Qualitative, and Mixed Approaches (SAGE Publications, Thousand Oaks, 2008)
  44. S. Kalyuga, Instructional designs for the development of transferable knowledge and skills: A cognitive load perspective. Comput. Hum. Behav. 25(2), 332–338 (2009)
  45. P. Karampiperis, D. Sampson, Adaptive learning resources sequencing in educational hypermedia systems. Educ. Technol. Soc. 8(4), 128–147 (2005)
  46. Kaya, G., & Altun, A. (2009). Development of apothegm ontology and visualization software for e-learning environments. Hacettepe University Journal of Education (36), 136–147
  47. G. Kaya, A. Altun, in Metadata and Semantic Research. A learner model for learning object based personalized learning environments (Springer, Berlin, 2011), pp. 349–355
  48. Kaya, G., & Altun, A. (2018). Educational ontology development. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology, Fourth Edition (pp. 1441–1450). Hershey: IGI Global. https://doi.org/10.4018/978-1-5225-2255-3.ch124
  49. E.A. Khousa, Y. Atif, M.M. Masud, A social learning analytics approach to cognitive apprenticeship. Smart Learning Environments 2(1), 14 (2015). https://doi.org/10.1186/s40561-015-0021-z
  50. E. Knutov, P. De Bra, M. Pechenizkiy, AH 12 years later: A comprehensive survey of adaptive hypermedia methods and techniques. New Review of Hypermedia and Multimedia 15(1), 5–38 (2009)
  51. R. Koper, Use of the semantic web to solve some basic problems in education: Increase flexible, distributed lifelong learning; decrease teacher's workload. J. Interact. Media Educ. 2004(1) (2004)
  52. F.-R. Kuo, G.-J. Hwang, S.-C. Chen, S.Y. Chen, A cognitive apprenticeship approach to facilitating web-based collaborative problem solving. J. Educ. Technol. Soc. 15(4) (2012)
  53. M. Lama, J.C. Vidal, E. Otero-García, A. Bugarín, S. Barro, Semantic linking of learning object repositories to DBpedia. J. Educ. Technol. Soc. 15(4) (2012)
  54. M.J. Lee, Expanding hypertext: Does it address disorientation? Depends on individuals' adventurousness. J. Comput.-Mediat. Commun. 10(3), 00–00 (2005)
  55. Y.-J. Lee, Empowering teachers to create educational software: A constructivist approach utilizing Etoys, pair programming and cognitive apprenticeship. Comput. Educ. 56(2), 527–538 (2011)
  56. L. Lemnitzer, E. Mossel, K. Simov, P. Osenova, P. Monachesi, Using a domain-ontology and semantic search in an e-learning environment, in Innovative Techniques in Instruction Technology, E-learning, E-assessment, and Education, ed. by M. Iskander (Springer, Netherlands, 2008), pp. 279–284
  57. S.-S. Liaw, Investigating students' perceived satisfaction, behavioral intention, and effectiveness of e-learning: A case study of the Blackboard system. Comput. Educ. 51(2), 864–873 (2008)
  58. Lin, F.-R., & Kao, C.-M. (2018). Mental effort detection using EEG data in E-learning contexts. Comput. Educ., 122, 63–79
  59. T.-C. Liu, Web-based cognitive apprenticeship model for improving pre-service teachers' performances and attitudes towards instructional planning: Design and field experiment. J. Educ. Technol. Soc. 8(2) (2005)
  60. Lytras, M., Tsilira, A., & Themistocleous, M. (2003). Towards the Semantic e-Learning: An Ontological Oriented Discussion of the New Research Agenda in e-Learning
  61. F. Manganello, C. Falsetti, L. Spalazzi, T. Leo, PKS: An ontology-based learning construct for lifelong learners. J. Educ. Technol. Soc. 16(1) (2013)
  62. Martins, A. C., Faria, L., De Carvalho, C. V., & Carrapatoso, E. (2008). User modeling in adaptive hypermedia educational systems. J. Educ. Technol. Soc., 11(1)
  63. S. McDonald, R.J. Stevenson, Disorientation in hypertext: The effects of three text structures on navigation performance. Appl. Ergon. 27(1), 61–68 (1996)
  64. S. McDonald, R.J. Stevenson, Effects of text structure and prior knowledge of the learner on navigation in hypertext. Human Factors: The Journal of the Human Factors and Ergonomics Society 40(1), 18–27 (1998)
  65. A. Mitrovic, V. Devedzic, A model of multitutor ontology-based learning environments. International Journal of Continuing Engineering Education and Life Long Learning 14(3), 229–245 (2004)
  66. R. Mizoguchi, J. Bourdeau, Using ontological engineering to overcome common AI-ED problems. Int. J. Artif. Intell. Educ. 11(2), 107–121 (2000)
  67. Monachesi, P., Simov, K., Mossel, E., Osenova, P., Lemnitzer, L. (2008). What ontologies can do for eLearning. Paper presented at the Proceedings of the Third International Conference on Interactive Mobile and Computer Aided Learning
  68. Murray, S., Ryan, J., & Pahl, C. (2003). A tool-mediated cognitive apprenticeship approach for a computer engineering course. Paper presented at the 3rd IEEE International Conference on Advanced Learning Technologies, 2003
  69. G.J. Myers, C. Sandler, T. Badgett, The Art of Software Testing (Wiley, USA, 2011)
  70. MySQL. (2013). MySQL Database, from https://www.mysql.com/
  71. Oprea, M. (2011). An educational ontology for teaching university courses. Paper presented at the Proceedings of the 6th International Conference on Virtual Learning–ICVL
  72. Ouf, S., Abd Ellatif, M., Salama, S., & Helmy, Y. (2017). A proposed paradigm for smart learning environment based on semantic web. Comput. Hum. Behav., 72, 796–818
  73. F. Paas, J.E. Tuovinen, H. Tabbers, P.W. Van Gerven, Cognitive load measurement as a means to advance cognitive load theory. Educ. Psychol. 38(1), 63–71 (2003)
  74. F. Paas, J.E. Tuovinen, J.J. Van Merrienboer, A.A. Darabi, A motivational perspective on the relation between mental effort and performance: Optimizing learner involvement in instruction. Educ. Technol. Res. Dev. 53(3), 25–34 (2005)
  75. A. Papadimitriou, Architecture trends of adaptive educational hypermedia systems: The case of the MATHEMA. American Journal of Artificial Intelligence 1(1), 15–28 (2017)
  76. R. Patton, Software Testing (Sams Publishing, USA, 2005)
  77. S. Poehnl, F.X. Bogner, Cognitive load and alternative conceptions in learning genetics: Effects from provoking confusion. J. Educ. Res. 106(3), 183–196 (2013)
  78. A. Poulovassilis, P. Selmer, P.T. Wood, Flexible querying of lifelong learner metadata. IEEE Transactions on Learning Technologies 5(2), 117–129 (2012)
  79. Revythi, A., & Tselios, N. (2017). Extension of technology acceptance model by using system usability scale to assess behavioral intention to use e-learning
  80. Rogers, E. M. (2003). Diffusion of Innovations. New York: Free Press
  81. G. Salomon, Television is "easy" and print is "tough": The differential investment of mental effort in learning as a function of perceptions and attributions. J. Educ. Psychol. 76(4), 647 (1984)
  82. Y. San Chee, Cognitive apprenticeship and its application to the teaching of Smalltalk in a multimedia interactive learning environment. Instr. Sci. 23(1–3), 133–161 (1995)
  83. N.M. Seel, K. Schenk, An evaluation report of multimedia environments as cognitive learning tools. Eval. Program Plann. 26(2), 215–224 (2003)
  84. U. Shafrir, M. Etkind, E-learning for depth in the semantic web. Br. J. Educ. Technol. 37(3), 425–444 (2006)
  85. Y.-C. Shih, P.-R. Huang, Y.-C. Hsu, S.Y. Chen, A complete understanding of disorientation problems in web-based learning. Turkish Online Journal of Educational Technology 11(3) (2012)
  86. B. Simon, P. Dolog, Z. Mikl, D. Olmedilla, M. Sintek, Conceptualising smart spaces for learning. J. Interact. Media Educ. 4 (2004)
  87. S. Sosnovsky, D. Dicheva, Ontological technologies for user modelling. Int. J. Metadata Semant. Ontol. 5(1), 32–71 (2010)
  88. L. Stojanovic, S. Staab, R. Studer, eLearning based on the semantic web, in Paper presented at the World Conference on the WWW and the Internet (WebNet 01) (Orlando, Florida, 2001)
  89. P.-C. Sun, R.J. Tsai, G. Finger, Y.-Y. Chen, D. Yeh, What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 50(4), 1183–1202 (2008)
  90. A.A. Tawfik, V. Law, X. Ge, W. Xing, K. Kim, The effect of sustained vs. faded scaffolding on students' argumentation in ill-structured problem solving. Comput. Hum. Behav. (2018)
  91. C.-Y. Tsai, B.M. Jack, T.-C. Huang, J.-T. Yang, Using the cognitive apprenticeship web-based argumentation system to improve argumentation instruction. J. Sci. Educ. Technol. 21(4), 476–486 (2012)
  92. UML. (2013). Unified Modeling Language, from http://www.uml.org/
  93. J.J. Van Merriënboer, D. Sluijsmans, G. Corbalan, S. Kalyuga, F. Paas, C. Tattersall, Performance assessment and learning task selection in environments for complex learning. Advances in Learning and Instruction, 201–220 (2006)
  94. M. Vargas-Vera, M. Lytras, in Emerging Technologies and Information Systems for the Knowledge Society. Personalized learning using ontologies and semantic web technologies (Springer, Berlin, 2008), pp. 177–186
  95. Wan, S., & Niu, Z. (2018). An e-learning recommendation approach based on the self-organization of learning resources. Knowledge-Based Systems
  96. Wang, C.-Y., Tsai, M.-J., & Tsai, C.-C. (2016). Multimedia recipe reading: Predicting learning outcomes and diagnosing cooking interest using eye-tracking measures. Comput. Hum. Behav., 62, 9–18
  97. F.-K. Wang, C.J. Bonk, A design framework for electronic cognitive apprenticeship. JALN 5(2), 131–151 (2001)
  98. A. Yessad, C. Faron-Zucker, R. Dieng-Kuntz, M.T. Laskri, Ontology-based semantic relatedness for detecting the relevance of learning resources. Interact. Learn. Environ. 19(1), 63–80 (2011)
  99. Zijlstra, F. R. H. (1993). Efficiency in work behaviour: A design approach for modern tools. (Doctoral dissertation), Delft University

Copyright

© The Author(s). 2018
