- Open Access
An evaluation approach for smart support of teaching and learning processes
Smart Learning Environments volume 6, Article number: 2 (2019)
In the KOSMOS-2 project, a smart learning approach in the form of a modern portal for university students, called “MeinKosmos”, has been developed. An important question from the point of view of universities and teachers is what the quality of such new tools is and what their benefits are. This article describes the evaluation approach defined in the KOSMOS-2 project for the “MeinKosmos” learning portal, and the results of using the approach in an undergraduate degree program. The focus of the evaluation is on the learning success of the students and the quality of the meta-search.
The projects KOSMOS and KOSMOS-2 (Borchardt et al. 2015),Footnote 1 which were funded by the BMBF Germany between 2011 and 2017, generally focused on the question of how teaching and learning will change in a smart and digital world – especially from the perspective of lifelong learning. The notion of “smart” has to be clarified in this context: smart stands for different variations of supportive technology. In the narrow usage of the term, smart learning always relates to infrastructure that is hidden in the environment and that is ubiquitous and pervasive (as in smart city learning). In a broader usage of the term, smart mainly means “supportive, adaptive, intelligent” and is thus hidden behind a variety of developments in the field formerly known as e-learning, which is currently called digitally supported learning. Our approach is smart, as it supports students’ work in a hidden, non-obtrusive manner, is adaptive, and integrates new features grounded in the psychology of learning. This idea of a smart learning environment (SLE) is based on the IASLE (International Association for Smart Learning Environments) definition and is related to Spector’s description of the field (Spector, 2014). “MeinKosmos” meets the main criteria of an SLE, as defined by Spector. This will be sketched in the summary.
The basic insights of the project “MeinKosmos” led to ideas for organizational changes required in the classical university, for opening German university formats to non-typical students, and for technically adding to the quality of service and smartness in teaching and training. As one aspect of smart quality in teaching and training, the portal “MeinKosmos” has been developed as a prototype and tested with students, in order to develop insights about the requirements for knowledge management and instructional support in a digitalized learning world. These insights will, in the long run, lead to changes in content management systems, especially in learning management systems. Additionally, in the KOSMOS projects, different approaches to educational formats have been investigated, for different target groups (students, second-level education, etc.) and with different educational goals (e.g. full-time study vs. vocational training). A very important part of the research work in the KOSMOS projects has been grounding the insights on a good empirical basis. An important question from the point of view of universities and teachers is how “good” the new smart tools used in teaching are and what their use entails.
Moreover, the “MeinKosmos” technology is based on insights from other related research in this field. Basics regarding learning and teaching can be found, for example, in (Ambrose et al. 2010). In another research project at our university, an SLE based on Liferay has been developed to use web 2.0 technology for computer science education at vocational schools (see, for example, (Rott, Martens, 2014), (Martens and Hellmig, 2010) and (Lucke, Martens, 2010) as related work). In different projects, for example Juniorstudium and Wikilearnia (see (Wassmann, Tavangarian, 2014)), we have investigated the limits of software like Stud.IP and Moodle. Not much work can be found in the field of smart learning management, so for the “MeinKosmos” approach we have looked more into the fields of adaptive technology and intelligent tutoring.
The further structure of the article reflects these two evaluation perspectives: the Evaluation of the meta-search section describes the approach to the evaluation of the meta-search and presents results from its use. The Evaluation of the learning success section then deals with the approach to the evaluation of learning success and presents experiences and corresponding results. The concluding Summary section summarizes the findings of both evaluation perspectives.
Evaluation of the meta-search
An important “smart” feature of the “MeinKosmos” portal is the meta-search, which is designed to help students from different backgrounds find relevant information and literature sources for their current study content. The search is carried out simultaneously in those information sources and literature databases which the provider of the course considers most important for the respective study format. Thus, the students do not have to search sequentially in different sources and databases; instead, the meta-search performs all relevant searches at the same time. Furthermore, the students need less knowledge about where to search and can use the hits to draw conclusions about the importance of the sources. More concretely, the meta-search is built upon the Wegtam search agent,Footnote 2 which offers integration of literature databases, library information systems, electronic journals and also Internet search engines (e.g. Google, Bing or Yahoo) into the same search interface. When configuring the meta-search for courses and study formats, the course or program owner may define which sources should be given priority when searching for literature in the course / study format. As a result of this configuration, the search agent will start searching in the priority sources first and rank the hits from these sources higher than those from other sources. Each student logging into MeinKosmos is a participant in a course or study format, which defines the configuration of the meta-search to be applied. If the student uses the meta-search, not only the priority sources defined by the course or study owner are taken into account but also the previous searches of the student and the individual priorities of the student when it comes to the presentation of search results or the tools used to store or process the hits.
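The prioritized ranking described above can be illustrated with a small sketch. This is not the Wegtam search agent’s API; all names, the toy source engines and the boost factor are illustrative assumptions about how hits from priority sources might outrank general web hits.

```python
# Minimal sketch of prioritized meta-search ranking (illustrative, not the Wegtam API).
from dataclasses import dataclass


@dataclass
class Hit:
    source: str
    title: str
    relevance: float  # score after applying the priority boost


def meta_search(query, sources, priority_sources, boost=2.0):
    """Query every configured source at once; boost hits from priority sources."""
    hits = []
    for name, engine in sources.items():
        for title, base_relevance in engine(query):
            weight = boost if name in priority_sources else 1.0
            hits.append(Hit(name, title, base_relevance * weight))
    # Merge all hits into one ranked list, best first.
    return sorted(hits, key=lambda h: h.relevance, reverse=True)


# Toy "engines": each returns (title, base relevance) pairs for a query.
sources = {
    "library": lambda q: [("Context-Aware Systems (journal)", 0.6)],
    "google":  lambda q: [("Blog post on context", 0.9)],
}
ranked = meta_search("context-awareness", sources, priority_sources={"library"})
# The library hit (0.6 * 2.0 = 1.2) now outranks the plain web hit (0.9).
```

The design choice sketched here is that priority only reweights scores rather than filtering sources, so non-priority hits still appear, just lower in the list.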
The following questions were at the center of the evaluation of the meta-search:
Does the digitally supported meta-search help students find relevant information better than a conventional search does?
How do the students perceive the functionality of the meta-search, i.e., are there any functions missing or are there any unnecessary functions?
The focus of the study was therefore primarily on the students’ point of view. The technical implementation of the meta-search, its performance indicators and the evaluation of the user interface under the aspect of applicability were not considered here. The following sections briefly describe the conception of the evaluation, its implementation and important findings.
The basic idea of the evaluation approach was to define tasks for information retrieval that can be performed both with and without the meta-search. Considering the small sample size, a mixed-method design was chosen with a focus on the qualitative part. Each task essentially involves a different kind of search, with the expected results known beforehand and defined by a model solution of the lecturer. These tasks are then carried out by one group of participants using the meta-search in the portal (test group; N = 5) and by another group of participants without the meta-search (control group; N = 5).
Neither the meta-search nor the conventional search was preceded by specific training, which may also provide insight into whether training in meta-search is recommendable.
Since the “MeinKosmos” portal is in principle available and usable for different study formats and target groups, the approach of recruiting participants was to include different study formats and semesters.Footnote 3 Due to the heterogeneity achieved in this way, it is assumed that the experience background of the participants is also different.
The data collection during the execution of the tasks takes place individually for each participant in different ways:
The participants are observed during the task by a researcher who records his or her perceptions by taking notes. The focus of the observation is on the functionalities used, possible problems with the navigation in the meta-search, and the applicability of the controls. In addition, for each task it is noted to what extent the expected information was found.
Each participant is asked to describe aloud what he or she is doing and why (“thinking aloud”). Another researcher records these statements of the participant on tape for later analysis. Furthermore, the researcher takes notes of what is said (keywords only) and what activities were performed.
After completion of the task, a questionnaire is used to summarize the participant’s background and above all his or her perception of the process, success, problems and positive aspects of the task.
The basic design described above is mainly based on experiences gained in a similar experiment for another meta-search by Lundqvist et al. (2009) and retested in a seminar paper. Attention should be paid to the qualitative character of the examination, which means that there is less ground to generalize the findings than in primarily quantitative evaluations.
The “thinking-aloud” method (Lewis 1982) also contributes to the qualitative character. It allows the collection of valuable information for smaller groups of participants not only in terms of the system used but also in terms of the assigned task and the possibly different approach of the participants in the task processing.
For the evaluation of the collected data, several steps were taken. The data recorded by observation, the data collected from “thinking aloud” and the students’ answers to the “free text” questions in the questionnaire were subjected to qualitative content analysis (QCA). As the methodological basis for performing the QCA we used the work of Mayring (2000). More concretely, we applied the QCA approach of content summary. This approach attempts to reduce the collected data in such a way as to preserve the essential content and, by abstraction, to create a manageable corpus which still reflects the original material. For this, the notes of the researchers and the recorded text were first paraphrased, generalized and reduced: a first reduction is achieved by removing paraphrases with the same meaning from the paraphrased text. A second reduction is the result of summarizing similar paraphrases. Afterwards, the reduced text was analyzed with respect to the research questions: for evaluating how the functionality of the portal was perceived, we looked for activities and events related to the functionality of the portal and utterances connected to such activities/events (example: the activity of “reading the list of query results from the meta-search” and the related utterance of “positive surprise that the most relevant document really was listed on top”). For evaluating whether the assistance provided by the meta-search is better than a conventional search, we compared the number and sequence of activities performed by the test and control groups, and the quality of the results. Furthermore, we also compared the utterances from the test and control groups.
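The two reduction steps can be sketched in code. This is a deliberately simplified illustration: “same meaning” is approximated by exact matching after normalization, and “similar paraphrases” are grouped by hand-picked keywords; the actual QCA reduction was of course performed manually by the researchers, and all data below are invented.

```python
# Illustrative sketch of the two QCA reduction steps (not the actual analysis).

def normalize(paraphrase: str) -> str:
    """Lowercase and collapse whitespace so trivially identical paraphrases match."""
    return " ".join(paraphrase.lower().split())


def first_reduction(paraphrases):
    """Step 1: remove paraphrases with the same meaning (here: identical text)."""
    seen, kept = set(), []
    for p in paraphrases:
        key = normalize(p)
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept


def second_reduction(paraphrases, keywords):
    """Step 2: summarize similar paraphrases by grouping them under shared keywords."""
    groups = {}
    for p in paraphrases:
        for kw in keywords:
            if kw in normalize(p):
                groups.setdefault(kw, []).append(p)
                break
        else:
            groups.setdefault("other", []).append(p)
    return groups


notes = [
    "Result list appears quickly",
    "result list appears quickly",   # duplicate meaning, removed in step 1
    "Top hit matched the query",
    "Unsure where to change settings",
]
reduced = first_reduction(notes)     # three paraphrases remain
grouped = second_reduction(reduced, ["result list", "settings", "hit"])
```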
A total of ten students took part in the evaluation: two students each from the second, fourth and sixth semesters of the BSc Business Information Systems, two from the MSc Business Informatics, and two PhD students in Computer Science. One student each from the different Bachelor’s semesters, the Master’s group and the PhD group used the portal, while the other students did not. Each participant had three tasks to complete, with exactly five minutes available for each task. The participants worked individually and independently on the tasks and had no opportunity to get in touch with other participants. Before starting the tasks, every participant was asked about their experience with meta-search and conventional search engines. Beforehand, no information was given as to what the specific tasks would be. The participants only knew that the goal of the participation was to improve a software system, but not that it was a search or meta-search, or what would be searched for.
Before working on the tasks, it was also briefly explained that the participants should “think aloud”, i.e., explain, while executing the task, what they were doing and why. All participants agreed to having their statements recorded.
The tasks were:
Please search in scientific publications for definitions of “context-awareness” in computer science. We are looking for the document that is the source of origin of the definition.
Please search for definitions of the term “business service” in computer science. We are looking for the document that is the source of origin of the definition.
Please search for query languages for process models. The task is to name three of them.
Thus, the first two tasks focused on finding specific publications, while the third was on exploratory search. Task 1 was designed in such a way that the set of possible original sources was completely known and one of these original sources was cited very often and probably could be found by the participants within the time period. This also implied that the successful completion of the task was directly apparent to the observer. For task 2, however, many definitions exist, not only in informatics but also in other fields, so that quite different results could be expected here. Task 3 required a step-by-step search, since the combination of the terms “query languages” and “process models” also yields results from different fields.
After completing the tasks, participants were asked to complete a questionnaire. This questionnaire included a part that related to the study program, semester and experience of the participants. The second part concerned the participants’ assessment of how they felt about the assistance during the search, whether the need for information was satisfied and how the functionality was perceived. In the third part, assessments and comments on the user interface and user prompting were queried.
Five participants used the meta-search in “MeinKosmos”, and five participants searched conventionally to work on the tasks, with two opting for Google as their primary search engine and three for Google Scholar. All five of these participants also used other search engines to complete the given tasks, which became clear in their “thinking aloud” utterances (see (Hofer, 2010)). None of the participants had any experience with a meta-search. By contrast, all participants stated that they already had experience in using search engines.
All participants were able to work on the tasks and find relevant results. However, there were differences between the tasks and between the test and control groups as far as the completeness of the work and the scope of the results are concerned. In the search for the definition of “context-awareness” (task 1), both the participants from the test group and all participants in the control group found the expected original document within the five-minute period. The search took longest with Google, since a detour to Google Scholar was necessary. Differences between the test and control groups existed only in where the searched-for document appeared in the hit list. The participants of the test group were amazed that the document in first place in the list of results really corresponded to the search. They said in the final survey that conventional search engines often show sponsored links at the top of the hit lists, the absence of which in the meta-search initially confused them.
In the search for the definition of “business service” (task 2), the participants spent most of the available time opening and reading the found definitions or the corresponding documents in order to make a selection according to their relevance. It could not really be established that the quality of the hits in the meta-search was significantly better. All participants from the test and control groups stated at the end of the five-minute period that they were not “properly finished”.
In task 3, only the Master’s student and the PhD student used the meta-search facility to capture intermediate results for the nested search. Both thus completed the task faster than the control group. Among the participants from the different undergraduate semesters, there were no clear differences between the test group and the control group. All Bachelor students found only two of the three requested languages. The observers had the impression that the users in the test group were closer to the solution at the end of the processing time than those in the control group.
In the test group, i.e., among the users of the meta-search, almost all participants had to be made aware during the first task that nested searches and pre-settings for the search are possible. The corresponding controls were not noticed and should therefore possibly be arranged in a different manner. After this hint, especially the students of the 6th Bachelor and Master’s semesters and the PhD student made intensive use of the pre-setting options. In addition to this observation, the analysis of the “thinking aloud” records and the results of the survey yielded a number of statements in favor of the meta-search:
The participants from the 2nd and 4th Bachelor semesters only became aware through the meta-search of which search engines exist at all. Four out of five participants rated positively that they could choose between them and that several of them could be selected at the same time.
The presentation of the search results and, in particular, which search engine delivered results, was also considered positive.
The general layout of the user interface was judged positively, since here the request, result list, request history and meta-information are clearly separated.
Furthermore, there were also criticisms and suggestions for improvement. Among others, the following aspects were mentioned:
As already discussed above, the layout of the pre-set switching elements has been criticized (“hard to find”).
The meta-search process was criticized as being insufficiently transparent. This points less to a lack of functionality than to a need for training / instruction before using the meta-search.
A special “back” button was requested, in order to undo inputs not as a whole but step by step, and to be able to jump back to the search results of previous queries.
The size of the meta-search window within “MeinKosmos” was considered too small.
With regard to the graphical design of the user interface, there were different and sometimes contradictory requirements, suggesting that different backgrounds and habits (Apple vs. Windows users) rather than real shortcomings are the trigger.
Against the background of the findings and observations described above, the answers to the two research questions can be summarized as follows:
Question 1: Does meta-search offer better assistance to the students to find relevant information than a conventional search?
The study does not provide clear support for stating that meta-search is better than conventional search. While there are a number of indicators, a final answer requires further investigation with presumably different tasks and more test persons.
Question 2: How do the students perceive the functionality of the portal, that is, are there any functions missing or are there any unnecessary functions provided?
The students came up with a number of suggestions for changing the functionality, which essentially amount to improvements in comfort and handling. All essential functions are present and readily usable.
In addition, the impression arose that students should be instructed in the use of the meta-search in order to make the most of its functionality. This could be followed up by a comparative study between a group without instruction and a group with instruction.
Evaluation of the learning success
The central theme of the evaluation approach dealt with in this section is to find out what influence the use of IT-based teaching and learning systems has on the learning success of students. In the sense of Bortz and Döring (2006), this is an exploratory method in which the intervention method “IT tool” is examined for its efficiency. The basic idea is to compare the learning success with a specially developed IT tool in use, i.e. with “MeinKosmos”, and without using this IT tool. “MeinKosmos” was expected to have advantages over other IT tools, as it was developed specifically to meet the requirements of the new study formats and target audiences (see also the Evaluation of the meta-search section). For the evaluation, a control group design with several measurement times was used (see the Implementation of the evaluation section for the timeline). In both the test and control groups, the extent to which the students’ learning outcomes coincide with the learning outcomes expected by the lecturer is considered. The module “Computer-aided Scientific Work (RGWA)” of the B.Sc. Business Informatics provided the case for the evaluation approach, which is presented in the following.
Evaluation concept in the module RGWA
The module RGWA in the B.Sc. Business Informatics provides basic methodical and practical knowledge in scientific work to students of the 4th semester, including, among other things, the assessment of the quality of scientific publications. The learning outcome with regard to this topic forms the basis for the evaluation. The corresponding task that the students have to work on during the semester is: How should the assessment of the quality of a scientific publication be made?
All students receive the same task. The cohort of students is divided into a control group and a test group. While the test group works with the portal “MeinKosmos”, the control group works on the same task and the same learning control questions without using the portal, using only the IT tools employed so far in the study program. None of the students had previous experience with “MeinKosmos”.
In order to measure the quality of task processing, there are three measurement time points: 1) after setting the task, prior to the lecture, 2) after a lecture and an exercise, 3) after completion of all lectures on the subject (i.e., the module). The students of both the test and the control group answer learning control questions at the respective measurement times.
The lecturer develops the learning control questions from the content of the lecture. He also develops the answers students should ideally give to these learning control questions. However, these are only used for the study and are not available to the students. The learning control questions are answered by all participants at the beginning of the module, that is, before the course contents are known to or have been taught to the participants (measurement point 1), and after the imparting of the corresponding content, i.e., after the relevant lectures and a practical exercise (measurement point 2). Thus, a comparison can be made between these two measurement times and between the test and control groups. Third, the learning control questions are answered by all participants at the end of the module (measurement point 3).
At this point in time, the participants had also applied the taught content in their own scientific work. In addition, a final survey of the participants takes place in order to be able to identify any disruptive factors.
The research questions for the evaluation are:
Do students who use the “MeinKosmos” learning platform show an assessment of what is important for the quality of scientific publications that is more in line with the lecturer’s requirements than students who do not use the learning platform?
Do students who use the “MeinKosmos” learning platform show more congruent learning outcomes with the expected learning outcomes of the lecturer than students who do not use the “MeinKosmos” learning platform?
All data to be collected will be anonymized for processing, so that no conclusions can be drawn about individual persons, about the assessors, or about their assignment to the test or control groups.
Implementation of the evaluation
In the summer semester of 2016, 14 students took part in the module RGWA, who were divided into the test group or control group based on their chosen field of study,Footnote 4 “Information Systems” or “Business Informatics”. The test group worked using “MeinKosmos”; the control group used the learning platform Stud.IP and the literature search of the library or the search engines available on the Internet. The test and control groups each comprised seven participants. The gender of the participants was considered irrelevant to the study. The participants were comparable in age, so no distortion due to age differences was to be expected.
The learning control questions developed by the lecturer were as follows:
What is the structure of a scientific article and what are its important contents?
Name sources for obtaining scientific articles.
On the basis of which criteria can the quality of a scientific article be assessed before reading?
How can research results (theories, novel algorithms, procedural instructions, ...) be checked for their validity or applicability?
These learning control questions were answered by all participants at three measurement time points (MTP):
MTP 1: At the beginning of the summer semester with the first lecture in calendar week 14.
MTP 2: After the independent handling of the topics covered in the learning control questions, using the platform “MeinKosmos”, in calendar week 19.
MTP 3: After the evaluation of the contents of the lectures for their own work by the students in calendar week 27.
During the data collection, a personal code was used for each participant, which allows the unambiguous yet anonymous assignment of the data. This code is generated from “MeinKosmos” data and noted on each learning progress check questionnaire. The assignment of the questionnaires to the participants can only be done by a person not involved in the evaluation, who knows the codes.
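One common way to realize such a personal code is a salted hash of stable account data; the article does not specify the actual scheme, so the following is a hypothetical sketch, with all names and the salt invented for illustration.

```python
# Hypothetical sketch: deriving a stable pseudonymous participant code from
# account data plus a secret salt held only by the uninvolved third party.
import hashlib


def participant_code(account_id: str, salt: str) -> str:
    """Hash salt + account id; the short hex prefix serves as the code."""
    digest = hashlib.sha256((salt + account_id).encode("utf-8")).hexdigest()
    return digest[:8]  # short code noted on each questionnaire


code = participant_code("student-42", salt="kept-by-third-party")
# The same inputs always yield the same code, so the questionnaires from all
# three measurement time points can be linked without revealing the identity;
# only the salt holder can re-derive which student a code belongs to.
```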
For each learning control question, the lecturer developed a model solution consisting of a list of important concepts and their interrelationships. The students’ responses were evaluated with the same general approach inspired by QCA which was already discussed in the Conception section: the student responses were paraphrased, generalized and reduced. The first reduction was achieved by removing paraphrases with the same meaning from the paraphrased text. The second reduction is the result of summarizing similar paraphrases. As the same concepts and contexts can be expressed in the students’ answers to the learning control questions with different terms (equivalent terms, synonyms, broader / narrower terms and variations), the reduction had to be accompanied by coding the individual answers. In this step, the concepts from the lecturer’s model solution were used as a means of coding the individual answers. This allows a qualitative classification of the individual answers of the students and a quantitative view of all answers. The coding of the individual answers was carried out independently by two researchers. The two codings and the records of the observers were then compared, and discrepancies were discussed and resolved.
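The coding step can be sketched as matching each answer against the model-solution concepts and their accepted variants. The concepts, synonyms and the simple substring matching below are illustrative assumptions; the actual coding was performed manually by the two researchers.

```python
# Illustrative sketch of coding student answers against a model solution.
# Concepts and their accepted variants are invented examples, not the study's data.
MODEL_SOLUTION = {
    "peer review": {"peer review", "refereed", "reviewed by experts"},
    "citation count": {"citation count", "number of citations", "cited often"},
    "venue reputation": {"venue reputation", "journal ranking", "conference rank"},
}


def code_answer(answer: str, model=MODEL_SOLUTION) -> set:
    """Return the set of model-solution concepts expressed in an answer."""
    text = answer.lower()
    return {concept for concept, variants in model.items()
            if any(variant in text for variant in variants)}


answer = "A good article is refereed and has a high number of citations."
codes = code_answer(answer)
# → {"peer review", "citation count"}: two concepts matched via their variants.
```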
Table 1 shows a selection of the concepts for the individual learning control questions:
The evaluation of the responses at the first measurement time point (MTP) showed, on the one hand, no differences between the test and the control group and, on the other hand, that the participants were only able to name a few concepts, which means that the level of knowledge – as expected – was not particularly high.
Measurement time points 2 and 3 are therefore used to compare the test group and the control group. For the comparison, the focus is on a single metric: the average coverage of the model-solution concepts in the learners’ answers. To calculate this indicator, the answers to the learning control questions were evaluated individually for each participant at each measurement time point; as with an exam, a point was awarded for each concept of the model solution correctly named. From the ratio of the number of concepts included in the answers to the total number of concepts, a score was formed for each participant, which in turn served as the basis for calculating the score for the respective group: for each learning control question, the average was taken over all participants in the test and control groups, respectively. The resulting percentage represents the group’s coverage of the ideal responses and thus reflects the simplifying assumption that all concepts are equally important. Table 2 shows this percentage coverage of the ideal responses from the model solution.
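The coverage metric just described can be expressed compactly as follows. The concept sets and numbers below are invented for illustration; they are not the study’s data.

```python
# Sketch of the coverage metric: one point per correctly named model-solution
# concept, individual coverage = named / total, then averaged per group.

def coverage(named_concepts: set, model_concepts: set) -> float:
    """Fraction of model-solution concepts named by one participant."""
    return len(named_concepts & model_concepts) / len(model_concepts)


def group_average(answers, model_concepts) -> float:
    """Average coverage over all participants of a group."""
    return sum(coverage(a, model_concepts) for a in answers) / len(answers)


model = {"peer review", "citation count", "venue reputation", "methodology"}
test_group = [
    {"peer review", "citation count"},                  # 2/4 = 0.50
    {"peer review", "methodology", "citation count"},   # 3/4 = 0.75
]
avg = group_average(test_group, model)  # (0.50 + 0.75) / 2 = 0.625, i.e. 62.5%
```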
Table 2 shows this figure in the comparison between the control and test groups at measurement time points 2 and 3. From Table 2 it can be seen that the expected learning effect exists in both groups and that, after the corresponding lectures, the vast majority of important concepts are well-known and can be named. Also, after discussing the lecture content with the lecturers and after applying this content in their own work, the participants’ understanding of the concepts deepened significantly. The comparison of the two groups shows no clear advantage for either the test or the control group, which may have various reasons; these are discussed in the Findings section.
Tables 3 and 4 show the development of the indicators within the test and within the control group. It should be noted that, on the one hand, the practical application of the concepts as well as a longer period of time in which students have dealt with the concepts seem to contribute to higher learning success. On the other hand, the maximum values achieved are still far from the theoretical maximum of 100%, which may either stimulate improvements in the lecture or lead to a review of the suitability of the learning control questions or the evaluation of the answers.
The findings discussed below can be divided into results concerning the evaluation instrument and results with regard to the research questions on the portal use as defined above.
The evaluation instrument, as presented in the previous section, and the evaluation process have proven to be suitable in principle. Experience has been gained as to which adjustments are recommended for follow-up evaluations. One implication is that the coding (see the Implementation section) of the answers to the learning control questions should be supplemented with an interpretation of the answers, so that this interpretation can determine whether the students’ understanding may be correct even though the expected terms were not used. For example, an answer to the first learning control question (“structure of a scientific article”) referred to the “scientific nature of the article”, which is correct in terms of content but conceptually inaccurate. This at least content-wise correct tendency of the answer makes it possible to distinguish the quality of the answer from completely wrong or missing answers, which is an important difference with regard to the question of the learning level.
Another finding is that the connection between the model solution and the lecture content should be clearer, i.e., for all learning control questions, the expected concepts should be fully and uniquely assigned to the material provided. Supplementary concepts conveyed in the courses but not visible in the material should be avoided, as there is a risk that such concepts are not conveyed exactly as the teacher remembers them.
When comparing the student groups, only slight differences were found between the “MeinKosmos” group and the control group. A simple explanation would be that there are no differences, and thus that the portal offers no benefit. However, there are a number of other possible explanations, such as:
the small number of participants in the study and the associated small amount of data as well as a resulting lack of robustness when exchanging individual data records,
the low proportion of collaborative, geographically distributed work, which is the actual focus of the portal functionality, in the considered module RGWA or
technical problems with the operation of the portal, which could have distorted the results.
The defined research questions cannot yet be answered by the study carried out, but will be the subject of future work. The developed evaluation instrument is therefore the central result of this work.
However, it is important to note that the instruments traditionally used (Stud.IP, Internet) had been familiar to the students for at least four semesters at the time of the examination, whereas “MeinKosmos” was used for the first time. This was not controlled for in the study, but should be considered as a variable in subsequent studies. As a minimal requirement for a new IT tool, the claim could be formulated that it can be used at least as well as traditionally used, well-known tools.
From the perspective of smart learning environments (SLE), the approach “MeinKosmos” fulfills the requirements of the field as coined by Spector (Spector, 2014). As necessary requirements for an SLE, Spector lists effectiveness, efficiency, scalability, and autonomy. As highly desirable, he lists aspects such as: engaging, flexible, adaptive, and personalized. Additional aspects listed under the term “likely” are not realized in “MeinKosmos” and thus do not apply here. The smart learning environment “MeinKosmos” is effective, as the learning outcome has been demonstrated: compared to technologically old-fashioned content management, the students in “MeinKosmos” performed more effectively. However, this has to be supported by a larger-scale evaluation and confirmed over a longer testing phase. The tool is cost-effective, as it does not lead to higher costs than traditional portals or content systems. It only requires more information about the students, which might lead to problems regarding data security in the long run; for these, solutions still have to be found. The approach is scalable, as it is not tied to one field, and it could potentially also be used in other educational settings, e.g., at school. The approach is autonomous, as “MeinKosmos” is able to analyze the learner on its own; however, the quality of the related instructional design still remains with the teachers. One of the goals of “MeinKosmos” is to be engaging, but our evaluation did not focus on proving this, so this aspect still has to be shown. The system is flexible, e.g., it is neither restricted to a fixed set of users nor to certain content. It is adaptive by nature, and it is personalized, as each student gets his/her own profile. In sum, regarding Spector’s criteria, “MeinKosmos” can be called a smart learning environment.
The evaluation approach for the IT support of teaching/learning processes was the central topic of this work, with the emphasis on the teaching/learning portal “MeinKosmos” as the IT tool, and on learning success and the quality of the meta-search as evaluation perspectives.
The evaluation regarding learning success took place in the course “Computer-aided scientific work” in the summer semester of 2016 and had the goal of carrying out a pre-test of the evaluation instruments as well as of the procedure. This should also create a data basis that could later serve as a comparison with continuing-education courses. A control group design was used, that is, one group used “MeinKosmos” while the control group did not. In both groups, a learning survey was conducted at the beginning of the module and after the teaching units were completed, and the results of the “MeinKosmos” group and the control group were compared. Overall, this evaluation design has proven to be suitable. However, no clear differences were found between the two groups. Due to the small sample size and the short-term treatment, generalizations of our findings are to be made with care.
Regarding the applicability of the meta-search, the information requirements of various user groups were determined, and a control group design was also used. The data in the two groups were collected via observation, evaluation of the results of a given task, and questioning after the completion of the task. Again, it has been shown that the evaluation design as such is in principle suitable. However, adjustments should be made to the assigned tasks, and the participants should possibly be briefed on the search. In addition, the meta-search study provided a number of suggestions for user interface enhancements and complementary functionality.
See https://www.wegtam.com/ for more information about Wegtam and the search agent (last visited 17.09.2018)
More information about study formats and semesters are provided in Implementation section.
Since the choice of the field of study is only made in the 4th semester, i.e., immediately before the beginning of the module RGWA, and until then the education of all students is the same, it cannot be assumed that the field of study has an influence on the evaluation result.
S.A. Ambrose, M.W. Bridges, M. DiPietro, M.C. Lovett, M.K. Norman, How Learning Works: Seven Research-Based Principles for Smart Teaching (Jossey-Bass, San Francisco, 2010)
U. Borchardt, K. Sandkuhl, D. Stamer, MeinKosmos – Konzept zur Realisierung des KOSMOS-Portals zur medialen Unterstützung, in Öffnung der Hochschule durch Wissenschaftliche Weiterbildung. Werkstattberichte aus dem Projekt KOSMOS der Universität Rostock, ed. by V.K. Freytag-Loringhoven, S. Göbel (Akademische Verlagsgemeinschaft, München, 2015), pp. 233–252
J. Bortz, N. Döring, Forschungsmethoden und Evaluation (Springer, Berlin, 2006)
W. DeLone, E. McLean, Information systems success: the quest for the dependent variable. Inf. Syst. Res. 3(1), 60–95 (1992)
B. Hofer, Epistemological understanding as a metacognitive process: thinking aloud during online searching. Educational Psychologist 39(1), 43–55 (2004). https://doi.org/10.1207/s15326985ep3901_5
R.S. Kaplan, D.P. Norton, The Balanced Scorecard: Translating Strategy into Action (Harvard Business Press, Brighton, 1996)
C. Lewis, Using the “thinking-aloud” method in cognitive interface design. Research Report RC 9265 (IBM T.J. Watson Research Center, Yorktown Heights, 1982)
J.R. Lewis, Introduction: current issues in usability evaluation. International Journal of Human-Computer Interaction 13, 343–349 (2001)
U. Lucke, A. Martens, Utilization of semantic networks for education: on the enhancement of existing learning objects with topic maps in ML3, in Proc. of the 5th International Workshop on Applications of Semantic Technologies at the Informatik Conference (2010)
M. Lundqvist, V. Mazalov, K. Sandkuhl, V. Vdovitsyn, E. Ivashko, Do digital libraries satisfy users’ information demand? Findings from an empirical study, in XI All-Russian Research Conference RCDL’2009 (Russian Academy of Sciences, Petrozavodsk, 2009), pp. 167–173
M. Lundqvist, K. Sandkuhl, U. Seigerroth, Modelling information demand in an enterprise context: method, notation and lessons learned. International Journal of Information System Modeling and Design 2(3), 74–96 (2011)
A. Martens, L. Hellmig, S. Bader, Where to put pervasion in education?, in 3rd Workshop on Pervasive Education at the UbiComp International Conference for Ubiquitous Computing, Copenhagen, Denmark (2010)
J. Mooney, V. Gurbaxani, K. Kraemer, A process oriented framework for assessing the business value of information technology, in Proceedings of the 16th International Conference on Information Systems (1995), pp. 17–27
A. Rott, A. Martens, Methodenkatalog für den Einsatz von Web 2.0 im Unterricht, in Lernen im Web 2.0 – Erfahrungen aus Berufsbildung und Studium (Bundesministerium für Berufsbildung, 2014)
J.M. Spector, Conceptualizing the emerging field of smart learning environments. Smart Learning Environments 1(1), 2 (2014)
R. Stockmann, W. Meyer, Evaluation. Eine Einführung (Barbara Budrich, Leverkusen, 2010)
I. Waßmann, D. Tavangarian, Combining educational environments: key methodology to successful competence-based learning (2014), pp. 63–67
We would like to thank all the students who participated in the project and who tried out and discussed our approach. We also thank the tutors, teachers, and colleagues who accompanied the project, as well as the team of the “Wissenschaftliche Weiterbildung”.
The project has been funded by the Bundesministerium für Bildung und Forschung (BMBF) – the German Federal Ministry of Education and Research – in two phases: phase 1 (KOSMOS) ran from 2011 to 2015, and phase 2 (KOSMOS-2) from 2015 to 2017. More information is available at https://www.uni-rostock.de/weiterbildung/projekte/projekt-kosmos/
Availability of data and materials
Data and material related to this investigation are available at the Institute of Computer Science, from the research groups of Prof. Martens and Prof. Sandkuhl.
There are no competing interests known to the corresponding author.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Martens, A., Sandkuhl, K., Lantow, B. et al. An evaluation approach for smart support of teaching and learning processes. Smart Learn. Environ. 6, 2 (2019). https://doi.org/10.1186/s40561-018-0081-y
Keywords: Lifelong learning