
Measuring learning that is hard to measure: using the PECSL model to evaluate implicit smart learning

Abstract

This paper explores potential ways of evaluating the implicit learning that may be present in autonomous smart learning activities and environments, reflecting on prior phenomenographic research into smart learning activities positioned as local journeys in urban connected public spaces. Implicit learning is considered in terms of participants' intrinsic motivation, sense of value and richer engagement, demonstrated as levels of experience complexity that are interpreted as levels of implicit learning. The paper reflects on ideas for evaluating implicit smart learning through planning for experience complexity in the context of a pedagogical model developed from the research, the Pedagogy of Experience Complexity for Smart Learning (PECSL). By supplementing this model with further conceptual mechanisms that describe experience complexity as surface to deep learning alongside cognitive domain taxonomy equivalences, implicit smart learning might be evaluated in broad, flexible ways to support the design of more effective and engaging activities. Examples are outlined placing emphasis on learner generated content, learner-directed creative learning and supporting dialogue and reflection, attempting to illustrate how implicit learning might manifest and be evaluated.

Introduction

This paper is about implicit smart learning: it reflects on conceptual mechanisms by which to describe the complexity of possible learning within an activity and, if need be, to evaluate it. These reflections arise from the implications of research into smart learning journey activities, which used the methodology of phenomenography to investigate how participants experience taking part.

The reader should note this paper does not report findings of research; rather, these are further reflections arising from the prior research. It is important to clarify that neither the discussion nor the research referred to in this paper is concerned with assessing learning related to formal specified learning outcomes. The activities that were researched were autonomous general topic activities in heritage, culture and politics, and participants took part in their own time, without any formal learning aims or assessment being involved. Learning was considered from the perspective of the learners themselves (Badie, 2018; Roisko, 2007), within a summary definition from Liu et al. (2017b) of learning to learn, to do and for self-realisation.

In order to set the scene, the introductory sections of this paper attempt to guide the reader through four related aspects: implicit smart learning, implicit learning in general, what phenomenographic experience variation is, and how this contributes to further reflection on possible ways to describe and evaluate implicit smart learning.

Implicit learning in smart learning cities

The focus of this paper considers smart learning as learning that may be possible in autonomous, citizen-orientated, 'smart technology' enhanced activities situated in public space real-world locations. Much of this learning might be that which is considered as implicit—suggested though not directly expressed (Implicit, 2020). Placing context distinctly within smart learning literature, Liu et al. provide helpful precedent by defining explicit and implicit smart learning in 'Characteristics and Framework of Smart Learning' (2017a, pp. 38–43). They declare that a "smart learning system includes two aspects: school smart learning system and social smart learning system" (p. 38), defining explicit learning as 'what people normally think of as formal learning' that often happens in school, and implicit learning as social learning, often happening 'in an environment of community learning, enterprise (work) learning, and learning in public places'. They additionally note that the implicit learning happening at home or in social situations may not always be thought of as learning (2017a, p. 38). Liu et al.'s descriptions offer a relevant foundation for considering the learning referred to in this paper, and the difference between explicit and implicit learning within contexts of smart learning environments.

Social learning activities in public places are the focus of this paper, and have ‘manifold opportunities for learning’ within the concepts, ideas, topics, technologies and interactions of smart learning environments, further described in some depth in Lister (2021b). Activities “can be scoped and designed for a wide range of purposes, and learning can play a part either as an explicit aim, or as an implicit or even covert goal (Lister, 2020)”, (Lister, 2021b, p. 2). For example, some activities would have learning as a specified purpose, while other activities may not be explicitly about learning, but rather about citizen engagement, feedback gathering or creative discovery and content creation (for fun or creative pursuit). Some gamified activities may have learning as a hidden or covert aim, similar to that indicated in Fang (2013), or work from Gee (2007) or Prensky (2003), cited in Hense and Mandl (2014). In these kinds of smart learning activities, learning is present in more general terms of expanding communication skills, self-agency, broadening cultural understanding, and advancing digital skills of participant learners (Lister, 2020, 2021b). This is similar to how Hense and Mandl summarise Gee or Prensky’s argument in relation to digital learning games (2014, p. 182).

Quoting the 16th party congress of the Chinese Communist Party [1] report, Liu et al. (2017a) remark "to 'form a learning society in which all people learn and pursue life-long learning, which in turn facilitates people's all-round development … it is an important path to realize the visualization of learning society… to highlight the role of implicit learning…" (2017a, p. 43). This shared aim with the UNESCO Sustainable Development Goal 4 (SDG4 [2]) echoes an understanding of smart cities that emphasises the importance of enhancing citizen quality of life (e.g. De Lange & De Waal, 2017) and civic learning (Sacré & De Visscher, 2017). Carroll et al. highlight the challenge of "enhancing awareness, engagement, and interaction pertaining to individual and collective human experiences, meaning making, activity, intentions, and values" (2017, p. 2). These challenges relate well to ideas about smart learning activities in public spaces, with concepts such as writing the smart city (Jordan, 2015; Sacré et al., 2017), ambient literature (Koehler, 2013), psychogeography concepts for smart learning (after 'Dérive', Debord, 1958), e.g. Pinder (2005), Kazil and Hou je Bek (2010) and Hou je Bek (2002), or community arts projects utilising smart technology such as Wood Street Walls [3] (Lister, 2021b). These activities embrace Sacré and De Visscher's (2017) "cultural understanding of civic learning" that focuses on "citizens' assemblage of the social, the material and the symbolic, as a kind of wayfinding in society", and complement Vinod Kumar's "paradigms of well-being" for smart learning environments, of relationships with self and community, understanding self more clearly, seeing ourselves in others, and self-realisation (2020, p. 43). The author's own research (e.g. Lister, 2021a) into smart learning activities conceptualised as real-world journeys sits within these kinds of activities, considered as potentially enhancing citizen community life and as a way to understand more about the motivation and engagement of learner-participants (Lister, 2020, 2021c). Dron's "complex conversational process that can and usually does lead to much that is of value beyond what is planned" (Dron, 2018, p. 3) and Siemens' ideas about designing environments in which motivated learners acquire what they need (2006, p. 119) closely align with concepts of social implicit smart learning.

Social implicit learning in smart learning environments is not reliant on large-scale data-driven technical infrastructures, embedded beacons, sensors or personalised notifications (e.g. in Uskov et al., 2016, cited in Badie, 2018). It does not require complex (some would say intrusive) personal learning ontologies (Rezgui et al., 2014); it is simply available in anyone's phone at any time. The availability of WiFi and smartphone apps that provide context-aware content delivery via geo-tagged coordinates or augmented reality triggers, that create or provide maps, or that are used in other smart ways such as those indicated above could all be described as forming smart learning environments. This is an interpretation of the 'smart enough cities' of Green (2019), where individual smartphone adaptive technology conjures potential ad-hoc smart learning experiences as and when a person requires and interacts.

Implicit learning

In writing about virtual reality implicit learning, Slater (2017) provides helpful historic context, citing Reber (1989) to define implicit learning as "the process whereby individuals learn complex information unconsciously and gain abstract knowledge" (Slater, 2017, p. 24). Reber cites his own much earlier work, stating "(s)ome two decades ago the term implicit learning was first used to characterize how one develops intuitive knowledge about the underlying structure of a complex stimulus environment…" and that "… implicit acquisition of complex knowledge is taken as a foundation process for the development of abstract, tacit knowledge of all kinds" (1989, p. 219). Kaufman et al. (2010), citing a body of other work, assert that implicit learning "takes place on a daily basis without our intent or conscious awareness", and "plays a significant role in structuring our skills, perceptions, and behavior" (2010, p. 321). Eraut (2000) compares types of learning utilising a 'Time of Stimulus' typology of informal learning, whose categories of 'Past episode(s)', 'Current experience' and 'Future behaviour' define implicit learning respectively as 'implicit linkage of past memories with current experience', 'selection from experience enters the memory', and 'unconscious effect of previous experiences' (Eraut, 2000, p. 13). Slater (2017) additionally refers to Seger (1994) for further defining characteristics of implicit learning (2017, p. 25, from Seger, 1994, p. 164), summarising these later as "what is learned is non-conscious, it is complex, it is not as a result of hypothesis testing, and it is not based on episodic memory" (p. 29). Though Slater's (2017) work differs from smart learning activities in that it discusses virtual reality embodiment, there are similarities in the sense that "(t)here is clearly no hypothesis testing or deliberate attempt to learn something based on episodic memory. People simply have an experience" (2017, p. 30). It is this experience that is of most interest to the work discussed in this paper and others by the author, to understand more about how self-reported participant experiences can contribute to planning for learning in smart learning environments.

In the social smart learning context of this paper, the challenge of defining implicit learning is that the term 'implicit' can describe many kinds of incidental learning occurring in a wide range of learning contexts: for example, the unplanned but related aspects of learning in formal education, as well as the haphazard citizen learning of everyday life. Implicit learning can be consciously intentional, or can arise from much less consciously aware, unintended choices that catch the attention or motivated interest of the learner. Intertwined variations of implicit learning can be the billions of learning journeys that start every day with a Google search (Dron, 2018) or the implicit learning that "takes place on a daily basis without our intent or conscious awareness" (Kaufman et al., 2010).

Implicit learning and experience variation

It is useful to further explore implicit learning in smart learning environments in relation to participant experience, and to experience variation as understood in the context of the methodology of phenomenography. Phenomenography is a qualitative methodology that investigates learner (participant) experience, and uses responsive emergent interviews to discover a range of possible ways of experiencing a phenomenon. Phenomenography analyses interviews at collective level, though individual context is retained, to establish what is known as an outcome space consisting of categories of description (CoD) for how phenomena are experienced (e.g. Marton & Pong, 2005; Reed, 2006, p. 8). Categories of description attempt to discover the range of relational and often hierarchical categories of experience, in commonality and then in variation within categories. Initially a methodology specifically orientated toward formal classroom-type learning (Marton & Säljö, 1976, in Svensson, 1997), it has expanded to include other fields, such as learning with technology (Souleles et al., 2014) and user experience of digital applications (Kaapu & Tiainen, 2010). Phenomenography is a non-dualist (Reed, 2006), second order analysis perspective (Sjöström & Dahlgren, 2002, p. 340), attempting to see from the perspective of the interviewees themselves by only looking at manifest transcript content, rather than making any latent assumptions about why interviewees say things, described by Bowden as "if it is not in the transcript, then it is not evidence" (Bowden, 2005, p. 15). Phenomenography is novel within smart learning research, though it has been referred to in relevant contexts, for example in Badie (2018) for understanding conceptions of learning (e.g. Säljö, 1979a, 1979b), and "to put ourselves into the learners' shoes and observe the phenomenon of 'learning' from their perspective" (Badie, 2018, p. 394).

From this self-reported participant perspective, implicit learning can perhaps benefit from phenomenographic approaches to investigating it, because implicit learning is that which is (sometimes unconsciously) selected by the learner themselves, not based on an intended set of learning outcomes. This can be defined as a learner's 'object of vital interest', as described by Greeno and Engeström (2014) in response to the phenomenographic concept of three aspects of an 'object of learning'. These are (i) the intended object: what should be learned, (ii) the enacted object: what is possible to learn, and (iii) the lived object: what is actually learned (Marton et al., 2004). Greeno and Engeström asserted that the "intended object is depicted as a monopoly of the instructor. However, learners also have intentions…" (2014, p. 133), emphasising the issue of learner agency. The intended object of learning—a fact, skill or aspect of knowledge that can be further applied—would under usual circumstances be assessed with formal assessment criteria. Phenomenographers subsequently developed Variation Theory (VT) to support pedagogical approaches for this kind of intended learning, which assumes there are 'critical aspects' of an intended object of learning that must be discerned so as to learn and apply it correctly (Orgill, 2012). However, this is not relevant in the context of this paper, as implicit smart learning is motivated by learners' objects of vital interest, and is the domain of the learner (activity participant), not the tutor. Reflecting on ways to evaluate this kind of implicit smart learning, to perhaps enhance the design of more engaging and effective activities, is therefore worth further consideration.

Learning evaluation frameworks are plentiful, with an overload of possible choices available in the literature. Social constructivist concepts dominate (Kivunja, 2014), rooted in sequential instructional design, explicit learning outcomes, and 'reward and punishment' (e.g. Bruner, 1966). Kivunja makes a convincing argument for the need for new learning paradigms based in twenty-first century skills, listing amongst others communications; creativity and innovation; initiative and self-direction; flexibility and adaptability; and social and cross-cultural interaction. These skills are all part of what this paper touches upon in ideas about evaluating flexible learning based in the concept of phenomenographic experience variation, as it might relate to smart social implicit learning. The author sets out a number of learning evaluation possibilities in relation to phenomenography, which assumes an interpretive non-dualist paradigm (e.g. Reed, 2006) of the learner's person-world relationship in the context of an intersubjective co-constitutive lifeworld (Sandberg, 2005, p. 56). This attempts to highlight the potential significance of the learner's lifeworld of lived experience, and its impact on learning in the ad-hoc, messy setting of citizen smart social learning activities such as those described in this paper. What learners want to learn, often without thinking about it as learning, is what is under discussion. The ideas and possible choices are offered from this perspective, attempting to plan for and evaluate implicit learning utilising the range of experience complexity that a phenomenographic study can uncover.

Research context

This paper discusses previously published work about research carried out into smart learning activities conceptualised as journeys (Lister, 2021a, 2021b), and what learners think they might be learning (Lister, 2021c). These prior publications reflect on the key pedagogical conclusions and considerations for planning and designing ad-hoc smart learning activities, with some early commentary on concepts for evaluating this learning. For the sake of space, and to avoid repetition, a brief summary of the research is offered here to provide the reader with context for the reflections in this paper. This paper does not attempt to report 'findings' per se, but rather to follow up on early thoughts that arose from the research itself.

Provided here for convenience are Table 1 (Lister, 2021a) and Table 2 (Lister, 2021b). Table 1 shows four categories of description (CoD) with four levels of complexity for 'experiencing a smart learning journey', derived from participant interviews subsequently analysed with a phenomenographic structure of awareness approach (Cope, 2004), further discussed in Lister (2021a, 2021d). These experience categories and levels were then summarised as experience 'relevance structures' and informed related pedagogical approaches, shown in Table 2 (Lister, 2021b). Further acknowledging the hinterland of unseen factors that mediate participation in such activities, and the theoretical backdrop in which they are situated, a four-tier model of pedagogical considerations, 'The Pedagogy of Experience Complexity for Smart Learning' (PECSL), was developed (Lister, 2021a).

Table 1 The experience complexity of a smart learning journey (Lister, 2021a)
Table 2 Summary of experience relevance structures and related pedagogies in a smart learning journey (Lister, 2021b)

Planning for experience complexity

The PECSL is considered as a thinking and planning model of design considerations, inspired by user experience and user-centred design, for example in Garrett (2010). Applying this model to the design of smart learning activities would be an iterative, cyclical process (after Gibbons, 2016), seeking to plan for experience complexity as a way of considering what learner-participants might focus on with intrinsic motivation, for value and richer engagement (Lister, 2021b, 2021c). By interpreting experience variation and complexity as experience relevance structures, 'good fit' pedagogical approaches could be selected to acknowledge each CoD in learning design. Planning for experience variation and its related pedagogical relationships is further elaborated in Lister (2021b, pp. 5–7), using descriptive examples for different kinds of activity concepts similar to those indicated in the introduction to this paper.

Evaluating implicit smart learning with the PECSL model

Three mechanisms of implicit learning evaluation are outlined here, to potentially work alongside the PECSL model. These conceptual and cognitive domain equivalence mechanisms offer possible ways to describe levels and types of learning, and may assist in developing criteria to evaluate implicit smart learning according to the relevance and nature of an activity.

  • Descriptive alignment for surface to deep learning (after Marton & Säljö, 1976, 2005; Säljö, 1979b);

  • Hounsell’s (1984, 2005) ‘arrangement, viewpoint, argument’;

  • Bloom’s Revised (Anderson & Krathwohl, 2001) and SOLO (Biggs & Collis, 1982) learning taxonomies.

Figure 1 shows an overview of pedagogical alignment for PECSL experience CoD, levels of complexity and related pedagogies, along with concepts of surface to deep learning, supplemented by ‘arrangement, viewpoint and argument’ terms as ways to think about levels of learning and experience complexity. Bloom’s Revised and SOLO equivalences offer further ways to consider experience complexity and surface to deep learning using familiar learning taxonomies. ‘Pedagogical relevance’ factors are added to attempt to illustrate how each CoD and associated evaluation might relate to learning design.

Fig. 1 PECSL 'Grid' showing experience complexity alignment with learning evaluation descriptors and learning taxonomy equivalences

To assist in clarification of CoD experience complexity and evaluation equivalences, Table 3 combines Table 1 with the descriptive range of surface/deep learning including arrangement, viewpoint and argument terms, and Bloom's Revised & SOLO taxonomies.

Table 3 PECSL experience complexity levels, with learning evaluation descriptors, Bloom's Revised & SOLO taxonomies
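
To illustrate how this alignment might be operationalised in activity design tooling, a minimal sketch follows, representing complexity levels as a lookup from level to evaluation descriptors. The descriptor wordings, Hounsell terms and taxonomy equivalences used here are illustrative assumptions for one hypothetical category of description, not the published grid of Table 3 or Fig. 1.

```python
# Minimal sketch: a PECSL-style alignment grid as a lookup structure.
# All descriptor wordings and taxonomy equivalences below are illustrative
# placeholders, not the published PECSL grid.
from dataclasses import dataclass

@dataclass
class ComplexityLevel:
    level: int            # experience complexity level (1 = simplest, 4 = deepest)
    depth: str            # surface-to-deep learning descriptor
    hounsell: str         # arrangement / viewpoint / argument term
    blooms_revised: str   # assumed Bloom's Revised equivalence
    solo: str             # assumed SOLO equivalence

ALIGNMENT = [
    ComplexityLevel(1, "surface", "arrangement", "remember / understand", "unistructural"),
    ComplexityLevel(2, "moving beyond surface", "arrangement", "apply", "multistructural"),
    ComplexityLevel(3, "deepening", "viewpoint", "analyse / evaluate", "relational"),
    ComplexityLevel(4, "deep", "argument", "create", "extended abstract"),
]

def describe(level: int) -> str:
    """Return a human-readable evaluation descriptor for a complexity level."""
    row = ALIGNMENT[level - 1]
    return (f"Level {row.level}: {row.depth} learning; Hounsell: {row.hounsell}; "
            f"Bloom's Revised: {row.blooms_revised}; SOLO: {row.solo}")

print(describe(4))
```

In practice, one such lookup might be maintained per category of description, so that an evaluator tagging content or discussion could retrieve consistent descriptor language.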

Reflections on evaluating implicit smart learning

The following sections expand further on these concepts and potential mechanisms of evaluation for implicit learning in autonomous smart environment activities. A short discussion is provided of surface to deep learning, Hounsell's 'arrangement, viewpoint and argument' conceptual terms, and the relevance of the additional equivalence of Bloom's Revised and SOLO learning taxonomies. This attempts to offer reasoning for why these concepts are relevant and how they relate to the prior research context, complementing planning for experience variation and complexity as indicated by the PECSL model.

Reasons for wishing to evaluate this type of implicit learning are many: among them may be to provide evidence of user-learner experience for the funding and sponsorship of activities, to establish what participants find of value and engaging, to provide facilitators with feedback on what participants did and how involved they were, or to give participants themselves an informal record of what they contributed. It is anticipated that the approaches reflected upon are flexible, transferable and applicable in ways the reader may feel appropriate (Collier-Reed et al., 2009, p. 4, citing Lincoln & Guba, 1985, p. 298), described more fully in Lister (2021b, pp. 15–16). In this way, the reflections offer possible ideas and solutions, not a one-size-fits-all declared success model.

Deep and surface learning

Badie (2018) argues that "in order to propose more analytic descriptions of smart learning, we need to put ourselves into the learners' shoes and observe the phenomenon of 'learning' from their perspective", referring to Säljö's "seminal studies on learners' conceptions of 'learning'" (Säljö, 1979a, in Badie, 2018, p. 394). Though Säljö's (1979a) work is unfortunately unavailable to this author, other work by Säljö (1979b), Marton et al. (1993) and Richardson (1999) cites the same or similar studies, reliably describing the hierarchical conceptions of learning that the study defined. In broad terms, the conceptions of learning begin at a simple acquisition of facts that can be memorised, extending to application of facts and procedures, and developing towards abstraction and further interpretation. Focusing on adult learners, Badie 'sketches' on Säljö's conceptions to re-imagine them, noting that the model could be reinterpreted as layered, with inner/deeper layers supported by outer/shallower layers. Badie's six conceptions are: (1) Knowing more; (2) Keeping in mind; (3) Selecting; (4) Meaning Constructing; (5) Interpreting the Reality; (6) Self Realising (Badie, 2018, p. 394). Webb (1997) draws attention to ideas about surface/deep learning that pre-date the phenomenographic surface/deep 'metaphor', citing Bloom's original taxonomy (1956), Gagné (1970) and Pask and Scott (1972), amongst others. However, Webb notes that the surface/deep metaphor differs as it develops "the importance of 'context' in opposition to the 'innateness' of a cognitive psychology steeped in the tradition of individual difference … towards the idea that the learning environment, the curriculum and in particular the assessment regime, informed the approach to learning which individuals would adopt" (Webb, 1997, pp. 196–197). In relation to smart learning and autonomous learning environments, context is argued here to be highly significant, impacting many aspects of the potential experience complexity and depth of learning that might be possible.

It is useful to note that Säljö's seminal work on conceptions of surface/deep learning (noting also Säljö's work with Marton, 1976, 2005) has continued to be featured in numerous pedagogical discourses (e.g. Biggs & Collis, 1982; Schmeck, 1988 (various); Biggs, 1995; Selwyn, 2011). Within the PECSL model these qualitative differences of learning are highlighted through expressions of gaining value and motivation for participation in a smart learning activity, resulting in varying outcomes for depth and quality of understanding (e.g. Lister, 2021c, pp. 238–239).

Arrangement, viewpoint and argument

Within a context of phenomenography, Hounsell (1984, 2005) developed 'arrangement, viewpoint and argument' as ways to describe levels of complexity in essay writing. Hounsell's terms describe complexity of understanding, shown in Table 4, and offer an alternative, complementary terminology that can be adapted and modified to describe the complexity of experience in a smart learning activity (SLA), described in Table 5.

Table 4 Arrangement, viewpoint, argument, from Hounsell (2005, pp. 111, 113)
Table 5 Adapting Hounsell’s arrangement, viewpoint, argument for smart learning activities (SLA)

Hounsell's conceptions included 'sub-component' terminology (1984, p. 21) providing further detail, described as data, organisation and interpretation. Adapting these likewise for application in SLAs, we can see a potentially useful correlation between Hounsell's initial definitions and a further interpretation for the SLA. Table 6 combines both the original and the SLA interpretation.

Table 6 Hounsell's definitions of data, organisation and interpretation sub-components, adapted for a SLA

Learning taxonomies

Reasoning for utilising an equivalency of Bloom's Revised and/or SOLO learning taxonomies is based in practical and research literature contexts. Tutors are familiar with what Bloom's and SOLO are; they are well known in learning communities; and use of these taxonomies can aid the communicability of evaluation methods. Additionally, Bloom's and SOLO have precedent in both phenomenographic contexts (Biggs, 1995; Marton & Svensson, 1979; Newton & Martin, 2013; Taylor & Cope, 2007) and smart learning contexts (Badie, 2018; Lorenzo & Gallon, 2019; Nikolov et al., 2016).

Bloom's Revised taxonomy

Anderson and Krathwohl’s (2001) revised version of Bloom’s taxonomy articulates the cognitive process dimension that has become widely known and applied in educational discourses and practice. Krathwohl (2002) notes that Bloom saw his original taxonomy as “more than a measurement tool”, listing amongst other aspects, a “panorama of the range of educational possibilities against which […] any particular educational course or curriculum could be contrasted” (2002, p. 213). That is, an articulation of the range of depth or complexity that might surround an aspect of learning, that might then be ‘contrasted’ with an equivalence elsewhere. This mechanism of equivalence is pertinent to smart learning in real-world environments, and is employed in the PECSL in this manner. A relevant similar example is found in the DigComp 2.1 citizen digital skills framework (Carretero et al., 2017), utilising Bloom’s Revised taxonomy to provide broad cognitive domain equivalence in the range of skills and competencies related to digital literacy.

Structure of the Observed Learning Outcome (SOLO) taxonomy

Notably, Biggs (1995) refers to "the techniques of phenomenography … a hierarchy of conceptions that can be used to form assessment targets" (p. 6). He acknowledges a requirement to define learning in terms of increasing complexity of structure, abstractness, originality and other factors, referring to the Structure of the Observed Learning Outcome (SOLO) taxonomy (Biggs, 1995, p. 6). The SOLO taxonomy (Biggs & Collis, 1982) proposes that "learning quality depends on […] features intrinsic to the learner, such as his motivation, his developmental stage, his prior knowledge of the area, and so forth" (p. 17). SOLO complements Bloom's Revised in that its category descriptors can be interpreted broadly and can adapt in various ways appropriate to the design and nature of an activity. The two taxonomies offer alternatives, or can be combined in activity design strategies, and are useful in the context of a hierarchy of experience complexity equivalence, acting as another flexible model of interpretation for implicit learning in a given activity.

O'Riordan et al. (2016) provide succinct and relevant interpretations of the Bloom's Revised and SOLO levels of learning, shown in Table 7, that might be utilised as a guide when applying them as part of working with the PECSL and the related evaluation concepts shown in Table 3.

Table 7 Combined taxonomies with descriptors from O’Riordan et al. (2016)

Evaluating implicit learning as value and engagement

Multiple aspects of activity value and engagement can be interpreted to evaluate implicit smart learning in the context of Liu et al.'s (2017a) social, autonomous, informal smart learning activities set in public spaces: for example, depth of content in relation to place and value, usefulness of discussion and reflection for personal memory sharing, discussions about cultural background in a group, and so on. Utilisation of Bloom's Revised or SOLO can offer potential for enabling meaningful contrasting and equivalence, to combine with more formal learning if of relevance to the nature and purpose of an activity.

Subsequent sections consider evaluating implicit learner generated content, and talking, discussion and reflection, to envisage how evaluation of implicit smart learning might work in pragmatic terms. Evaluating implicit smart learning need not take the form of evaluating each individual, but may rather take into account groups of participants involved in such activities, to gauge their sense of value and engagement from taking part. For example, participant groups might be involved in evaluating the quality of created content, different features of an activity, or the activity as a whole, utilising aspects of what is reflected on in the context of PECSL model ideas. This might help to indicate the value that participants gain from citizen activities for community engagement and development of self, such as Vinod Kumar (2020) and others (Sacré & De Visscher, 2017; Carroll et al., 2017) describe.

Emphasis on content

Learner generated content (LGC) (Pérez-Mateo et al., 2011) may often form part of explicit (directed) learning and be assessed against specified criteria. However, here I argue that LGC offers a wealth of opportunity to evaluate implicit learning, in terms of unspecified LGC being created and shared by participants in smart learning activities. Though some prompts might be offered, the nature of the content is not specified, encouraging active participation and engagement in self-directed ways.

In the author's research referred to previously, participants were invited (it was not mandatory) to create content such as photographs, written notes or comments in response to participating in the smart learning journey activity. Figures 2 and 3, provided below, show some of the content uploaded to an online group area. In the context of PECSL experience relevance structures, these images illustrate how image content might be evaluated as experience variation complexity, which could then be further supplemented by associated learning evaluation mechanisms. By relating surface/deep learning equivalence to the experience complexity being shown, it becomes possible to evaluate the quality of this informal content. Learning taxonomies can additionally be used to enable equivalence with other factors, or to combine with more formal learning if relevant.

Fig. 2 LGC images reflecting Category C, Level 2 and 3, Category D, Level 3

Fig. 3 LGC images reflecting Category B and C, Levels 3 and 4, then Category C and D, Level 4

Experimenting with this concept, the tagging technique was applied to the content and images uploaded by the participants in the research, using letter and number combinations to indicate categories and levels of experience complexity. The reader may find it helpful to refer to Tables 1 and 3 to follow what is outlined below (a minimal code sketch of such tagging follows the list):

  • In Fig. 2 we see experience variation category C (‘Being There’) level 2 and 3, focusing on locations in relation to tasks, and D (‘Knowledge & Place as Value’) level 3, showing facts and knowledge associated with locations.

  • In Fig. 3, the photographs on the left show category B (‘Discussing’) and category C level 3—participants are being sociable and showing themselves digitally interacting with the locations.

  • The photographs on the right of Fig. 3 show category C and D level 4, the deepest level of complexity, because learners are being very creative, imitating statues with their own poses and taking photographs of them, with a creative sense of humour (similar to Wegerif, 2022, p. 9). This demonstrates deeper, enriched engagement, attaching value to their interactions and creativity.
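
As a pragmatic sketch of how such tagging might be recorded and aggregated across a participant group, the toy example below counts hand-assigned category/level tags and derives a simple indicator of deeper engagement. The file names and tag format are assumptions; assigning the tags remains a human, interpretive act.

```python
# Toy sketch: aggregate hand-assigned (category, level) tags across a group's
# uploaded LGC items. Item names and tag assignments are invented examples
# mirroring the codes discussed for Figs. 2 and 3.
from collections import Counter

lgc_tags = {
    "photo_001.jpg": [("C", 2), ("D", 3)],  # locations in relation to tasks, plus facts
    "photo_002.jpg": [("B", 3), ("C", 3)],  # sociable digital interaction with locations
    "photo_003.jpg": [("C", 4), ("D", 4)],  # creative statue imitation, deepest level
}

# Count how often each (category, level) tag appears across the group.
tag_counts = Counter(tag for tags in lgc_tags.values() for tag in tags)

# Simple indicator: the share of all tags at the deepest complexity level.
total = sum(tag_counts.values())
deep_share = sum(n for (cat, lvl), n in tag_counts.items() if lvl == 4) / total

print(tag_counts)
print(f"Share of tags at level 4 (deep engagement): {deep_share:.0%}")
```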

This concept could potentially be exploited further, using image recognition machine learning techniques to contribute to the interpretation of content for demonstrated experience variation and levels of complexity. Early ideas were presented by the author as part of the AI in Education series at the University of Oxford in November 2019 (Lister, 2019).
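
A speculative sketch of such a technique is given below, assuming a set of LGC photographs already hand-tagged with category/level codes (as in Figs. 2 and 3) is used to fine-tune a pretrained image classifier that could then suggest tags for new uploads. The folder layout, tag set and single training pass are illustrative assumptions, not a method reported in the research.

```python
# Speculative sketch: fine-tune a pretrained image classifier to suggest
# PECSL category/level tags for uploaded LGC photos. The folder layout,
# tag codes and single training pass are illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: lgc_images/<tag>/*.jpg, where <tag> is a category/level
# code such as "C2" or "D4" previously assigned by human evaluators.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("lgc_images", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Replace the classifier head so outputs match the number of tag codes found.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass is enough for this sketch
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

Any such automated suggestion would remain a first pass only, with the interpretive judgement of category and level staying with human evaluators.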

Learner-directed creative activities

Creating knowledge content can become more sophisticated as part of an implicit learning process for richer engagement, both in the scope of types of content, and in how participant-learners approach what they create, find, collect or critically review. Many opportunities exist for searching, locating, defining, creating, mapping and reflecting on knowledge content hypersituated (Moreira et al., 2021) in the local spaces of connected urban environments. Within this potentially rich exploratory terrain, activities such as those indicated in the introduction of this paper (further discussed in Lister, 2021b, p. 13) indicate possibilities for learner-directed content in the most open sense. Designing activities such as these using the PECSL model to plan for experience complexity can support evaluation of what learner participants may choose to create, such as the features, quality and relevance of creativity or contributions created by different groups of participants. The technological mediations of interacting with an environment in a smart learning activity may also impact experience and sense of value for participants, perhaps influencing types or quality of creativity (discussed further in Lister, 2022a).

Emphasis on dialogue and further reflection

Thinking about the PECSL experience relevance structure of ‘Discussing’, activity design approaches could accommodate discussion topics to support exploration and sharing, probing for deeper reflection and engagement, and further follow up if in relation to community activities. This supports Wegerif (2022), who emphasises the development of dialogic self and community, to ‘talk together better’, of openness and self-reflection, expanding self-identity and creating communities of mutual trust (2022, pp. 7–9).

In activities that may often be autonomous and voluntary, discussion may occupy an important central role amongst groups of participants, with simpler topics of discussion including routes, locations and digital apps if they are being used. Topics expand in complexity to include location- or theme-related opinions, memories, reflections, and wider relevance between aspects of the activity and participant experience and life, demonstrating a deeper enriched participation and sense of value in the discussion itself as well as in its content. These kinds of conversations were recalled in interviews with participants in the previously described research, and perhaps indicate that the ways participants talk to each other highlight the value they may be experiencing in their activity participation.

Discussion topics in autonomous informal smart learning can be difficult to capture and evaluate for quality and value; however, utilising more established reflection techniques may offer solutions. Real-time note making, comments online or post-activity discussion sessions can provide content and opportunity for evaluation through experience variation and complexity, interpreted as surface/deep learning equivalence. Reflective community discussion in civic initiatives (e.g. techniques in Lin et al., 2011, p. 55), or the multimedia project e-portfolios of Paterson (2019, p. 401), might be utilised as informal aspects of activity process or design, depending on the nature and purpose of the activity.
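
As one rough illustration of how captured notes or comments might be pre-sorted before qualitative review, the toy coder below flags notes containing indicator phrases for surface or deeper engagement. The indicator phrases are invented placeholders; any substantive coding of surface/deep equivalence would remain a human judgement.

```python
# Toy first-pass coder: flag reflection notes containing (invented) indicator
# phrases for surface vs. deeper engagement, as a pre-sort before human review.
SURFACE_INDICATORS = ["found the location", "used the app", "followed the route"]
DEEP_INDICATORS = ["reminded me of", "we disagreed about", "made me think about"]

def first_pass_code(note: str) -> str:
    """Return a rough 'surface' / 'deep' / 'uncoded' label for one note."""
    text = note.lower()
    if any(phrase in text for phrase in DEEP_INDICATORS):
        return "deep"       # candidate for deeper reflection; verify by hand
    if any(phrase in text for phrase in SURFACE_INDICATORS):
        return "surface"
    return "uncoded"

notes = [
    "The statue reminded me of a story my grandmother told me.",
    "We used the app to find the next location on the route.",
]
print([first_pass_code(n) for n in notes])
```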

Post-activity focus groups using emergent reflective discussion can uncover much about what participants have learned during their participation. The author has employed this technique in class discussion sessions after students participated in smart learning journey activities as part of a curriculum of study. Student group discussion unpacked what happened and what students thought they had learned, with students reporting that they had learned more by discussing it with each other afterwards than they had been aware of before the discussion took place. They produced sets of group notes, offering further opportunity for evaluation of their learning. These sessions are explored in more detail in Lister (2022b). As stated previously, participant/learner reflection and evaluation of an activity could include learners evaluating their own learning as part of a designed process of measurement for activity effectiveness. Attempting to capture this kind of 'talking' and engaging participants in their own learning process may lead to effective ways of evaluating the quality of implicit learning, and alert the consciousness of a participant learner towards their own act of learning itself (Marton & Svensson, 1979, pp. 473–474).

Conclusions

Implicit smart learning does not specify intended learning objects, processes or impact goals; what participants may do, say or create for value and richer engagement is in their own hands. It is this kind of learning—often not even thought of as learning (Liu et al., 2017a)—that is the focus of this paper. An attempt has been made to reflect on how the author's prior research study outcomes might impact evaluation of implicit smart learning through understanding the possible range of participant experience in a smart learning journey activity (Lister, 2021a, 2021b). The study developed a 'Pedagogy of Experience Complexity for Smart Learning' (PECSL) to support planning and design for participant experience complexity in such activities, and this led to reflections on potential related evaluation concepts and mechanisms indicated by the PECSL model. The approaches discussed here are anticipated as flexible, transferable and applicable in ways the reader may feel appropriate to evaluate implicit learning in their own situations, as indicated by Lincoln and Guba (1985) in Collier-Reed et al. (2009), and described more fully in Lister (2021b, pp. 15–16). Acknowledging the many other systems of learning evaluation that exist, both for assessing explicit formal learning and for impact-based change, for example Outcome Harvesting (Wilson-Grau, 2015) or the Qualitative Impact Assessment Protocol (Remnant & Avard, 2016), it is important to reiterate that the ideas proposed in this paper are a contribution to discussion rather than a claim to a superior method for implicit learning evaluation. Perhaps understanding more about the variation and complexity of experiencing implicit smart learning might shed light on interpreting learning in terms of levels and depth of value, motivation and engagement for participant learners. This may in turn offer useful considerations for planning to evaluate the kind of social smart learning that Liu et al. (2017a) describe.

Availability of data and materials

Not applicable.

Notes

  1. The 16th party congress of the Chinese Communist Party: https://en.wikipedia.org/wiki/16th_National_Congress_of_the_Chinese_Communist_Party.

  2. UNESCO Sustainable Development Goal Four (https://sdgs.un.org/goals/goal4).

  3. Wood Street Walls using What3Words app https://youtu.be/O-lhbhfibDI.

Abbreviations

CoD: Category of description

LGC: Learner generated content

PECSL: Pedagogy of Experience Complexity for Smart Learning

SLA: Smart learning activity

References

  • Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Addison Wesley Longman.

  • Badie, F. (2018). Knowledge building conceptualisation within smart constructivist learning systems. In V. L. Uskov, J. P. Bakken, R. J. Howlett, & L. C. Jain (Eds.), Smart universities: Concepts, systems, and technologies (pp. 385–419). Springer. https://doi.org/10.1007/978-3-319-59454-5_13

  • Biggs, J. (1995). Assessing for learning: Some dimensions underlying new approaches to educational assessment. The Alberta Journal of Educational Research, 41(1), 1–17.

  • Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning-the SOLO taxonomy (1st ed.). Academic Press.

  • Bloom, B. (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. Longmans.

  • Bowden, J. (2005). Reflections on the phenomenographic team research project. In J. Bowden & P. Green (Eds.), Doing developmental phenomenography (pp. 11–31). RMIT University Press.

  • Bruner, J. S. (1966). Toward a theory of instruction. Belkapp Press.

  • Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for citizens with eight proficiency levels and examples of use. European Commission. https://doi.org/10.2760/38842

  • Carroll, J. M., Shih, P. C., Kropczynski, J., Cai, G., Rosson, M. B., & Han, K. (2017). The internet of places at community-scale: Design scenarios for hyperlocal neighborhood. In S. Konomi & G. Roussos (Eds.), Enriching urban spaces with ambient computing, the Internet of Things, and smart city design (pp. 1–24). IGI Global. https://doi.org/10.4018/978-1-5225-0827-4.ch001

  • Collier-Reed, B. I., Ingerman, A., & Berglund, A. (2009). Reflections on trustworthiness in phenomenographic research: Recognising purpose, context and change in the process of research. Education as Change, 13(2), 339–355. https://doi.org/10.1080/16823200903234901

  • Cope, C. (2004). Ensuring validity and reliability in phenomenographic research using the analytical framework of a structure of awareness. Qualitative Research Journal, 4(2), 5–18.

  • De Lange, M., & De Waal, M. (2017). Owning the city: New media and citizen engagement in urban design. In K. Etingoff (Ed.), Urban design: Community-based planning (pp. 89–110). Apple Academic Press.

  • Debord, G. (1958). Théorie de la dérive. Internationale Situationniste 2 (Dec. 1958). In Knabb, K. (Ed.), Situationist International Anthology (translation), Bureau of Public Secrets (2006) (pp. 19–23). http://www.bopsecrets.org/SI/2.derive.htm

  • Dron, J. (2018). Smart learning environments, and not so smart learning environments: A systems view. Smart Learning Environments, 5, 25. https://doi.org/10.1186/s40561-018-0075-9

  • Eraut, M. (2000). Non-formal learning, implicit learning and tacit knowledge. In F. Coffield (Ed.), The necessity of informal learning (pp. 12–31). Policy Press.

  • Fang, J. (2013). Colorful robots teach children computer programming: How do you make coding something that kids want to do? Meet Bo and Yana: Covert teaching machines. ZDNet. https://www.zdnet.com/article/colorful-robots-teach-children-computer-programming/

  • Garrett, J. J. (2010). The elements of user Experience: User-centered design for the web and beyond (2nd ed.). New Riders Press.

  • Gagné, R. M. (1970). The conditions of learning (2nd ed.). Holt.

  • Gee, J. P. (2007). Good video games + good learning. Collected essays on video games, learning and literacy. Peter Lang Publishing.

  • Gibbons, S. (2016). Design thinking 101. Nielsen Norman Group.

  • Green, B. (2019). The smart enough city, putting technology in its place to reclaim our urban future. Strong ideas. MIT Press.

  • Greeno, J. G., & Engeström, Y. (2014). Learning in activity. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 128–147). Cambridge University Press.

  • Hense, J., & Mandl, H. (2014). Learning in or with games? In D. Sampson, D. Ifenthaler, J. Spector, & P. Isaias (Eds.), Digital systems for open access to formal and informal learning. Springer. https://doi.org/10.1007/978-3-319-02264-2_12

  • Hou je Bek, W. (2002). Algorithmic psychogeography. Spacejackers.

  • Hounsell, D. (1984). Essay planning and essay writing. Higher Education Research & Development, 3(1), 13–31. https://doi.org/10.1080/0729436840030102

  • Hounsell, D. (2005). Contrasting conceptions of essay-writing. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (3rd ed., pp. 106–125). University of Edinburgh, Institute for Academic Development.

  • Implicit. (2020). In Lexico.com. Oxford University Press. Retrieved from https://www.lexico.com/definition/implicit

  • Jordan, S. (2015). Writing the smart city: "Relational space" and the concept of "belonging". Writing in Practice: The Journal of Creative Writing Research, 1. http://eprints.nottingham.ac.uk/32234/1/WritinginPractice_Version2.pdf

  • Kaapu, T., & Tiainen, T. (2010). User experience: Consumer understandings of virtual product prototypes. In K. Kautz & P. A. Nielsen (Eds.), Scandinavian information systems research. First Scandinavian Conference on Information Systems, SCIS 2010, Proceedings. Lecture notes in business information processing (pp. 18–33). Springer. https://doi.org/10.1007/978-3-642-14874-3_2

  • Kaufman, S. B., DeYoung, C. G., Gray, J. R., Jiménez, L., Brown, J., & Mackintosh, N. (2010). Implicit learning as an ability. Cognition, 116(3), 321–340. https://doi.org/10.1016/j.cognition.2010.05.011

  • Kazil, P., & Hou je Bek, W. (2010). A walk in the invisible city: World in a shell urban adventure. V2_. https://v2.nl/events/world-in-a-shell-urban-adventure

  • Koehler, A. (2013). Digitizing craft: Creative writing studies and new media: A proposal. College English, 75(4), 379–397.

  • Krathwohl, D. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

  • Kivunja, C. (2014). Do you want your students to be job-ready with 21st century skills? Change pedagogies: A pedagogical paradigm shift from Vygotskyian social constructivism to critical thinking, problem solving and Siemens’ digital connectivism. International Journal of Higher Education. https://doi.org/10.5430/ijhe.v3n3p81

  • Lorenzo, N., & Gallon, R. (2019). Smart pedagogy for smart learning. In L. Daniela (Ed.), Didactics of smart pedagogy: Smart pedagogy for technology enhanced learning. Springer. https://doi.org/10.1007/978-3-030-01551-0_3

  • Lin, T. C. Y. W., Galloway, D., & Lee, W. O. (2011). The effectiveness of action learning in the teaching of citizenship education: A Hong Kong case study. In K. J. Kennedy, W. O. Lee, & D. L. Grossman (Eds.), Citizenship pedagogies in Asia and the Pacific, CERC studies in comparative education (pp. 53–80). Springer. https://doi.org/10.1007/978-94-007-0744-3_4

  • Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Sage Publications.

  • Lister, P. (2019). Learner experience complexity as data variables for smart learning. [Presentation]. AI in Education series, IT Learning Centre, University of Oxford, UK.

  • Lister, P. (2020). Smart learning in the community: Supporting citizen digital skills and literacies. In N. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions. HCII 2020. Lecture notes in computer science (pp. 533–547). Springer. https://doi.org/10.1007/978-3-030-50344-4_38

  • Lister, P. (2021a). The pedagogy of experience complexity for smart learning: Considerations for designing urban digital citizen learning activities. Smart Learning Environments. https://doi.org/10.1186/s40561-021-00154-x

  • Lister, P. (2021b). Applying the PECSL: Using case studies to demonstrate the Pedagogy of Experience Complexity for Smart Learning. Smart Learning Environments, 8, 13. https://doi.org/10.1186/s40561-021-00158-7

  • Lister, P. (2021c). What are we supposed to be learning? Motivation and autonomy in smart learning environments. In N. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions. HCII 2021. Lecture notes in computer science (Vol. 12782, pp. 235–249). Springer. https://doi.org/10.1007/978-3-030-77015-0_17

  • Lister, P. (2021d). Understanding experience complexity in a smart learning journey. Springer Nature Social Sciences. https://doi.org/10.1007/s43545-020-00055-9

  • Lister, P. (2022a). Future-present learning and teaching: a case study in smart learning. In E. Sengupta & P. Blessinger (Eds.), Changing the conventional classroom, Innovations in Higher Education Teaching and Learning (IHETL). Emerald Publishing.

  • Lister, P. (2022b). Ways of experiencing technology in a smart learning environment. In N. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions. HCII 2022. Lecture notes in computer science. Springer.

  • Liu, D., Huang, R., & Wosinski, M. (2017a). Characteristics and framework of smart learning. In Smart learning in smart cities. Lecture notes in educational technology (pp. 31–48). Springer.

  • Liu D., Huang, R., & Wosinski, M. (2017b). Future trends in smart learning: Chinese perspective. In Smart learning in smart cities. Lecture notes in educational technology (pp. 185–215). Springer. https://doi.org/10.1007/978-981-10-4343-7_8

  • Marton, F., Dall'Alba, G., & Beaty, E. (1993). Conceptions of learning. International Journal of Educational Research, 19(3), 277–300.

  • Marton, F., & Pong, W. P. (2005). On the unit of description in phenomenography. Higher Education Research & Development, 24(4), 335–348. https://doi.org/10.1080/07294360500284706

  • Marton, F., Runesson, U., & Tsui, A. B. M. (2004). The space of learning. In F. Marton & A. B. M. Tsui (Eds.), Classroom discourse and the space of learning (pp. 3–40). Lawrence Erlbaum.

  • Marton, F., & Säljö, R. (1976). On qualitative differences in learning: 1. Outcome and process. British Journal of Educational Psychology, 46(1), 4–11. https://doi.org/10.1111/j.2044-8279.1976.tb02980.x

  • Marton, F., & Säljö, R. (2005). Approaches to learning. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (3 (Internet), pp. 39–58). University of Edinburgh, Centre for Teaching, Learning and Assessment.

  • Marton, F., & Svensson, L. (1979). Conceptions of research in student learning. Higher Education, 8, 471–486. https://doi.org/10.1007/BF01680537

  • Moreira, F. T., Vairinhos, M., & Ramos, F., et al. (2021). Conceptualization of hypersituation as result of IoT in education. In Ó. Mealha (Ed.), Ludic, co-design and tools supporting smart learning ecosystems and smart education, Proceedings of the 5th international conference on smart learning ecosystems and regional development. Smart innovation, systems and technologies. (Vol. 197). Springer.

  • Newton, G., & Martin, E. (2013). Blooming, SOLO taxonomy, and phenomenography as assessment strategies in undergraduate science education. Journal of College Science Teaching, 43(2), 78–90.

  • Nikolov, R., Shoikova, E., Krumova, M., Kovatcheva, E., Dimitrov, V., & Shikalanov, A. (2016). Learning in a smart city environment. Journal of Communication and Computer, 13, 338–350. https://doi.org/10.17265/1548-7709/2016.07.003

  • Orgill, M. (2012). Variation theory. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Springer. https://doi.org/10.1007/978-1-4419-1428-6_272

  • O’Riordan, T., Millard, D. E., & Schulz, J. (2016). How should we measure online learning activity? Research in Learning Technology. https://doi.org/10.3402/rlt.v24.30088

  • Paterson, R. (2019). The power of EMPs: Educational multimedia projects. In L. Daniela (Ed.), Didactics of smart pedagogy: Smart pedagogy for technology enhanced learning (pp. 393–414). Springer. https://doi.org/10.1007/978-3-030-01551-0_20

  • Pask, G., & Scott, B. C. E. (1972). Learning strategies and individual competence. International Journal of Man-Machine Studies, 4, 217–253.

  • Pérez-Mateo, M., Maina, M., Guitert, M., & Romero, M. (2011). Learner generated content: Quality criteria in online collaborative learning. The European Journal of Open, Distance and E-Learning, 14. https://www.eurodl.org/?p=special&sp=articles&article=459

  • Pinder, D. (2005). Arts of urban exploration. Cultural Geographies, 12(4), 383–411.

  • Prensky, M. (2003). Digital game-based learning. ACM Computers in Entertainment, 1(1), 21.

  • Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology, 118(3), 219–235.

  • Reed, B. (2006). Phenomenography as a way to research the understanding by students of technical concepts. Núcleo de Pesquisa em Tecnologia da Arquitetura e Urbanismo (NUTAU): Technological Innovation and Sustainability, São Paulo, Brazil (pp. 1–11).

  • Remnant, F., & Avard, R. (2016). Qualitative Impact Assessment Protocol (QUIP). BetterEvaluation. Retrieved from http://betterevaluation.org/en/plan/approach/QUIP

  • Rezgui, K., Mhiri, H., & Ghédira, K. (2014). An ontology-based profile for learner representation in learning networks. International Journal of Emerging Technologies in Learning (iJET). https://doi.org/10.3991/ijet.v9i3.3305

  • Richardson, J. (1999). The concepts and methods of phenomenographic research. Review of Educational Research, 69(1), 53–82. https://doi.org/10.3102/00346543069001053

  • Roisko, H. (2007). Adult learners’ learning in a university setting: A phenomenographic study. [Doctoral Dissertation, University of Tampere]. Tampere University Press. http://urn.fi/urn:isbn:978-951-44-6928-2

  • Sacré, H., de Droogh, L., De Wilde, A., & De Visscher, S. (2017). Storytelling in urban spaces: Exploring storytelling as a social work intervention in processes of urbanisation. In H. Sacré & S. De Visscher (Eds.), Learning the city, cultural approaches to civic learning in urban spaces (pp. 35–49). Springer. https://doi.org/10.1007/978-3-319-46230-1_3

  • Sacré, H., & De Visscher, S. (2017). A cultural perspective on the city. In H. Sacré & S. De Visscher (Eds.), Learning the city, cultural approaches to civic learning in urban spaces (pp. 1–17). Springer. https://doi.org/10.1007/978-3-319-46230-1_1

  • Saffer, D. (2010). Designing for interaction, second edition: Creating innovative applications and devices. New Riders.

  • Sandberg, J. (2005). How do we justify knowledge produced within interpretive approaches? Organizational Research Methods, 8(1), 41–68. https://doi.org/10.1177/1094428104272000

  • Säljö, R. (1979a). Learning in the learner’s perspective: Some commonplace misconceptions. Reports from the Institute of Education, University of Gothenburg.

  • Säljö, R. (1979b). Learning about learning. Higher Education, 8, 443–451. https://doi.org/10.1007/BF01680533

  • Schmeck, R. R. (Ed.). (1988). Learning strategies and learning styles. Perspectives on individual differences. Springer. https://doi.org/10.1007/978-1-4899-2118-5

  • Seger, C. A. (1994). Implicit learning. Psychological Bulletin, 115(2), 163–196.

  • Selwyn, N. (2011). Education and technology: Key issues and debates. Continuum.

  • Siemens, G. (2006). Knowing knowledge. Internet Archive. Available from https://archive.org/details/KnowingKnowledge/

  • Sjöström, B., & Dahlgren, L. O. (2002). Applying phenomenography in nursing research. Journal of Advanced Nursing, 40(3), 339–345. https://doi.org/10.1046/j.1365-2648.2002.02375.x

  • Slater, M. (2017). Implicit learning through embodiment in immersive virtual reality. In D. Liu, C. Dede, R. Huang, & J. Richards (Eds.), Virtual, augmented, and mixed realities in education. Smart computing and intelligence (pp. 19–33). Springer.

  • Souleles, N., Savva, S., Watters, H., Annesley, A., & Bull, B. (2014). A phenomenographic investigation on the use of iPads among undergraduate art and design students. British Journal of Educational Technology, 46(1), 131–141. https://doi.org/10.1111/bjet.12132

  • Svensson, L. (1997). Theoretical foundations of phenomenography. Higher Education Research and Development, 16(2), 159–172.

  • Taylor, C., & Cope, C. J. (2007). Are there educationally critical aspects in the concept of evolution? [Paper presentation]. UniServe Science: Science Teaching and Learning Symposium, University of Sydney, held September 28–29, 2007. https://openjournals.library.sydney.edu.au/index.php/IISME/article/view/6352/6991

  • Uskov, V. L., Bakken, J. P., Pandey, A., Singh, U., Yalamanchili, M., & Penumatsa, A. (2016). Smart university taxonomy: Features, components, systems. In V. Uskov, R. Howlett, & L. Jain (Eds.), Smart education and e-learning 2016. Smart innovation, systems and technologies (Vol. 59). Springer. https://doi.org/10.1007/978-3-319-39690-3_1

  • Vinod Kumar, T. M. (2020). Smart environment for smart cities. In T. M. Vinod Kumar (Ed.), Smart environment for smart cities, advances in 21st century human settlements (pp. 1–53). Springer. https://doi.org/10.1007/978-981-13-6822-6_1

  • Webb, G. (1997). Deconstructing deep and surface: Towards a critique of phenomenography. Higher Education, 33, 195–212.

  • Wegerif, R. (2022). Beyond democracy: Education as design for dialogue. In Liberal democratic education: A paradigm in crisis (pp. 157–179). Brill mentis (pre-print).

  • Wilson-Grau, R. (2015). Outcome harvesting. BetterEvaluation. Retrieved from http://betterevaluation.org/plan/approach/outcome_harvesting

Acknowledgements

I acknowledge the University of Malta Faculty of Education and my principal supervisor, Dr P. Bonanno, for their generous support of my PhD.

Funding

The author received no additional external funding.

Author information

Contributions

This is a single-author work. The author read and approved the final manuscript.

Authors' information

Pen Lister is currently an RSO for the SMARTEL Erasmus+ Project and an occasional lecturer in digital pedagogies, based at the Faculty of Education, University of Malta. She holds an MA in Learning and Teaching in Higher Education and an MSc in Multimedia Systems, and is a Fellow of the Higher Education Academy and a Member of the British Computer Society. Her recent doctorate investigated learner experience in smart learning journeys using the methodology of phenomenography (title: “Experiencing the Smart Learning Journey: A Pedagogical Inquiry”). A former lecturer and senior lecturer at London Metropolitan University, she has recently given lectures and talks at the Royal College of Art and the University of Oxford, and regularly presents at the International Human–Computer Interaction Conference.

Corresponding author

Correspondence to Pen Lister.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Lister, P. Measuring learning that is hard to measure: using the PECSL model to evaluate implicit smart learning. Smart Learn. Environ. 9, 25 (2022). https://doi.org/10.1186/s40561-022-00206-w

Keywords