
The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review

Abstract

The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings underscores their importance as learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations of how these ethical issues contribute to students’ over-reliance on AI dialogue systems, and how such over-reliance affects students’ cognitive abilities. Over-reliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance in decision-making contexts. It typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates how students’ over-reliance on AI dialogue systems, particularly those embedded with generative models for academic research and learning, affects their critical cognitive capabilities, including decision-making, critical thinking, and analytical reasoning. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated the literature addressing the contributing factors and effects of such over-reliance within educational and research contexts. The review spanned 14 articles retrieved from four databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from the ethical issues of AI impairs cognitive abilities, as individuals increasingly favor fast, seemingly optimal solutions over slower ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amidst the ethical issues presented by AI technologies.

Introduction

Artificial intelligence (AI) dialogue systems, often known as conversational agents, are complex software mechanisms that emulate human dialogue, leveraging the prowess of AI, natural language processing, and machine learning technologies (Zhai & Wibowo, 2023a). Integrating generative dialogue systems in research and education has drawn considerable interest in recent years, because these technologies promise to revolutionize research and education by streamlining repetitive tasks, aiding in data interpretation, and pioneering new learning and assessment methods (George & Wooden, 2023; Song & Xiong, 2021; Zhai & Wibowo, 2023). However, there are concerns about the potential negative impact of their widespread use on cognitive abilities, particularly in academic writing (Liu et al., 2023). Research and education fundamentally rely on evidence; decision-making, critical thinking, and analytical thinking are crucial for thoroughly analyzing and evaluating the quality of information found in the existing literature (Hanim et al., 2020). The importance of cultivating these cognitive abilities cannot be overstated, especially for students who are tasked with synthesizing, evaluating, and forming arguments (Kaeppel, 2021).

Several studies have explored ethical concerns associated with AI dialogue systems, including but not limited to AI hallucinations (Gao et al., 2022), algorithmic biases (Mbalaka, 2023), plagiarism (De Angelis et al., 2023), privacy concerns (Alrazaq et al., 2023), and transparency concerns (Carvalho et al., 2019). AI hallucinations in AI dialogue systems are characterized by the generation of inaccurate or misleading information (Hatem et al., 2023). Research indicates that these ethical concerns could contribute to an over-reliance on AI dialogue systems (George & Wooden, 2023; Song & Xiong, 2021; Zhai & Wibowo, 2023), potentially impairing critical cognitive skills such as critical thinking (Dergaa et al., 2023), decision-making (Duhaylungsod & Chavez, 2023), and analytical thinking (Grassini, 2023).

A few studies have examined issues concerning over-reliance on AI dialogue systems. Gao et al. (2022) found a concerning trend in which users exhibit an over-reliance on AI dialogue systems, often accepting their generated outputs, including AI hallucinations, without validation. This overdependence is exacerbated by cognitive biases, where judgments deviate from rationality, and by heuristics, the use of mental shortcuts, leading to uncritical acceptance of AI-generated information. Grassini (2023) identified that algorithmic biases are frequently a result of AI systems being trained on datasets with inherent prejudices, causing users to mistakenly regard these biased outputs as objective. This misplaced trust can skew analysis and interpretation, further entrenching the issue. Xie et al. (2021) found that over-reliance on unverified AI outputs can cause misclassification and misinterpretation. The generation of such unvalidated content by AI systems poses a significant risk, potentially culminating in research misconduct, including plagiarism, fabrication, and falsification. Dempere et al. (2023) highlighted the risks associated with embedding AI dialogue systems in higher education, such as privacy violations and illegal data use. These authors caution against the normalization of intrusive data practices that might emerge from an over-reliance on AI, where the collection and analysis of student data do not fully honor privacy rights. Meanwhile, Dergaa et al. (2023) argued for the importance of data transparency in AI systems within research and education.

Other studies revealed that regular utilization of dialogue systems is linked to a decline in cognitive abilities, a diminished capacity for information retention, and an increased reliance on these systems for information (Dergaa et al., 2023; Marzuki et al., 2023). This over-reliance often occurs without verifying the validity and authenticity of the provided data, especially when such information lacks proper references (Krullaars et al., 2023). Krullaars et al. (2023) argue that the over-reliance on AI dialogue systems might diminish students’ drive and commitment to learning, as they might lean too heavily on these systems for answers instead of actively participating in the learning experience. The adoption and over-reliance on AI dialogue systems have overshadowed critical ethical concerns whereby issues such as the generation of inaccurate or misleading content, algorithmic biases, plagiarism, privacy breaches, and transparency concerns have not been adequately addressed (Hua et al., 2023).

To address this research gap, we conducted a systematic literature review to investigate the contributing factors and effects of over-reliance on AI dialogue systems in research and education. The study specifically examines the contributing factors of over-reliance, such as AI hallucination, algorithmic bias, plagiarism, privacy concerns, and transparency issues, and how over-reliance impacts cognitive abilities, including decision-making, critical thinking, and analytical thinking. Additionally, this review outlines potential strategies and technological interventions to alleviate these challenges, aiming to foster responsible use of AI dialogue systems. Several research questions are formulated to address the concerns stated above:

  1. How does over-reliance on AI dialogue systems affect critical and analytical thinking abilities in different educational subjects and levels?

  2. What are the primary ethical concerns causing over-reliance on AI dialogue systems in research and education?

Literature review

The cognitive abilities of critical thinking, decision-making, and analytical thinking are important elements in research, particularly in higher education (Soufi & See, 2019). Critical thinking involves constructing well-reasoned arguments supported by evidence (McKinley, 2013). Dwyer (2023) defines critical thinking abilities as a blend of cognitive abilities and critical thinking dispositions, emphasizing skills such as truth-seeking, systematic evaluation, inference, and self-regulation in problem-solving. Critical thinking dispositions refer to the attitudes and qualities that facilitate engagement in critical thinking activities (Facione & Facione, 1996). They include the desire to be informed, the ability to consider multiple perspectives, the identification of relationships, reflective thinking, evidence-seeking, skepticism, respect for others’ views, and tolerance. Liang (2023) highlights the importance of critical thinking in contemporary education, underscoring its role in fostering various competencies, including drawing conclusions, understanding contributing factors and effects, assessing source credibility, and distinguishing facts from opinions.

Decision-making abilities are critical for processing and reasoning through complex information across diverse domains, including research and education, in turn nurturing proficient decision-making capabilities (Duhaylungsod & Chavez, 2023). In exploring decision-making theories, a distinction can be drawn between descriptive and normative theories (Bell et al., 1988). Descriptive theories focus on understanding actual decision-making behaviors, including both rational and irrational elements, through empirical research. Normative theories, on the other hand, advocate for decisions that maximize expected utility, grounded in mathematical models and ideal behavioral principles (Damnjanović & Janković, 2014).
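
As a textbook illustration of the normative criterion (a standard formulation, not one drawn from the cited works), expected utility weighs each outcome's utility by its probability, EU(a) = Σᵢ p(sᵢ) · u(a, sᵢ), and prescribes the action that maximizes this sum. A minimal Python sketch with made-up probabilities and utilities:

    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(p * u for p, u in outcomes)

    # Hypothetical choice: a risky option versus a sure thing.
    action_a = [(0.6, 10), (0.4, 0)]   # EU = 0.6*10 + 0.4*0 = 6.0
    action_b = [(1.0, 5)]              # EU = 5.0
    actions = {"A": action_a, "B": action_b}
    best = max(actions, key=lambda name: expected_utility(actions[name]))
    print(best)  # "A": normative theory prescribes the higher-EU action

Descriptive theories, by contrast, document how actual choices deviate from this prescription.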

Analytical thinking embodies the thorough exploration and critical analysis of data, which are vital for problem-solving and informed decision-making (Pokkakillath & Suleri, 2023). These elements are crucial for enhancing learning experiences, as they pertain to reasoning, planning, inquiry, interpretation of findings, and the subsequent derivation of conclusions in research and education (Ismail, 2023).

Incorporating AI dialogue systems in research and educational settings, particularly those utilizing generative modules like variational autoencoders (VAEs), holds substantial potential for boosting creativity and elevating the quality of work (Aydin & Karaarslan, 2023). VAEs provide considerable support to writers, particularly in surmounting challenges like writer’s block or navigating complex parts of their manuscripts. This is achieved through automated text generation, which not only aids in content creation but also inspires innovative thinking and problem-solving approaches (Eapen, 2023). AI dialogue systems, bolstered by interdisciplinary insights from psychology and neuroscience, are set to revolutionize the way students approach writing, decision-making, critical thinking, and analytical thinking in research and education (Carvalho et al., 2019; Gao et al., 2022).

These advancements promise to enhance educational experiences by providing more interactive and personalized learning environments (Carvalho et al., 2019). However, as AI systems grow more sophisticated and their role in automated analysis expands, there is a risk that students may become overly reliant on these technologies (Krullaars et al., 2023). This over-reliance could lead to a range of issues, including diminished critical thinking (Iskender, 2023), analytical thinking (Ferrajão, 2020), and decision-making abilities (Pokkakillath & Suleri, 2023); susceptibility to AI-generated errors or AI hallucinations (Hatem et al., 2023); increased instances of plagiarism (De Angelis et al., 2023); and challenges related to lack of transparency (Carvalho et al., 2019) and algorithmic biases (Mbalaka, 2023). Moreover, habitual dependence on AI for decision-making may reduce individuals’ motivation to engage in independent thinking and analysis, potentially leading to a weakening of essential cognitive abilities (Grinschgl & Neubauer, 2022) and automation bias (Gsenger & Strle, 2021).

Given that these AI systems can handle vast data and yield accurate forecasts, there is a looming danger of humans becoming excessively reliant on AI when making choices. This over-reliance might stifle creativity and innovative thinking in both educators and learners, possibly degrading educational quality (Ahmad et al., 2023). Krullaars et al. (2023) posit that an over-reliance on dialogue systems hinders students from developing their critical thinking and problem-solving abilities.

As AI dialogue systems offer pre-formulated answers, this practice can curtail students’ freedom to convey their unique thoughts and viewpoints (Krullaars et al., 2023). Similarly, Ahmad et al. (2023) argue that one consequence of over-reliance on dialogue systems is a potential decline in users’ cognitive abilities. Furthermore, Gao et al. (2022) claim that students often rely too heavily on the source of information, leading to challenges in determining whether the content produced by an AI dialogue system is grounded in a credible source. This scenario of AI hallucination involves the AI creating plausible but untrue statements or assertions, which can mislead users and obscure the line between fact and fiction. This phenomenon raises important questions about users’ ability to critically evaluate and discern the accuracy of AI-generated content. Moreover, Hatem et al. (2023) discuss the issue of AI systems referencing non-existent sources, a form of confabulation that presents false information within a seemingly credible framework. This misrepresentation can deceive users and undermine the trustworthiness of the information provided by AI systems. Athaluri et al. (2023) argue that AI confabulations can have adverse effects on decision-making and may lead to ethical and legal dilemmas. The authors found that among 178 reference sources produced by AI dialogue systems, 69 lacked a Digital Object Identifier (DOI), and 28 could neither be found through a Google search nor possessed an existing DOI.
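
One practical safeguard against such confabulated references is to verify that each cited DOI actually resolves before trusting it. The following is a minimal sketch of that check, assuming the Python `requests` library and the public doi.org resolver; the reference record and its fields are hypothetical:

    import requests

    def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
        """True if the DOI resolves at doi.org, a rough proxy for the source existing."""
        try:
            # doi.org redirects a valid DOI to the publisher's page; an unknown
            # DOI returns 404. Some publishers reject HEAD requests, so any
            # non-404 response is treated as "resolves" for screening purposes.
            resp = requests.head(f"https://doi.org/{doi}",
                                 allow_redirects=True, timeout=timeout)
            return resp.status_code != 404
        except requests.RequestException:
            return False  # network failure: leave unverified, flag for manual review

    # Hypothetical usage: flag AI-suggested references lacking a verifiable DOI.
    suggested = [{"title": "An AI-suggested source", "doi": ""}]
    flagged = [r for r in suggested if not r["doi"] or not doi_resolves(r["doi"])]
    print(f"{len(flagged)} of {len(suggested)} references need manual verification")

A check of this kind would surface cases like the 69 DOI-less and 28 unlocatable references that Athaluri et al. (2023) counted by hand.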

The utilization of AI tools to compose a paper in any academic or professional context constitutes plagiarism (Gao et al., 2022). Studies show that the increasing prevalence of journals lacking rigorous quality controls has led to concerns about the potential surge of AI-generated articles in the scientific community, with notable plagiarism detection tools failing to identify infringements effectively (Dehouche, 2021; Francke & Bennett, 2019). De Angelis et al. (2023) found that the rise of journals that neglect essential quality controls, like verifying for plagiarism or ensuring ethical standards, might result in a significant influx of AI-generated articles within the scientific realm. Such a trend could gravely undermine the credibility of scientific studies and tarnish the prestige of scholarly publications. Khalil and Er (2023) have reported that popular plagiarism detection tools, such as Turnitin and iThenticate, show a significant limitation in identifying plagiarism in essays that are based on existing literature. Concerningly, their studies reveal that these tools could only detect plagiarism in less than 15% of cases. This low detection rate raises serious concerns about the potential misuse of these platforms by students for academic purposes. It also highlights a critical issue regarding the lack of transparency in the algorithms that drive these plagiarism detection procedures (Ventayen, 2023).

A lack of transparency in algorithm-driven procedures, particularly in socio-legal contexts, is often referred to as "black-boxing" (Carvalho et al., 2019). According to Carvalho et al. (2019), such transparency is pivotal for enhancing understanding of AI dialogue systems, laying a solid foundation for weighing the challenges and advantages of algorithmic decisions, and ensuring that decision-making processes are both accountable and fair. Additionally, it is crucial to scrutinize how information is accessed online, especially on digital platforms. Beck (2019) explored the intricate dimensions of media ethics, identifying similarities to online communication ethics. The core of this discourse revolves around practices valuing truth. This includes shedding light on the decision-making behind content selection, validating the genuineness of content, understanding authorship, and pinpointing deliberate misinformation, including algorithmic biases.

Algorithmic biases refer to the unintended and systematic discrimination present in computer algorithms (Alrazaq et al., 2023). Often, these biases stem from the historical data sources upon which the software relies, potentially reflecting or amplifying past prejudices. Remarkably, even when explicit sensitive attributes are omitted from the input, a proficient machine-learning algorithm might still act upon these attributes due to underlying correlations in the data (Kordzadeh & Ghasemaghaei, 2022). Mbalaka (2023) found that DALL-E 2 struggled notably in generating detailed images of "An African Family" compared to more generic "Family" images. In contrast, StarryAI outperformed DALL-E 2 by producing clearer facial features, though it still lacked accuracy in depicting cultural nuances. Feine et al. (2020) argue that AI dialogue system designs frequently incorporate gender-specific indicators. Many of these systems are characterized, overtly or subtly, by a particular gender: notably, many bear female names, present female-centric avatars, and are often referred to as female. The study found a prevailing preference for female representations over male ones, highlighting an inherent gender bias in AI dialogue system design practices.

With issues such as AI hallucinations, plagiarism, lack of transparency, and algorithmic biases, there arises a critical concern that over-reliance on AI dialogue systems’ decision-making capabilities might potentially impede the cultivation of critical thinking skills (Carobene et al., 2023; Hosseini et al., 2023). Students can inadvertently become overly dependent on AI-generated assistance, potentially detracting from their ability to make independent, well-informed decisions (Buçinca et al., 2021). Yet, in the academic realm, particularly among junior faculty members, there exists the perpetual challenge of balancing research, publishing commitments, and teaching responsibilities (Holmes et al., 2023). Institutions often require a specific quota of research articles to be published annually for career advancement (Sharma, 2020). Additionally, the ever-dreaded ‘writer’s block’ poses a formidable obstacle, affecting both novice and experienced writers, including students and educators (Köbis & Mossink, 2021). AI-generated text emerges as a valuable resource to surmount these hurdles, serving as an effective tool to overcome writer’s block and streamline the publishing process (Washington, 2023).

There is still a noticeable gap in the current literature exploring the effects of over-reliance on dialogue systems on abilities such as decision-making, critical thinking, and analytical thinking in education and research (Ahmad et al., 2023). To fill this knowledge gap, a systematic review is conducted with a specific emphasis on research examining the implications of over-reliance on AI dialogue systems. This study aims to investigate the over-reliance on AI dialogue systems in educational and research contexts, with a particular focus on their impact on decision-making, critical thinking, and analytical thinking facilitated through the use of dialogue systems.

Rationale for the study

With the rapid advancement of AI dialogue systems, the landscape of research and education has been significantly transformed. These systems, particularly those equipped with generative modules, have been successfully used to expedite data analysis and streamline the research process, thereby enhancing the quality and efficiency of academic endeavors. The acclaim surrounding the benefits of AI dialogue systems, especially among practitioners and students, is undeniable, highlighting their pivotal role in facilitating access to information and simplifying complex research tasks.

The necessity to explore the impact of over-reliance on AI systems on students' cognitive capabilities and to identify the challenges associated with this dependency is underscored by the transformative effects these technologies have had on the research and educational landscapes. While AI dialogue systems, particularly those with generative capabilities, have revolutionized the way information is accessed and complex research tasks are simplified, they also introduce a range of ethical concerns that have yet to be fully examined. This research primarily draws data from the context of higher education, thereby not considering the cognitive developmental differences across various age groups, such as teenagers and primary or secondary school students. These younger cohorts are typically not included in independent research studies, yet their cognitive abilities and learning processes could be significantly impacted by AI dialogue systems in ways distinct from those observed in older students. Therefore, a more inclusive approach considering different educational levels and age groups would provide a more robust and comprehensive analysis of the effects of AI dialogue systems.

The adoption and over-reliance on AI dialogue systems have overshadowed critical ethical concerns. Issues such as the generation of inaccurate or misleading content, algorithmic biases, plagiarism, privacy breaches, and transparency concerns have not been adequately addressed (Hua et al., 2023). The tendency among users, including students and researchers, to overlook or minimize these ethical challenges is concerning. There exists a substantial gap in the academic discourse regarding the long-term implications of such over-reliance on AI systems for essential cognitive skills like decision-making, critical thinking, and analytical thinking.

The existing literature, while acknowledging these ethical concerns, lacks a comprehensive analysis of their impacts or strategies for mitigating the attendant risks. This oversight is alarming, given the potential for AI dialogue systems to inadvertently weaken users' cognitive abilities by fostering an environment of dependency and uncritical acceptance of generated content.

Therefore, the rationale for this study stems from the urgent need to delve into the ethical quandaries posed by AI dialogue systems within educational settings. It aims to provide a nuanced understanding of how these systems influence users' cognitive skills and to develop a framework for navigating the ethical pitfalls associated with their use. By addressing this research gap, the study endeavors to ensure that the integration of AI dialogue systems into educational and research practices is conducted in a manner that is both ethically sound and cognitively enriching.

Systematic review method

This section aligns with the systematic review guidelines recommended by Montenegro-Rueda et al. (2023), offering insights into over-reliance on AI systems for essential cognitive abilities like decision-making, critical thinking, and analytical thinking in research and education. Systematic reviews are defined as methodical syntheses of knowledge that address specific exploratory research questions through the careful selection, identification, and integration of existing data. Such reviews are instrumental in charting the expanse of the literature landscape, pinpointing areas lacking in research, formulating research aims, and delivering evidence-based recommendations to policymakers (Tricco et al., 2018).

This study aims to investigate the over-reliance on AI dialogue systems in educational and research contexts, with a particular focus on their impact on decision-making, critical thinking and analytical thinking facilitated through the use of dialogue systems. To achieve this, the study adopts the comprehensive five-step methodology for conducting systematic literature reviews proposed by Macdonald et al. (2023). This methodology facilitates an exhaustive literature search and the critical assessment and synthesis of relevant articles from academic databases. The process involves (a) defining the review's scope, (b) executing a thorough literature search, (c) selecting the final set of articles, (d) analyzing the chosen articles through content analysis, and (e) reporting the findings. Through this structured approach, the study aims to provide a nuanced understanding of how these systems influence users' cognitive skills and to develop a framework for navigating the ethical pitfalls associated with their use.

Determining the scope of a review

This initial stage involves establishing inclusion and exclusion criteria for selecting relevant sources, as well as the criteria for identifying and retrieving relevant literature. For inclusion in the review, journal articles must meet the following selection criteria: (a) publication in English as a full-text article, (b) relevance to AI dialogue systems incorporating a generative module for research and education, (c) emphasis on ethical issues related to AI dialogue systems, and (d) publication date ranging from 2017 to 2023. Conversely, articles are excluded if they (a) lack focus on AI dialogue systems with generative modules for research and education, (b) are written in languages other than English, (c) fall into the categories of editorials or opinion pieces, or (d) are dissertations. These criteria for inclusion and exclusion are outlined in Table 1.

Table 1 Inclusion and exclusion

Conducting a literature search

The second phase involves conducting the search query across selected databases to compile the search findings. Key databases such as ProQuest, IEEE Xplore, ScienceDirect, and Web of Science were utilized for the systematic review. These databases were specifically chosen for their relevance to educational research and artificial intelligence, with the aim of enhancing the thoroughness of the review.

The review process was meticulously structured and executed in sequential steps. Initially, to ensure the incorporation of the most recent and relevant literature, the authors confined the selection of publications to a specific timeframe, from 2017 to 2023. This time frame was selected specifically to concentrate on recent breakthroughs in AI, especially transformer models, introduced in the paper “Attention Is All You Need” (Vaswani et al., 2017), and their integration with generative modules. This technology has undergone substantial advancements since 2017 (Montenegro-Rueda et al., 2023). Furthermore, prior to 2017, AI technology had not achieved the level of sophistication and performance that transformer models have enabled, marking a pivotal shift in artificial intelligence (Zhai & Wibowo, 2023b).

Following this initial step, the process entailed the removal of redundant entries, effectively reducing the initial collection of studies to 70. The next stage broadened the inclusion criteria to cover both peer-reviewed journal articles and conference proceedings. This expansion was balanced by a deliberate exclusion of trade publications, editorials, books, and review articles to prioritize original research contributions. Further refinement of the criteria led to the selection of articles written in English that specifically addressed AI dialogue systems with generative modules, which brought the number of potential papers down to 35.

A detailed examination of the titles and abstracts of these papers ensued, aimed at determining their relevance to the deployment of AI dialogue systems within research and educational frameworks. This careful evaluation resulted in the selection of 14 studies identified as pertinent for the final review phase. The criteria for both inclusion and exclusion, as well as the tally of studies that progressed through each stage of the selection process, are documented in Fig. 1, providing a transparent and comprehensive view of the methodological approach and its resulting dataset.
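
To make the screening funnel concrete, the following is a minimal sketch, not the authors' actual tooling, of the three filtering stages described above; the record fields are hypothetical stand-ins for database exports:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Record:
        title: str
        abstract: str
        year: int
        language: str
        doc_type: str  # e.g. "journal-article", "conference-paper", "editorial"

    def screen(records: list[Record]) -> list[Record]:
        # Stage 1: remove duplicate entries (keyed on title here; DOIs also work).
        unique = list({r.title.lower(): r for r in records}.values())
        # Stage 2: keep English journal articles and proceedings from 2017-2023.
        eligible = [r for r in unique
                    if 2017 <= r.year <= 2023
                    and r.language == "en"
                    and r.doc_type in {"journal-article", "conference-paper"}]
        # Stage 3: title/abstract screening for topical relevance.
        terms = ("dialogue system", "chatbot", "conversational agent")
        return [r for r in eligible
                if any(t in (r.title + " " + r.abstract).lower() for t in terms)]

Applied in sequence, stages of this kind yield the funnel reported above: 70 records after deduplication, 35 after eligibility filtering, and 14 after title and abstract screening.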

Choosing the final samples

The search strategy commenced with four sets of search terms, each tailored to capture various facets of AI dialogue systems equipped with generative modules. These sets encompassed specific keywords such as "AI chatbot with a generative module" and "AI conversational agent with a generative module," “misleading information,” “biases,” “algorithmic biases,” “plagiarism,” “privacy concerns,” “privacy issues,” “transparency concerns,” “decision-making,” “critical thinking,” “critical reasoning,” “analytical thinking,” and “analytical reasoning.” The fourth set of terms was introduced to probe cognitive abilities, emphasizing critical thinking, analytical reasoning, and decision-making skills within research and education. This inclusion expanded the search's scope to encompass studies that explore the enhancement or assessment of cognitive processes using AI dialogue systems.
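
Against databases such as ProQuest or Web of Science, a strategy like this is commonly expressed by OR-ing the keywords within each set and AND-ing the sets together. The sketch below is an illustrative reading of that structure, not a query reproduced verbatim from the review:

    def build_query(*keyword_sets):
        """OR keywords within each set, AND the sets together."""
        groups = ["(" + " OR ".join(f'"{kw}"' for kw in kws) + ")"
                  for kws in keyword_sets]
        return " AND ".join(groups)

    query = build_query(
        ["AI chatbot with a generative module",
         "AI conversational agent with a generative module"],
        ["misleading information", "algorithmic biases", "plagiarism",
         "privacy concerns", "transparency concerns"],
        ["decision-making", "critical thinking",
         "analytical thinking", "analytical reasoning"],
    )
    print(query)  # paste into the database's advanced-search field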

Throughout this search process, 14 publications were identified, each featuring all the specified search terms in its title or abstract. This comprehensive approach ensured the inclusion of relevant literature that spans the technical, educational, and ethical dimensions of AI dialogue systems with generative modules. Figure 1 illustrates the PRISMA flowchart.

Fig. 1: The PRISMA flowchart

Evaluating the samples using content analysis

In alignment with the defined research goals, a selection of 14 articles underwent detailed examination and evaluation through a systematic review process. This approach involved collecting, processing, identifying, and summarizing data to uncover key insights. The methodology was structured around a six-step procedure aimed at identifying recurring themes and dimensions within the literature.

The initial phase was dedicated to conducting a thematic analysis, which facilitated a deeper comprehension of the data. Following this, preliminary codes were established to categorize the findings systematically. The subsequent third and fourth steps involved the identification of sub-dimensions and a thorough review of these sub-dimensions, respectively, to refine the analysis further. The fifth step entailed aggregating all relevant concepts to form a coherent framework of understanding. The final step focused on analyzing the data to ascertain its direct relevance and contribution to the study's objectives.

To guarantee the relevance and integrity of the review, the literature sourced from the databases underwent a dual screening process in accordance with the PRISMA guidelines. PRISMA, which stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses, provides a foundational set of standards for reporting evidence-based systematic reviews and meta-analyses. This protocol includes a 27-item checklist and a four-phase flow diagram, offering a structured approach to ensuring the quality and comprehensiveness of the review (Sarkis-Onofre et al., 2021).

Method of dimensional analysis

In our analysis, we adhered to the three-step method outlined by Kools et al. (1996), a seminal framework for applying dimensional analysis. Initially, we identified and generated dimensions along with their characteristics, breaking these down into subcategories to reveal preliminary concepts through data expansion. In this initial phase, our focus was particularly on the identification of codes that embodied the contributing factors and effects of AI dialogue systems embedded with generative modules in research and education. We iterated the process until a substantial array of dimensions and properties was established.

Following this, we constructed an explanatory matrix, assigning varying degrees of importance to different attributes, a process akin to the constant comparison method used in Grounded Theory. This step involved elevating each dimension to a status that allowed for the identification of a central perspective. The dimension offering the most comprehensive explanation of the interrelations among dimensions was designated as the central or key perspective, serving as the organizational foundation for the data. This hierarchical structuring of dimensions into categories such as salient, relevant, marginal, or irrelevant is a critical aspect of the dimensional analysis as described by Kools et al. (1996). Lastly, leveraging the central perspective as a foundational viewpoint, we explored the patterns and interactions among the various aspects of the phenomenon. By employing an explanatory matrix, this approach facilitated a thorough elucidation of the involved elements, thereby uncovering the intricate dynamics at play. This comprehensive method allowed us to delve deeply into the dimensions of contributing factors (ethical issues of AI) and effects (cognitive abilities) of over-reliance on AI dialogue systems embedded with generative modules.
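
As a purely hypothetical illustration of how such an explanatory matrix can be represented, the salience assignments below are indicative only, not the study's published coding:

    from enum import Enum

    class Salience(Enum):
        SALIENT = 1     # the central perspective organizing the data
        RELEVANT = 2
        MARGINAL = 3
        IRRELEVANT = 4

    # Dimensions from this review, coded into an explanatory matrix.
    matrix = {
        "over-reliance on AI dialogue systems": Salience.SALIENT,
        "AI hallucination": Salience.RELEVANT,
        "algorithmic bias": Salience.RELEVANT,
        "plagiarism": Salience.RELEVANT,
        "privacy concerns": Salience.RELEVANT,
        "transparency concerns": Salience.RELEVANT,
        "decision-making": Salience.RELEVANT,
        "critical thinking": Salience.RELEVANT,
        "analytical thinking": Salience.RELEVANT,
    }
    central = [d for d, s in matrix.items() if s is Salience.SALIENT]
    print(central)  # the key perspective around which the other dimensions organize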

Result

This section summarizes the findings from the 14 articles investigated and addresses the ethical issues associated with the use of AI dialogue systems. Studies have explored AI dialogue systems with integrated generative modules, which demonstrate the capability to generate new data akin to existing datasets for various applications (Carvalho et al., 2019; Gao et al., 2022). For instance, they are instrumental in detecting defects, extracting pertinent features from sensor data for predictive maintenance, evaluating uncertainty in construction projects for risk assessment, and innovating in the design of buildings and infrastructure (Eapen, 2023). The incorporation of AI in literary studies has opened new pathways for understanding and interpreting literary narratives (Ahmad et al., 2023; Krullaars et al., 2023). However, this advancement is not without its challenges. Ethical concerns include generating false information (hallucination) (Gao et al., 2022), algorithmic bias (Alrazaq et al., 2023), plagiarism (De Angelis et al., 2023), privacy concerns (Dempere et al., 2023), and transparency concerns (Dergaa et al., 2023). Cognitive abilities, including decision-making, critical thinking (Soufi & See, 2019), and analytical thinking (Pokkakillath & Suleri, 2023), have been identified as areas of concern in current AI applications. These capabilities are pivotal in understanding the complex interactions between AI technologies and human cognitive processes. Table 2 summarizes the findings of AI dialogue systems in research and education relating to cognitive abilities and ethical issues. The next section answers the first research question: How does over-reliance on AI dialogue systems affect critical and analytical thinking abilities in different educational subjects and levels?

Table 2 Findings of AI dialogue systems in research and education relating to cognitive abilities and ethical issues

Question 1: How does over-reliance on AI dialogue systems affect critical and analytical thinking abilities in different educational subjects and levels?

Findings show that integrating AI dialogue systems in different educational subjects, such as various academic writing from college and higher education, has a dual impact on students' cognitive abilities. While these technologies can enhance writing proficiency, boost self-confidence, and streamline research tasks, they also introduce risks such as diminished creativity, over-reliance, and ethical concerns like plagiarism and data bias. Studies highlight that although AI tools can aid decision-making and improve efficiency, they often lead to reduced critical and analytical thinking skills, especially when students become overly dependent on AI-generated content.

Cognitive abilities

Research exploring decision-making, critical, and analytical thinking abilities has uncovered a complex impact of AI tools on academic performance. While these technologies can enhance writing proficiency and boost self-confidence, they introduce risks to originality, critical thinking, and adherence to ethical standards, including plagiarism concerns. Furthermore, findings indicate that over-reliance on AI dialogue systems may lead to diminished creativity, an increase in dependency, and challenges in understanding (Duhaylungsod & Chavez, 2023; Kim et al., 2023; Semrl et al., 2023). Figure 2 shows these three critical cognitive abilities.

Fig. 2: Three key cognitive abilities

Decision-making abilities

Three studies collectively highlighted the benefits and challenges of using AI dialogue systems in academic settings, emphasizing task efficiency alongside concerns about dependency, comprehension, originality, and data bias. Duhaylungsod and Chavez (2023) investigated 16 college students’ interactions with AI dialogue systems for academic tasks. The results indicated that AI dialogue systems efficiently decreased the time dedicated to research and information retrieval; however, this convenience fostered complacency and an undue dependency on AI dialogue systems.

Moreover, concerns regarding plagiarism, decreased creativity, data bias, security issues, and potential discrimination have also emerged. Kim et al. (2023) investigated the challenges English as a Foreign Language (EFL) learners face when employing AI dialogue systems for text paraphrasing. The study involved 15 individuals who are non-native English speakers. It reveals that the main difficulty arises from the lack of comprehensive explanations accompanying AI-generated paraphrases. This deficiency makes it challenging for learners to grasp the context and verify the accuracy of the reformulated content. Furthermore, the study highlights the issue of data bias: when explanations are overly simplified, it may result in an increased reliance on AI. Consequently, this undermines learners’ ability to analyze and grasp the information independently, impairing their decision-making skills.

Semrl et al. (2023) examined the feasibility of dialogue systems in addressing scientific questions and assisting academic writing. The findings show that AI dialogue systems are a promising tool for assisting in the writing of scientific papers. However, their lack of originality, the tendency for excessive text, and the use of nuanced and vaguer language could suggest that a paper is produced by AI rather than a human author. Additional challenges identified in the study include limited creativity, data bias issues, AI hallucinations (inaccurate or misleading information generated by the AI), and concerns regarding transparency in the AI’s decision-making processes.

Over-reliance on AI dialogue systems can significantly impact decision-making, critical thinking, and analytical thinking abilities by fostering dependency and potentially diminishing individual judgment skills. When individuals rely heavily on AI for problem-solving or decision-making, they may become less inclined to engage in independent, critical information analysis, decreasing their ability to distinguish between AI-generated and human-generated insights.

Critical thinking abilities

Eight studies examined the integration of AI in academic writing, revealing its positive impact on skills and efficiency while also highlighting significant concerns about creativity, critical thinking, and ethical issues such as plagiarism and algorithmic bias. For example, Malik et al. (2023) investigated how students perceive the integration of AI in the process of writing academic essays. The research employed a case study design and enlisted the participation of 245 undergraduate students representing 25 tertiary institutions across Eastern and Central Indonesian provinces. The study’s findings demonstrated that AI had a positive influence on students’ writing skills, self-confidence, and their grasp of academic integrity principles. However, some students raised concerns about potential repercussions for creativity, critical thinking, and ethical writing practices. The study reports a potential reduction in critical thinking skills when depending on AI (75%), the risk of excessive reliance on technology (73%), and the prevalence of misinformation and inaccuracies (70%). Furthermore, there is substantial apprehension regarding the ethical implications of unintentional plagiarism (69%) and algorithmic biases (40%).

Marzuki et al. (2023) investigated the range of available AI writing tools and evaluated their influence on student writing, particularly concerning content and organization, as perceived by English as a Foreign Language (EFL) teachers. Employing a qualitative approach, the research was conducted as a case study. Data collection involved semi-structured interviews, focusing on gathering insights into the variety of AI writing tools and their effects on the quality of students’ writing. The findings of this study suggest that the integration of AI writing tools can be advantageous in enhancing the quality of EFL student writing. The study also reported ethical concerns related to plagiarism, hallucination, and algorithm bias.

Dialogue systems offer efficiency gains in academic writing, yet there is a cautionary note against potential overdependence, which might impede the development of critical thinking and writing skills within the academic community. Dergaa et al. (2023) explore the potential advantages and drawbacks of ChatGPT and other Natural Language Processing (NLP) technologies in academic writing and research publications and the influence they might exert on the authenticity and trustworthiness of academic work. The study found that ChatGPT possesses the capacity to augment the efficiency of academic writing and research. The study places particular emphasis on upholding ethical and academic principles, with human intelligence and critical thinking serving as pivotal elements in the research process.

The dual nature of AI in academic writing is evident: while it offers significant benefits in enhancing writing skills and efficiency, it also presents concerns regarding over-reliance, reduced originality, and potential ethical challenges, such as plagiarism and biases, within higher education. Santiago Jr et al. (2023) delved into text mining techniques to extract patterns and trends related to the utilization of writing assistance tools in research. The data for this analysis is derived from the responses of 327 faculty researchers from various higher learning institutions in the Philippines. The study found that AI dialogue systems may pose challenges in specific higher education institutions, including concerns about over-reliance, which could potentially impede the development of critical thinking and writing skills among students and researchers. Koos and Wachsmann (2023) delved into the impact and implications of AI-driven language systems, such as ChatGPT/GPT-4, on academic paper writing within the context of universities and other higher educational institutions. The findings underscore the positive role of ChatGPT/GPT-4 in assisting students and researchers in streamlining the writing process, overcoming language barriers, and enhancing overall productivity. The study also raised concerns about AI-generated content, including plagiarism, the potential erosion of critical thinking skills, and a potential reduction in creativity within the realm of academic writing.

Analytical thinking abilities

Three studies examined the use of AI dialogue systems and generative models in education and research, highlighting their potential to enhance research writing, customize learning, and provide 24/7 feedback, while cautioning against overdependence, which may erode analytical and critical thinking skills, writing proficiency, and understanding of plagiarism, potentially fostering academic dishonesty. For example, Abd-Alrazaq et al. (2023) explored text mining methods to uncover patterns and trends in the use of dialogue systems of generative models for research, based on the feedback of 327 faculty researchers from diverse higher education institutions in the Philippines. The results showed that faculty participants appreciate the benefits of dialogue systems in bolstering research writing by streamlining workflows and enhancing clarity.

However, the adoption of these tools in certain higher education settings may face challenges, including potential problems like overdependence, which could impede the development of essential skills such as analytical and critical thinking, writing proficiency, and the understanding of plagiarism among students and researchers. Pokkakillath and Suleri (2023) evaluated the impact of dialogue systems embedded with generative models, such as ChatGPT, on research and educational sectors. The results show that these dialogue systems hold the potential to transform the educational environment by providing immediate feedback, customizing learning experiences to meet individual requirements, and ensuring availability 24/7. However, the authors argue that when students become overly dependent on dialogue systems equipped with generative capabilities for completing assignments or generating creative work without applying their analytical thinking and decision-making skills, such reliance could diminish their ability for independent thought and analytical reasoning. Grassini (2023) examines the effectiveness of the AI models within research and education domains. The findings reveal that these models’ text-generation abilities can mimic human writing. However, critics argue that reliance on AI could diminish students’ critical thinking and analytical capabilities, as well as potentially foster academic dishonesty.

Question 2: What are the primary ethical concerns causing over-reliance on AI dialogue systems in research and education?

The systematic review found that over-reliance on AI dialogue systems within research and education is primarily driven by ethical concerns. These issues encompass the generation of misleading or fabricated information by AI (often referred to as "hallucination"); algorithmic bias, which can perpetuate existing inequalities; plagiarism concerns that challenge academic integrity; privacy issues related to the handling of sensitive information; and a lack of transparency in how AI systems make decisions and process data.

Over-reliance on AI

Four studies examined the complex implications of integrating AI tools into educational settings. Abd-Alrazaq et al. (2023) cautioned against the propensity of generative AI tools to fabricate so-called facts and generate false information convincingly. This may lead users to place undue trust in these technologies, escalating the risk of dependency. Such over-reliance could impede the cultivation of essential skills in medical students, including critical thinking, problem-solving, and effective communication. The convenience offered by AI tools in providing quick answers might deter students from engaging in thorough research and forming their own insights, challenging the integration of these tools in ways that enhance rather than diminish critical faculties and problem-solving abilities.

Duhaylungsod and Chavez (2023) argued that over-reliance on AI could reduce students’ creativity and innovation. The authors believed that students were overly reliant on technology for information, potentially undermining their capacity for independent critical thinking and problem-solving.

Koos and Wachsmann (2023) discussed the detrimental effects of over-reliance on AI dialogue systems, noting that it may compromise the development of students’ critical thinking and problem-solving skills. They argue that if students lean too heavily on AI for content generation, they risk not developing the ability to analyze information, construct logical arguments, or integrate knowledge from diverse sources for academic and professional success. Santiago Jr et al. (2023) reported mixed reactions from users and faculty regarding the use of AI tools. While some appreciate these tools' support in enhancing writing skills, there is a prevailing concern about potential overdependence leading to reduced effort in crafting well-structured sentences and adhering to proper grammar and spelling. The faculty fear that such reliance could compromise the development of essential research and writing skills. Overusing AI tools might also weaken the practice of evaluating information sources critically, cross-referencing data, and cultivating a deep understanding of research topics, ultimately affecting the capacity for independent analysis and interpretation.

Ethical concerns

Studies highlight ethical issues surrounding AI dialogue systems in research and education. Scholars underline the potential and limitations of AI in generating scientific content and specialized reasoning, with concerns over AI's ability to produce credible references, the risks of hallucination in various contexts, and limited mechanistic reasoning capabilities (Lee et al., 2023). Moreover, studies identified algorithmic bias as a significant issue, primarily attributed to the datasets used for training, with Large Language Models like GPT-4 potentially reinforcing social biases and stereotypes (Grassini, 2023). Privacy concerns were also a focal point, with studies indicating that AI systems could inadvertently disclose personal information (Abd-Alrazaq et al., 2023). EFL students face challenges related to breaches of academic integrity (Duhaylungsod & Chavez, 2023). Finally, transparency concerns were raised due to the lack of clarity on AI data sources (Dergaa et al., 2023). Figure 3 outlines these five ethical issues of AI identified through the course of this study, highlighting the critical areas of concern in the intersection of AI and literary analysis.

Fig. 3: Five ethical issues of AI identified in this study

AI hallucination

Three studies critically examined the capabilities of AI dialogue systems in generating scientific content and their proficiency in specialized reasoning tasks, highlighting both the potential and the limitations of AI in academic and professional domains. In Gao et al.’s (2022) study, the authors utilized 50 abstracts sourced from five scientific journals and assigned ChatGPT to create abstracts based on their titles. Subsequently, both sets of abstracts underwent scrutiny by AI plagiarism detection software and impartial human assessors. The results revealed that out of the abstracts generated by ChatGPT, 68% were correctly identified as such (true positives), while 14% of authentic abstracts were mistakenly categorized as chatbot-generated (false positives). These findings indicate that AI dialogue systems’ capacity to generate credible references for research topics might be constrained by the presence of DOIs and the accessibility of online articles.
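
Reading those percentages against the study design (equal groups of 50 abstracts are assumed here), the implied confusion matrix and summary metrics can be worked through as follows:

    # Reported rates: 68% of generated abstracts correctly flagged (true
    # positives); 14% of authentic abstracts wrongly flagged (false positives).
    n_generated, n_authentic = 50, 50   # assumed group sizes
    tp = round(0.68 * n_generated)      # 34 generated abstracts caught
    fn = n_generated - tp               # 16 passed as human-written
    fp = round(0.14 * n_authentic)      # 7 authentic abstracts wrongly flagged
    tn = n_authentic - fp               # 43 correctly passed

    precision = tp / (tp + fp)                          # ~0.83
    accuracy = (tp + tn) / (n_generated + n_authentic)  # 0.77
    print(f"precision={precision:.2f}, accuracy={accuracy:.2f}")

In other words, under these assumptions roughly one in three AI-generated abstracts would pass undetected.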

Lee et al. (2023) argue that in medical contexts, these hallucinations can be especially risky because they might be nuanced, and the chatbot often delivers them in a persuasive way that can lead the user to believe in their accuracy. Watts et al. (2023) evaluated the mechanistic reasoning of three dialogue systems (ChatGPT-3.5, ChatGPT-4, and Bard). The findings revealed that chatbot responses either underperform or reason on par with students and occasionally provide inaccurate answers to content questions. Further, the results suggest that dialogue systems exhibit limited mechanistic reasoning capabilities compared to students. This limitation is attributed to the infrequent discussion of electron movement, a central feature of mechanistic reasoning. Electron movement is pivotal because it explicitly signifies reasoning at a scalar level, representing the primary phenomenon of interest, and is thus a key component of mechanistic reasoning.

Algorithmic bias

Ten studies identified instances of algorithmic bias within their research. The majority attributed these biases to the datasets used for training the algorithms. For example, Abd-Alrazaq et al. (2023) highlighted that recent Large Language Models (LLMs), such as GPT-4, are developed using extensive datasets from various internet sources, including websites, books, news articles, scientific publications, and movie subtitles. However, this extensive data collection process does not preclude the inclusion of biased or unrepresentative information within these models. OpenAI has recognized the possibility that GPT-4, similar to its predecessors, may produce responses that inadvertently reinforce existing social biases and stereotypes. This issue is of particular concern in scenarios where an LLM is trained on data that disproportionately focuses on disease prevalence within specific ethnic groups, potentially leading to biased outputs in essays, exams, and clinical case scenarios.

Grassini (2023) found that the risk of bias stems predominantly from training datasets. This is particularly evident in scenarios where models, such as those used for essay evaluation, are trained on data that mainly represents a single demographic, leading to a potential bias against essays authored by individuals from other demographics. The root causes of these biases are multifaceted, often stemming from an overemphasis on research and educational materials sourced from economically advanced countries, or from textbooks that neglect a comprehensive global perspective. The author stated that there have been observations that ChatGPT’s responses can exhibit biases related to politics, religion, race, gender, and equity.

Similarly, Kim et al. (2023) argued that the design choices made in the development of explanation interfaces for AI tools can inadvertently contain biases. Their study (Kim et al., 2023) focuses on difficulties encountered by EFL learners when using AI dialogue systems to paraphrase texts. The way an explanation is visualized or conveyed to users may significantly impact their decision-making processes. Such design nuances could skew users’ perceptions, leading to text outcomes that are inadvertently biased. The researchers further contend that when users engage in collaborative writing with AI, the interaction can subtly influence their perspectives, potentially shaping the narrative or substance of the collaborative text. This phenomenon suggests that the way AI explanations and interactions are structured and presented could have profound implications for the objectivity and inclusiveness of the content generated through such partnerships.

Plagiarism

Three studies explored the academic challenges faced by students and the role of AI dialogue systems in creating a more inclusive educational environment, amid broader concerns about plagiarism and the integrity of academic publications. Kim et al. (2023) argue that a significant challenge for higher education students from non-English speaking backgrounds is language barriers, which can impede their academic progress. This can lead to feelings of exclusion or fear of missing out and increase the risk of academic integrity breaches, like unintentional plagiarism. AI dialogue systems like ChatGPT, with capabilities such as language editing, translation, adaptive learning from human prompts, and swift responses, might offer a more equitable academic environment for these students. Santiago Jr et al. (2023) found that the rise of journals that neglect essential quality controls, like verifying for plagiarism or ensuring ethical standards, might result in a significant influx of AI-generated articles within the scientific realm. Such a trend could gravely undermine the credibility of scientific studies and tarnish the prestige of scholarly publications. Dergaa et al. (2023) claimed that renowned plagiarism detection tools like Turnitin were largely unsuccessful in spotting plagiarism in essays based on existing literature. Alarmingly, these tools detected plagiarism in fewer than 15% of cases, raising fears about students potentially leveraging the platform for academic tasks.

Privacy concerns

Eight studies briefly mentioned privacy concerns and underscored the importance of addressing these issues, yet only two conducted an in-depth exploration of the matter. For example, Abd-Alrazaq et al. (2023) found that LLMs can lead to the disclosure of personal information by students and educators, such as names, email addresses, phone numbers, prompts, uploaded images, and images generated by the AI. OpenAI might utilize this personal information for a variety of purposes, including service analysis, maintenance, enhancement, research activities, fraud prevention, compliance with legal obligations, and potentially sharing this data with third parties without additional notice or consent from users.

Abd-Alrazaq et al. (2023) examined how generative AI models are revolutionizing medical curriculum development, teaching methodologies, personalized study plans, learning materials, and student assessments. The authors emphasize the significance of proper citation and attribution in academic settings, such as medical schools, and the need to educate users on navigating the challenges associated with user privacy, copyright concerns, misinformation, and biases. They identified several key challenges that need to be addressed in the educational use of AI, including academic dishonesty, misinformation, privacy concerns, copyright issues, dependency on AI, algorithmic bias, the need for consistency and human interaction, and disparities in access. These are crucial areas where awareness and guidelines must be developed to ensure the ethical and effective use of AI technologies in educational contexts.

Dempere et al. (2023) posited that integrating AI dialogue systems into higher education comes with various risks, including privacy concerns, illegal usage of data, false information, cognitive biases, diminished human interaction, restricted access, and unethical data collection practices. The authors warned of the dangers associated with adopting AI technologies in academic settings, particularly emphasizing the potential continuation of existing systemic biases and discrimination. They highlighted the risk of reinforcing inequalities for students from historically underserved and marginalized communities, exacerbating racism, sexism, xenophobia, and other forms of prejudice and injustice. Furthermore, the authors cautioned against the deployment of AI systems that can monitor and analyze students’ thoughts and ideas, warning of the creation of surveillance mechanisms that could infringe upon student privacy.

Transparency concerns

Nine studies briefly address transparency concerns within dialogue systems; however, only a few undertake in-depth analysis. For example, Dergaa et al. (2023) argue that biases and inaccuracies in AI systems can arise from a lack of transparency in the training datasets. They stress the importance of educating students on the ethical use of dialogue systems and advocating for principles of honesty, integrity, and transparency, and they recommend establishing fundamental guidelines for interacting with these systems. Koos and Wachsmann (2023) stated that the lack of transparency about ChatGPT’s data sources in text generation poses a critical challenge, especially in academic contexts where proper citation and attribution are fundamental to upholding integrity and preventing plagiarism. The authors argue that this opacity regarding the content generated by ChatGPT complicates the accurate attribution of specific ideas or concepts to their original authors, potentially compromising academic integrity.

Discussion

This systematic review shows that AI dialogue systems integrating generative models can augment the efficiency of academic writing and research (Carvalho et al., 2019; Gao et al., 2022). These technologies reduce the time spent on research and information retrieval (Eapen, 2023). Furthermore, the adoption of AI in the realm of literary studies has forged novel avenues for the analysis and interpretation of literary narratives (Ahmad et al., 2023; Krullaars et al., 2023). Despite these advancements, the integration of AI presents several challenges. Ethical concerns have been raised regarding the potential for AI hallucinations (Gao et al., 2022), algorithmic bias (Abd-Alrazaq et al., 2023), plagiarism (De Angelis et al., 2023), privacy issues (Dempere et al., 2023), transparency concerns (Dergaa et al., 2023), and over-reliance (Koos & Wachsmann, 2023). Moreover, the impact of AI on cognitive abilities, including decision-making, critical thinking (El Soufi & See, 2019), and analytical thinking (Pokkakillath & Suleri, 2023), remains a significant area of concern in the deployment of AI technologies.

Over-reliance on AI

The findings collectively emphasize significant concerns regarding over-reliance on AI dialogue systems in educational settings. Abd-Alrazaq et al. (2023) highlight the risk of AI tools generating convincingly false information, leading to undue trust in these technologies and impeding the development of critical thinking, problem-solving, and effective communication skills. This dependency is problematic as it can deter students from engaging in thorough research and forming their own insights, potentially diminishing their critical faculties. Duhaylungsod and Chavez (2023) argue that the ease of access to AI tools can lead to complacency, reducing students' creativity and innovation. Their research suggests that an over-reliance on AI can undermine students' capacity for independent critical thinking and problem-solving, as they may become too dependent on AI-generated information. This concern is echoed by Koos and Wachsmann (2023), who discuss the detrimental effects of over-reliance on AI dialogue systems. They caution that students leaning too heavily on AI for content generation risk not developing essential skills such as analyzing information, constructing logical arguments, and integrating diverse knowledge and skills that are crucial for both academic and professional success. Santiago Jr et al. (2023) provide a nuanced view, reporting mixed reactions from users and faculty regarding the use of AI tools. While some users appreciate the enhancement in writing skills facilitated by AI tools, there is a prevailing concern about potential overdependence. This overdependence could lead to reduced effort in crafting well-structured sentences, adhering to proper grammar and spelling, and critically evaluating information sources. Such a trend might ultimately weaken students' ability to perform independent analysis and interpretation, thereby compromising the development of essential research and writing skills.

Decision-making abilities

Three studies collectively highlighted the benefits and challenges of using AI dialogue systems in academic settings, emphasizing task efficiency alongside concerns about dependency, comprehension, originality, and data bias. Concern about overreliance on AI in the research and educational sector is growing, particularly regarding the diminishment of decision-making capabilities in academic writing and the tendency to encourage academic laziness (Sabharwal et al., 2023). A study employing qualitative methods and SmartPLS (partial least squares) for data analysis gathered primary data from 285 students across various universities in Pakistan and China. It revealed that overreliance on AI dialogue systems embedded with generative models resulted in 68.9% of students exhibiting increased laziness and 27.7% experiencing a degradation in decision-making abilities, attributable to AI’s influence in Pakistani and Chinese societies. The authors observed a progressive decline in decision-making capabilities over the course of the study, which they attributed to the integration of generative functions within the AI dialogue system (Ahmad et al., 2023).

Decision-making abilities are critical for processing and reasoning through complex information across diverse domains, including research and education, in turn nurturing proficient decision-making capabilities (Duhaylungsod & Chavez, 2023). Morelli et al. (2022) detailed the decision-making process through distinct phases: stimuli presentation predicting outcomes, evaluation of options and formation of preferences, action selection in response to stimuli, and evaluation of those actions, offering a thorough perspective on the structure of decision-making. Further research underscores the growing role of AI in human decision-making processes (Bankins et al., 2022; Kim et al., 2023; Semrl et al., 2023). The continuous replacement of human roles in decision-making by AI undermines important mental practices such as decision-making, critical thinking, and analytical thinking (Jain et al., 2023).

In exploring decision-making theories, a distinction can be seen between descriptive and normative theories (Bell et al., 1988). Descriptive theories focus on understanding actual decision-making behaviors, including both rational and irrational elements, through empirical research. They highlight how deviations from rational choice theory, influenced by cognitive biases (deviation from norm or rationality in judgment) and heuristics (simplified strategies or mental shortcuts), can be understood as adaptations shaped by evolutionary pressures. Normative theories, on the other hand, advocate for decisions that maximize expected utility, grounded in mathematical models and ideal behavioral principles. This contrast underscores the complexity of decision-making as AI assumes roles traditionally filled by human cognitive processes (Damnjanović & Janković, 2014).
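To make the normative benchmark concrete (an illustrative formulation of our own, not drawn from the reviewed studies), expected utility theory prescribes choosing the action with the highest probability-weighted utility:

\[ EU(a) = \sum_{s \in S} p(s)\, u(o(a, s)), \qquad a^{*} = \arg\max_{a \in A} EU(a) \]

where A is the set of available actions, S the set of possible states of the world, p(s) the probability of state s, and u(o(a, s)) the utility of the outcome of taking action a in state s. A heuristic, by contrast, replaces this exhaustive weighting with a shortcut, for example accepting the first plausible AI-generated answer, which is faster but carries no guarantee of maximizing expected utility.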

With the previously mentioned concept of heuristics in decision-making, scholars argue that decision-making tends toward fast and optimal solutions rather than slow ones constrained by practicality. This explains why users tend to opt for efficient cognitive shortcuts (heuristics) that, despite the tendency for bias in AI-generated output, offer a quicker, more intuitive approach to decision-making compared to slower, analytical methods (Bankins et al., 2022; Kim et al., 2023; Semrl et al., 2023). Further arguments suggest that while humans perceive benefits and time savings from integrating AI into decision-making, this reliance may diminish cognitive capabilities by overshadowing human biological processing abilities (Politanskyi & Klymash, 2019; Tolan et al., 2021).

Some scholars delve further into neuroscience. The decision-making process encompasses the evaluation of potential actions and the interplay between cognitive and affective neurocircuits in forming preferences (Morelli et al., 2022; Yoder & Decety, 2018). This thorough examination of decision-making illuminates both the cognitive and neurological foundations of the process and highlights the diversity of approaches in analyzing decision-making strategies, bridging the gap between rational choice theory’s normative ideals and the practical adaptations identified by descriptive theories (Padilla et al., 2018). Decision-making abilities are influenced by the ventromedial prefrontal cortex (vmPFC) and the amygdala, highlighting the significant overlap between the cognitive and emotional aspects of these processes (Ishikawa et al., 2020; Qiu et al., 2018; Tang et al., 2021). The vmPFC plays a pivotal role in encoding the value of rewards and punishments, integrating emotional and motivational data to guide decision-making (Hiser & Koenigs, 2018). Concurrently, the amygdala is essential for processing emotional reactions and fear learning, which are integral to evaluating the emotional salience of different options (Ishikawa et al., 2020). Furthermore, the prefrontal cortex (PFC) and the hippocampus emerge as critical structures in the decision-making landscape. The PFC is involved in various executive functions, including planning, reasoning, and problem-solving, which are crucial for making informed decisions (Qiu et al., 2018). The hippocampus, on the other hand, plays a key role in memory formation and retrieval, providing the necessary context to inform present decisions (Tang et al., 2021). Such cognitive processes are essential for academic rigor and avoiding laziness, suggesting that disengagement from challenging cognitive tasks could potentially weaken activity in these key neural regions: the vmPFC, the amygdala, the PFC, and the hippocampus (Friedman & Robbins, 2022).

Critical thinking abilities

Six studies examined the integration of AI in academic writing, revealing its positive impact on skills and efficiency while highlighting significant concerns about creativity and critical thinking, as well as ethical issues such as plagiarism and algorithmic bias. Critical thinking is a multifaceted skill that encompasses more than just the ability to analyze an event; it involves synthesis, evaluation, and judgment based on specific criteria to ensure that evaluations are not made arbitrarily but are conducted with order and consistency (Malik et al., 2023; Marzuki et al., 2023). This comprehensive approach emphasizes the importance of a structured and criterion-based evaluation process (Dergaa et al., 2023). McPeck (2016) further defines critical thinking as the capacity to identify, analyze, and evaluate the necessary components to achieve an accurate outcome, highlighting the goal-oriented nature of critical thinking in achieving precise results. Alkhatib (2019) describes critical thinking as a purposeful and logical approach employed in decision-making, problem-solving, and understanding fundamental concepts, emphasizing its utility across various domains of knowledge and action.

The overreliance on AI for information acquisition can negatively impact both critical thinking skills and dispositions (Guo & Lee, 2023). Critical thinking dispositions refer to the underlying attitudes and qualities that facilitate engagement in critical thinking activities, including the desire to be informed, the ability to consider multiple perspectives, the identification of relationships, reflective thinking, evidence-seeking, skepticism, respect for others’ views, and tolerance (Facione & Facione, 1996). Facione and Facione (1996) categorize these dispositions into six dimensions: inquisitiveness, open-mindedness, systematicity, analyticity, truth-seeking, and confidence in reasoning. These dimensions encapsulate the essential traits that support and enhance the critical thinking process, from the curiosity to learn and the openness to diverse viewpoints to the methodical approach to problems and the reliance on evidence for problem-solving. Ersoy and Baser (2012) conducted a study with 615 primary education students to examine their critical thinking dispositions. The findings indicated that students struggled to develop higher-order thinking skills due to low scores in critical thinking disposition, suggesting a direct link between disposition and the ability to engage in complex cognitive processes. This supports the argument that nurturing critical thinking dispositions is crucial for developing effective critical thinking skills.

Critical thinking at higher levels involves considering evidence, context, conceptualization, methods, and the criteria required for judgment (Dergaa et al., 2023), emphasizing a comprehensive and evaluative approach to thinking that goes beyond surface-level analysis (Rodriguez & Towns, 2018). However, it has been noted in various studies that the education provided in faculties of education often fails to contribute to developing critical thinking dispositions. This results in educators who possess low to medium levels of critical thinking dispositions, underscoring the need for educational strategies that not only teach critical thinking skills but also foster the dispositions necessary for their effective application in teaching and learning contexts (McPeck, 2016).

Analytical thinking abilities

Three studies examined the use of AI dialogue systems and generative models in education and research, highlighting their potential to enhance research writing, customize learning, and provide 24/7 feedback, while cautioning against overdependence, which may erode analytical and critical thinking skills, writing proficiency, and understanding of plagiarism, potentially fostering academic dishonesty. Duhaylungsod and Chavez’s (2023) study indicated that over-reliance on AI dialogue systems fostered complacency and an undue dependency on these systems. Kim et al.’s (2023) study of EFL learners using AI dialogue systems for various academic writing tasks suggests that this undermines learners’ ability to analyze and grasp information independently, thereby impairing their decision-making abilities. Over-reliance on AI also has a negative effect on analytical thinking abilities, as it can lead to uncritical acceptance of biased or inaccurate AI-generated content (hallucination) (Ismail, 2023). This underscores the need for heightened awareness and scrutiny of cognitive biases and their impact on analytical thinking abilities (Sok & Heng, 2023). Such biases, often stemming from overconfidence or ignorance in one’s perceptions, can significantly influence analytical thinking processes and the acceptance of information, including the uncritical reception of AI-generated content (Ismail, 2023).

Analytical thinking embodies thorough exploration and critical data analysis, vital for problem-solving and informed decision-making (Pokkakillath & Suleri, 2023). These elements are crucial for enhancing learning experiences, as they pertain to reasoning, planning, inquiry, interpretation of findings, and the subsequent derivation of conclusions in research and education (Ismail, 2023). Ideal analytical thinking is characterized by methodical reasoning in research and education, with the primary goal of uncovering truth and facts to facilitate learning. This process involves presenting well-founded reasons to support claims, which act as the premises leading to logical conclusions (Monteiro et al., 2020). However, scrutinizing and establishing arguments requires significant cognitive effort and time (Stromer-Galley et al., 2021). To streamline analytical thinking processes, we often resort to mental shortcuts (heuristics), which are generally efficient and beneficial in managing timely and content-related decisions but can inadvertently lead us to accept incorrect information as factual (Kelley et al., 2023).

Challenges such as AI hallucinations, algorithmic biases, privacy and transparency issues, questions of source credibility, and social norms can disrupt the flow of the analytical thinking process. These obstacles may lead us to accept information as true or judge its veracity on flawed grounds, influencing our reasoning while overlooking critical evaluation steps (Fui-Hoon Nah et al., 2023; Kim et al., 2023).

To address over-reliance effectively, incorporating AI-assisted technologies can be a strategic approach to enhance learners’ critical thinking skills. For instance, AI can serve as a tool for stimulating engagement with critical thinking activities, encouraging learners to question assumptions, evaluate the reliability of information, and make informed decisions. By integrating AI into educational activities, learners can be guided to critically assess AI-generated content, understand its limitations, and appreciate the value of human insight and creativity. Activities that combine AI with traditional learning materials can challenge learners to scrutinize AI responses critically, identify biases, and develop balanced arguments or solutions based on a comprehensive understanding of the subject matter.

Educational strategies that utilize AI while emphasizing the development of critical thinking skills can help mitigate the risks associated with over-reliance. These strategies include prompting learners to compare AI-generated ideas with their own, encouraging reflection on the biases present in AI outputs, and engaging in activities that require critical evaluation and synthesis of information from AI and human sources alike. By actively involving learners in analyzing and evaluating AI-generated content, educators can foster a learning environment that not only leverages the benefits of AI but also cultivates essential cognitive skills such as critical analysis, problem-solving, and decision-making.

AI hallucination

The findings reveal that AI dialogue systems embedded with generative models that closely mimic human writing and enable automated dialogue hold significant potential across multiple fields, notably in education (Gao et al., 2022; Lee et al., 2023). Despite the evident benefits, the integration of such AI technologies in educational environments has provoked a spectrum of responses: some educators view this as a forward-thinking innovation that can enhance learning and teaching methodologies, while others remain concerned about its implications (Wach et al., 2023; Zhou et al., 2023). Critics argue that reliance on AI could diminish students’ critical thinking and analytical capabilities, as well as potentially foster academic dishonesty, particularly when language models present incorrect or misleading information as factual. The issue of AI hallucination became prominent around 2022 with the rise of LLMs like ChatGPT, which often produce responses interspersed with inaccuracies or fabrications (Khlaif et al., 2023). Reasons for AI hallucinations include data inconsistencies in vast datasets, training inaccuracies during the encoding and decoding phases, and biased sequences (Khlaif et al., 2023).

Li and Little (2023) posit that individuals using AI dialogue systems, irrespective of their expertise level in the subject matter, are at risk of becoming overly reliant on them. Those with lower subject matter expertise are particularly prone to trust the AI’s advice, even when it is incorrect, the phenomenon referred to earlier as "hallucination." The authors argue that this issue is especially prevalent during the initial training phases for individuals with limited knowledge in the subject area, which increases their likelihood of depending too much on AI dialogue systems. Such over-reliance can lead to suboptimal outcomes in their decision-making processes and tasks.

The phenomenon of hallucination poses a significant challenge for educators and students using generative dialogue systems and underscores the importance of awareness regarding AI hallucinations. An over-reliance on AI algorithms can result in complacency, which is particularly detrimental to the critical thinking abilities of clinicians, especially those with less experience (Gao et al., 2022; Lee et al., 2023).

Algorithmic bias

Ten studies identified instances of algorithmic bias within their research. The majority attributed these biases to the datasets used for training the algorithms. Abd-Alrazaq et al. (2023) highlighted that recent LLMs are developed using extensive datasets from various internet sources, including websites, books, news articles, scientific publications, and movie subtitles. Gichoya et al. (2023) argue that AI systems, trained on extensive datasets sourced from the internet, inherently mirror societal biases. This mirroring results in AI inadvertently perpetuating those biases (Tejani et al., 2023). One observable impact of this in educational settings is when dialogue systems use gendered pronouns based on stereotypes, which can negatively influence students’ learning experiences and improperly shape their perceptions of the world (O’Connor & Liu, 2023). The phenomenon of bias in educational assessments, recognized since the 1960s, foreshadowed many facets of bias and fairness currently under scholarly examination, including societal, population, representative, aggregation, feedback, and reuse biases throughout the machine-learning lifecycle (Schwartz et al., 2022). Consequently, this issue introduces an ethical quandary by transferring the responsibility of ensuring fairness from policymakers to educators (Scatiggio, 2022). To tackle algorithmic bias effectively, a deliberate and analytical approach to developing and deploying AI in education is required. The goal is to harness the potential of dialogue systems in a manner that enhances learning without perpetuating societal biases (Gichoya et al., 2023).
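To make the dataset-imbalance argument concrete, the sketch below (our own illustration, not drawn from any reviewed study; the model interface, data, and group labels are hypothetical) shows how an essay-scoring model's outputs could be audited for score gaps across demographic groups:

from statistics import mean

def audit_score_gap(model, essays, groups):
    """Compare mean predicted essay scores across demographic groups.

    model: any scorer with a scikit-learn-style predict() method (assumed).
    essays: list of essay texts; groups: parallel list of group labels.
    Returns a dict mapping each group label to its mean predicted score.
    """
    scores = model.predict(essays)
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    return {g: mean(s) for g, s in by_group.items()}

# Usage (hypothetical): a persistent gap between groups on essays of
# comparable quality, e.g. {'group_A': 4.2, 'group_B': 3.1}, would flag
# the kind of training-data imbalance described above.

Audits of this kind do not fix a biased model, but they give educators and developers a concrete signal for when retraining on more representative data is warranted.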

Plagiarism

Three studies explored the academic challenges students face and the role of AI dialogue systems in creating a more inclusive educational environment amid broader concerns about plagiarism and the integrity of academic publications. Lim et al. (2023) argue that a significant challenge for higher education students from non-English speaking backgrounds is language barriers, which can impede their academic progress. This can lead to feelings of exclusion or fear of missing out and increase the risk of academic integrity breaches, like unintentional plagiarism. This finding is consistent with existing research: in the integration of AI technologies into education, plagiarism emerges as a significant ethical issue.

The ease of use of AI-powered tools like ChatGPT could tempt students to present AI-generated content as their own, undermining the integrity of academic work. This concern is amplified in education systems that prioritize outcomes, such as grades or qualifications, over the learning process itself, a trend observed in various phases of the Australian education system (Kumar et al., 2023). For non-native English speakers, AI dialogue systems offer substantial benefits in enhancing language proficiency, highlighting the dual-edged nature of AI in education (Hwang et al., 2023). De Angelis et al. (2023) found that the rise of journals that neglect essential quality controls, like checking for plagiarism or ensuring ethical standards, might result in a significant influx of AI-generated articles within the scientific realm. Such a trend could gravely undermine the credibility of scientific studies and tarnish the prestige of scholarly publications. This finding is similar to Fyfe’s (2023) study, which found that the ability of AI dialogue systems to generate complex textual responses and complete assignments poses a risk of encouraging academic dishonesty, particularly in environments that value high grades and qualifications. Relying on AI for ethical risk assessments in research might also overlook the educational value of students learning to identify and manage these risks themselves.

Addressing plagiarism in the AI context requires a multifaceted approach, emphasizing the importance of academic honesty and the detrimental effects of plagiarism on moral development and learning integrity (Lukac & Lazareva, 2023). Establishing clear policies on academic misconduct and introducing advanced plagiarism detection tools that can adapt to AI’s evolution are critical steps (Mulenga & Shilongo, 2024). Some skepticism remains about the ability of such technologies to stay ahead of AI advancements without generating false positives. Revising assessment methods to focus on understanding, originality, and skills beyond AI’s capabilities is advocated (Dalalah & Dalalah, 2023).
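As a simplified illustration of the lexical-overlap checks that conventional detection tools build on (a sketch of our own, not any vendor's actual method), cosine similarity over TF-IDF vectors flags verbatim copying but scores paraphrased text low, which is one reason such tools struggle to keep pace with generative AI:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def overlap_score(submission: str, source: str) -> float:
    """Cosine similarity (0 to 1) of TF-IDF vectors for two texts."""
    tfidf = TfidfVectorizer().fit_transform([submission, source])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Verbatim copying scores near 1.0; an AI paraphrase shares few exact
# terms with the source, so purely lexical matching largely misses it.
print(overlap_score("The cat sat on the mat.",
                    "The cat sat on the mat."))    # ~1.0 (identical)
print(overlap_score("A feline rested upon the rug.",
                    "The cat sat on the mat."))    # low (only "the" overlaps)

This gap between surface-level matching and meaning-level reuse is precisely why the adaptive detection tools advocated above, and revised assessment designs, are needed alongside policy.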

Privacy concerns

Two studies conducted an in-depth exploration of privacy concerns. Abd-Alrazaq et al. (2023) found that LLMs can lead to the disclosure of personal information by students and educators, such as names, email addresses, phone numbers, prompts, uploaded images, and AI-generated images. This finding echoes Kronemann et al.’s (2023) study, which shows that integrating dialogue systems in research and education environments demonstrates a shift towards sophisticated data handling practices, including collecting, analyzing, and storing student information. This data, extending beyond academic performance to encompass sensitive personal details, enables predicting which students are at risk of falling behind, facilitating the development of targeted support and early intervention strategies. However, the advent of big data in education raises critical concerns regarding privacy and data protection, areas that remain underexplored in scholarly literature (Hu & Min, 2023). The transition from rule-based to more advanced NLP and machine-learning techniques in chatbot technology introduces additional complexities (Mahendran et al., 2021; Wu et al., 2023). These methods learn from data that may contain personal information, presenting a dilemma for encrypted data learning and highlighting the need for a nuanced approach to policy-making in this domain (Curzon et al., 2021).

Addressing these concerns requires a multi-faceted strategy emphasizing data protection, secure storage, and the anonymization or deletion of data post-use, ensuring its application remains strictly educational (Wu et al., 2023). Moreover, it is imperative to cultivate an environment of transparency and awareness among students, parents, educators, and stakeholders regarding data protection measures, promoting an informed understanding of how personal information is utilized within educational frameworks (Alawida et al., 2023).
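The sketch below gives a minimal version of the anonymization step recommended above (our own illustration; the record fields, salt handling, and regular expression are hypothetical simplifications of what a production pipeline would need):

import hashlib
import re

DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SALT = "replace-with-a-managed-secret"  # placeholder; store outside code

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def anonymize_record(record: dict) -> dict:
    """Drop direct identifiers, pseudonymize the student ID, and scrub
    email-like strings from free-text fields such as chatbot prompts."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "student_id" in clean:
        clean["student_id"] = pseudonymize(str(clean["student_id"]))
    if "prompt" in clean:
        clean["prompt"] = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]",
                                 clean["prompt"])
    return clean

record = {"name": "A. Student", "email": "a.student@uni.example",
          "student_id": 42,
          "prompt": "Please email a.student@uni.example my essay feedback."}
print(anonymize_record(record))
# Output keeps a pseudonymized ID and the scrubbed prompt only.

Pseudonymization of this kind preserves the ability to link a student's records for targeted support while keeping direct identifiers out of stored prompts and analytics.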

Transparency concerns

Nine studies briefly address transparency concerns within dialogue systems; however, only a few undertake in-depth analysis. Dergaa et al. (2023) argue that biases and inaccuracies in AI systems can arise from a lack of transparency in the training datasets. The authors stressed the importance of educating students on the ethical use of dialogue systems and advocating for principles of honesty, integrity, and transparency. Some scholars believe that the complexity of transparency within trustworthy AI systems is multifaceted and context-dependent rather than being straightforward or following a linear relationship (Lucic et al., 2021; Mei et al., 2023). Finkenstadt and Handfield (2021) introduced two types of transparency: the visibility and the accessibility of information. A comprehensive review of 84 global ethical AI guidelines reveals that transparency, along with related concepts like explainability, interpretability, and disclosure, emerges as a predominant ethical principle, cited in 73 of these documents (Larsson, 2020; Larsson & Heintz, 2020). In practical terms, transparency is fundamental to ethical AI, encompassing accountability, traceability, justification, and a thorough evaluation of an AI system’s capabilities and limitations. It is divided into two distinct ethical principles: "failure transparency," which focuses on identifying the reasons behind an AI system’s harmful outcomes, and "judicial transparency," which emphasizes the explainability of judicial decision-making processes to experts (Du, 2022).

Transparency and privacy are interrelated (Larsson & Heintz, 2020). While AI offers valuable services, the general public is often uninformed about AI systems using their data and the potential privacy implications, an inherently unethical situation (Anshari et al., 2023).

Implications of integrating AI dialogue systems

This study contributes to existing research from both theoretical and practical perspectives. Theoretically, it identifies the need to balance the benefits of AI dialogue systems against their potential cognitive and ethical challenges. Over-reliance on AI can lead to diminished creativity and critical thinking abilities, as students may become too dependent on AI-generated content and less engaged in developing their own ideas (Duhaylungsod & Chavez, 2023; Kim et al., 2023). This dependency can foster complacency and reduce essential problem-solving skills. Ethical issues such as plagiarism and data bias highlight the need for transparent and fair AI models to ensure academic integrity and fairness (Abd-Alrazaq et al., 2023; De Angelis et al., 2023). Additionally, the impact of AI on decision-making abilities and analytical thinking remains a significant area of concern, suggesting the need for further research into mitigating these adverse effects while leveraging AI's potential benefits (El Soufi & See, 2019; Pokkakillath & Suleri, 2023).

Practically, this study provides valuable information to higher education providers on the benefits of AI dialogue systems, including enhanced efficiency in research and writing processes, improved writing proficiency, and increased student self-confidence. For instance, students can streamline their research and writing processes, quickly retrieving information and generating well-structured content (Duhaylungsod & Chavez, 2023). This increased efficiency can improve writing proficiency and self-confidence as students produce higher-quality work in less time (Malik et al., 2023). Additionally, AI tools can provide immediate feedback, allowing for customized learning experiences and 24/7 support, which can be particularly beneficial in large classroom settings or for distance learning (Pokkakillath & Suleri, 2023).

However, these benefits come with substantial risks. Over-reliance on AI systems can lead to diminished creativity, as students in research and education might depend too heavily on AI-generated content, neglecting the development of their ideas and original thought processes. This dependency can foster complacency, making students less inclined to engage deeply with the material or develop essential problem-solving skills.

Ethical concerns are also significant, particularly regarding plagiarism and data bias. AI tools can generate content that, if used uncritically, may lead to unintentional plagiarism. The lack of comprehensive explanations accompanying AI-generated paraphrases can obscure the nuances of the content, potentially resulting in academic dishonesty (Kim et al., 2023). Moreover, biases in the data used to train AI models can perpetuate existing social biases, leading to discriminatory outcomes and skewed analyses (Grassini, 2023). Over-reliance on AI can also negatively impact critical and analytical thinking abilities, as students may become less adept at independently analyzing information, forming logical arguments, and making well-reasoned decisions (Koos & Wachsmann, 2023). The convenience of AI-generated answers might deter students from engaging in thorough research and critical evaluation of sources, which are crucial for developing robust cognitive skills (Santiago Jr et al., 2023).

To mitigate these risks, it is essential to integrate AI dialogue systems in a balanced manner that promotes the development of critical and analytical thinking skills. Educational strategies should emphasize the importance of questioning AI-generated content, comparing it with human-generated insights, and understanding the limitations of AI. Encouraging reflection on the biases present in AI outputs and engaging students in activities that require critical evaluation and synthesis of information from diverse sources can foster a more nuanced and critical approach to using AI tools (Dergaa et al., 2023).

Conclusion

This systematic review has critically examined the implications of students' over-reliance on AI dialogue systems, especially those embedded with generative models, within educational and research contexts. The findings underscore the significant impact of such overdependence on essential cognitive abilities, including decision-making, critical thinking, and analytical reasoning. Despite the undeniable advantages of AI dialogue systems in streamlining research processes and enhancing academic efficiency, our analysis reveals a concerning trend: the potential erosion of critical cognitive skills due to ethical challenges such as misinformation, algorithmic biases, plagiarism, privacy breaches, and transparency issues. The nuanced exploration of these factors indicates a pressing need to address the ethical considerations surrounding the use of AI dialogue systems to prevent cognitive detriment among users.

Limitations

One of the primary limitations of this study is its narrow scope, focusing predominantly on the negative effects of over-reliance on AI dialogue systems in research and education without equally examining the potential benefits or outcomes in other domains. This limited perspective may result in a skewed understanding of the overall impact of AI dialogue systems. Additionally, the study is based on a review of only 14 articles retrieved from four databases, which, although carefully selected, might not encompass the full range of existing studies on this topic, potentially omitting relevant findings from other significant databases.

The second limitation is that the study is primarily focused on higher education. It does not consider the cognitive developmental differences across various age groups, such as teenagers and primary or secondary school students. Therefore, a more inclusive approach considering different educational levels and age groups would be desirable for providing a more robust and comprehensive analysis of the effects of AI dialogue systems.

Recommendations

Educators and policymakers should integrate critical media literacy into curricula to equip students with the skills to critically evaluate AI-generated content. This includes developing an understanding of AI technologies' underlying mechanisms, potential biases, and ethical considerations. Institutions should implement AI literacy programs that emphasize the ethical use of AI technologies, highlighting the importance of maintaining cognitive skills such as critical thinking and analytical reasoning in the age of automation. Future studies should measure the cognitive impacts of using AI dialogue systems in educational settings. Such research could provide more concrete evidence to guide the development of best practices for AI integration.

Availability of data and materials

Not applicable.


Acknowledgements

Not applicable.

Funding

Funding was provided by Central Queensland University Australia.

Author information


Contributions

C. Zhai was responsible for the overall conceptualization, research, and writing of the manuscript. S. Wibowo contributed to the proofreading and editing of the manuscript. L.D. Li contributed to the proofreading of the manuscript.

Corresponding author

Correspondence to Chunpeng Zhai.

Ethics declarations

Ethics approval and consent to participate

The author intends to submit a comprehensive application for ethical clearance concerning both the research instrument and the data collection process to the CQUniversity Human Research Ethics Committee (CQUHREC). Given that this project constitutes a systematic review, it does not require an ethics approval number.

Consent for publication

The author(s) hereby grant the publisher the irrevocable, perpetual, and unrestricted right to publish, reproduce, distribute, and use the manuscript titled "The Effects of Over-Reliance on AI Dialogue Systems on Students' Cognitive Abilities: A Systematic Review" in any form or medium, including but not limited to digital and print formats. The author(s) affirm that the manuscript is original, has not been published elsewhere in any form, and is not currently under consideration for publication by another journal. The author(s) acknowledge that all data and materials as presented in the manuscript are either original or that all necessary permissions have been obtained for their use. The author(s) guarantee that the publication of the manuscript does not infringe on any third-party rights, including but not limited to privacy rights and intellectual property rights. Furthermore, the author(s) certify that all individuals who have a significant contribution to the manuscript are listed as authors and that consent for publication has been received from all co-authors. The author(s) also ensure that all participants involved in the study have given their consent for the publication of the data. For any individual under the age of 18, consent has been obtained from a parent or legal guardian. This consent for publication also extends to the use of the manuscript's content in subsequent editions, derivative works, and promotional activities conducted by the publisher, in any country and in any language.

Competing interests

The author(s) of the manuscript entitled "The Effects of Over-Reliance on AI Dialogue Systems on Students' Cognitive Abilities: A Systematic Review" hereby declare that there are no competing interests related to this publication. Specifically, this declaration affirms that: None of the author(s) have any personal relationships, affiliations, or engagements within or outside the academic community that could inappropriately influence or bias the work presented in this manuscript. There are no direct or indirect financial interests, such as employment, consultancies, stock ownership, honoraria, patents, or grants, associated with the content of this publication that could be perceived as affecting the objectivity of the research or its interpretation. The authors have no involvement in any organization or entity with a financial or non-financial interest in the subject matter or materials discussed in this manuscript that could be construed as influencing the research. There are no personal connections, academic rivalries, or beliefs among the authors that could be seen as potential conflicts of interest regarding the manuscript's content, methodology, or conclusions.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhai, C., Wibowo, S. & Li, L.D. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11, 28 (2024). https://doi.org/10.1186/s40561-024-00316-7

