Exploring faculty perceptions and concerns regarding artificial intelligence chatbots in nursing education: potential benefits and limitations

Abstract

Aim

To examine faculty perceptions of artificial intelligence (AI) chatbots in nursing education, focusing on their usage patterns, perceived benefits, and limitations.

Design

A cross-sectional study.

Methods

The study surveyed nursing faculty from Jordan and the United States using a self-reported questionnaire. Data were analyzed using descriptive statistics and Multivariate Analysis of Covariance to assess variations in perceptions based on AI chatbot usage frequency and faculty characteristics.

Results

Among 474 faculty members, 82.5% were familiar with at least one AI chatbot. Faculty generally acknowledged the benefits of AI chatbots, including enhanced teaching experiences, improved student engagement, support for independent learning, and quick access to medical knowledge. However, concerns about misinformation, reduced faculty-student interaction, and inadequacies in addressing complex clinical scenarios were prevalent. Legal and ethical issues, particularly the risk of misuse of AI-generated information, were also highlighted. Frequent AI chatbot users demonstrated significantly greater awareness of both the advantages and limitations of AI chatbots compared to infrequent users.

Conclusion

Frequent users demonstrated greater awareness of both the advantages and challenges of AI chatbots, highlighting the role of hands-on experience in shaping adoption. However, faculty adoption is primarily driven by perceived benefits rather than limitations, emphasizing the need to showcase practical advantages while addressing concerns.

Implications

To enhance faculty adoption of AI chatbots in nursing education, institutions should focus on demonstrating their practical benefits while addressing concerns through targeted training and ethical guidelines. Providing hands-on experience and structured exposure can increase faculty confidence, reinforcing both the usefulness of AI chatbots and strategies to mitigate their limitations. Future research may examine the effectiveness of hands-on training programs in shaping faculty perceptions and adoption behaviors, providing valuable insights for enhancing AI integration in nursing education.

Introduction

Artificial Intelligence (AI) utilizes computer systems to perform tasks that require advanced human intelligence, including understanding language, recognizing patterns, solving problems, and making decisions [1]. In recent years, AI has been increasingly integrated into various sectors, including healthcare and education, by automating complex tasks, enhancing decision-making, and facilitating personalized learning [1,2,3,4]. AI applications are widely used to improve diagnostic accuracy, expedite treatment planning, and support patient monitoring [5,6,7,8,9,10]. These advancements allow healthcare professionals to focus on patient-centered care while enabling AI to provide timely, evidence-based insights [5, 11,12,13]. Beyond clinical settings, AI technologies are becoming more prevalent in educational environments, offering innovative ways to support teaching and learning processes in healthcare education. In higher education, AI-driven tools, such as intelligent tutoring systems and virtual simulations, have demonstrated the potential to improve student engagement and academic performance [2,3,4]. These technologies provide personalized feedback, facilitate self-directed learning, and support educators in designing interactive and adaptive learning environments [3]. Given its rapid integration, AI has become a fundamental element in shaping contemporary teaching and learning strategies across various disciplines, including healthcare education [14].

In healthcare education, AI-driven chatbots have emerged as promising tools to enhance teaching and learning [15,16,17,18,19,20]. While primarily designed for human-like language interactions, these AI chatbots offer a wide range of functionalities that support both students and faculty [3, 15, 17, 18]. They can deliver interactive simulations, provide personalized feedback, and offer real-time academic support to students [3, 15, 16]. These AI chatbots can facilitate immediate access to information and enable self-assessment, promoting student engagement and independent learning [3, 15, 17, 18]. By supplementing traditional teaching methods, AI chatbots can enhance educational efficiency and provide consistent, on-demand support for students across various healthcare disciplines, particularly in environments with limited resources [3, 15, 16].

In nursing education, these capabilities are particularly significant. The ability of AI chatbots to provide customized learning experiences helps reinforce clinical competencies and supports the development of critical thinking and decision-making skills [17, 18, 21]. This is crucial in preparing competent, practice-ready nurses, as AI chatbots not only complement traditional instructional approaches but also encourage the adoption of innovative teaching strategies to meet the evolving demands of modern healthcare environments [22].

Despite the potential benefits of AI chatbots, users may perceive several limitations that could impede their effective integration into nursing education [15,16,17,18,19,20]. These concerns include the risk of providing inaccurate or oversimplified information, limited adaptability to evolving healthcare knowledge, and an inability to replicate the personalized guidance and emotional support offered by faculty. Additionally, users may express concerns regarding ethical issues, data privacy risks, the reliability of AI chatbot-generated information, and the potential for diminished critical thinking skills and reduced student-faculty interaction. These challenges present significant barriers to the successful adoption of AI chatbots in nursing education.

Faculty attitudes toward AI chatbots play a critical role in determining whether these tools are effectively integrated into nursing education [23, 24]. While AI technologies are becoming increasingly prevalent in educational settings, the specific use of AI chatbots remains underexplored. Research indicates that perceived benefits and limitations significantly influence an individual’s willingness to adopt new technologies, making faculty perceptions a key factor in shaping attitudes toward AI chatbot integration [23, 24]. Although existing studies have investigated AI in education focusing on tools such as e-learning platforms and simulation-based training, they rarely address the unique challenges and opportunities posed by AI-driven chatbots [25]. This leaves a notable gap in understanding how nursing faculty perceive and utilize these tools. Therefore, this study aims to examine patterns of AI chatbot usage among faculty members and their perceptions of AI chatbots in nursing education, focusing on perceived benefits and limitations. Additionally, it explores how faculty characteristics such as age, sex, academic position, years of academic experience, and institutional affiliation are associated with differences in AI chatbot usage patterns. The study further investigates how these usage patterns influence faculty perceptions of AI chatbots in teaching, research, and self-learning. The findings will provide valuable insights into factors influencing AI chatbot adoption and inform strategies for effectively integrating AI tools in nursing education.

Methods

Design, participants, and setting

A cross-sectional research design was employed to achieve the study’s objectives. Demographic information and perceptions regarding the use of AI chatbots in education were gathered from faculty members. Participants were required to be affiliated with a nursing science faculty and have at least one year of teaching experience. To determine the appropriate sample size, an a priori power analysis was conducted using G*Power for the planned Multivariate Analysis of Covariance comparing faculty perceptions of AI chatbots across four groups defined by AI chatbot usage patterns while controlling for six covariates. The analysis assumed a small effect size (η² = 0.02), a significance level of 0.05, and a statistical power of 0.80, and indicated that a minimum sample of 250 participants was required to detect a statistically significant effect.
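The study ran its power analysis in G*Power; as a rough cross-check, a comparable calculation can be sketched in Python. The snippet below is a simplified illustration only: it treats the design as a one-way ANOVA across the four usage groups (ignoring the six covariates, which the actual MANCOVA-based G*Power calculation would account for), and converts the reported η² of 0.02 to Cohen's f. The resulting N will therefore differ from the paper's 250.

```python
# Illustrative a priori power calculation (simplified one-way ANOVA
# approximation of the study's design; the paper used G*Power with a
# MANCOVA specification, so the exact N differs).
import math
from statsmodels.stats.power import FTestAnovaPower

eta_sq = 0.02                            # small effect size from the paper
f = math.sqrt(eta_sq / (1 - eta_sq))     # convert eta-squared to Cohen's f

# Solve for total sample size: 4 usage groups, alpha = .05, power = .80
n_total = FTestAnovaPower().solve_power(
    effect_size=f, nobs=None, alpha=0.05, power=0.80, k_groups=4
)
print(math.ceil(n_total))  # required total N under this simplified model
```

Because covariates absorb error variance, the covariate-adjusted G*Power calculation yields a smaller required sample than this unadjusted approximation.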

This study was primarily conducted in Jordan, where AI adoption in nursing education is still in its early stages. Jordan was selected as the main setting to explore faculty perceptions in a context where digital transformation in higher education is ongoing. To provide a comparative perspective, the study also included nursing faculty from eight universities in the United States, a country with a well-established technological infrastructure and significant AI integration in education.

The contrast between the two countries’ educational models and AI adoption levels offers valuable insights into how faculty perceptions of AI chatbots may vary. The United States system emphasizes individual learning, critical thinking, and diverse teaching methodologies, whereas Jordan’s system follows a more traditional, instructor-led approach. Additionally, AI-driven education in the United States is well-established, incorporating advanced simulation labs, digital learning tools, and academic support systems. In contrast, AI adoption in Jordan is still developing. Including United States faculty provides a benchmark for assessing how AI is perceived in a technologically advanced setting compared to an emerging one. By examining faculty perspectives across these differing educational systems, the study explores how varying levels of technological integration and familiarity with AI tools shape faculty attitudes toward AI chatbots. This comparison offers valuable insights into the potential benefits and challenges of integrating AI chatbots into nursing education.

This study was conducted in all available nursing schools across different regions in Jordan, including four public and four private universities. In contrast, the eight nursing schools in the United States were selected based on the Times Higher Education rankings, with priority given to institutions recognized for academic excellence, research output, and technological integration in education. The selection process also considered evidence from university websites confirming the presence of well-established simulation labs and advanced digital learning tools. This approach ensured a diverse representation of institutions across the United States and captured insights from institutions with significant experience in integrating technology into nursing education.

Instruments

Measurement of variables

Demographic and professional characteristics

Self-reported questionnaires were used to collect demographic and academic professional data, which were essential for characterizing, interpreting, and analyzing the research findings. The demographic section included questions about age and gender, while the academic section gathered key professional information such as the highest level of education (Master’s or Doctorate degree), academic position (Instructor, Lecturer, Assistant, Associate, or Full Professor), years of teaching experience, and current university affiliation. Faculty members were categorized as being affiliated with either governmental or private institutions based on their employing university.

Perceptions of using AI chatbots

A Likert scale questionnaire was developed in English to assess faculty members’ perceptions regarding the use of AI chatbots in nursing education and research (Supplementary Material Appendix 1). The instrument was designed by the principal investigator and a co-author following a comprehensive literature review focusing on the integration of AI in healthcare and education. The review included academic papers, research articles, and surveys related to technology-enhanced learning.

The instrument consists of two sections. The first section includes six items addressing participants’ prior awareness of AI chatbots, usage patterns in teaching, self-learning, and research, and their attitudes toward increasing students’ awareness of these tools. The second section originally contained 41 items, divided into 15 items assessing perceived benefits and 26 items addressing potential limitations. Benefits assessed include enhancing educational experiences, supporting student understanding, promoting independent learning, and improving access to medical knowledge. Limitations explored concerns such as unreliable information, reduced faculty-student interaction, and ethical and legal challenges in integrating AI chatbots into educational settings. Items were rated on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree), with a “Not Sure = 0” option for participants unfamiliar with AI chatbots or uncertain about their responses.

Content and face validity were established by a panel of five experts: two from the School of Information Technology specializing in digital learning, and three health sciences faculty with extensive clinical, teaching, and digital expertise. Based on their recommendations, minor revisions (e.g., rephrasing and removing redundant items) were made to improve clarity, relevance, and comprehensiveness. After reaching consensus, the final instrument was refined to 34 items: 12 addressing benefits and 22 addressing limitations. A pilot test with 20 nursing faculty members confirmed the clarity and usability of the questionnaire, requiring no further revisions.

Construct validity was confirmed through exploratory factor analysis using principal component analysis with Varimax rotation. The analysis revealed a five-factor structure, demonstrating the instrument’s ability to capture distinct aspects of faculty perceptions of AI chatbots in nursing education. Perceived benefits were grouped into a single dimension, while limitations were categorized into four factors. These factors included Perceived Educational Benefits (12 items; loading range: 0.58–0.92; Cronbach’s alpha = 0.93), Concerns about Interaction and Personalization (7 items; loading range: 0.53–0.79; Cronbach’s alpha = 0.88), Limitations in Learning and Adaptability (8 items; loading range: 0.41–0.74; Cronbach’s alpha = 0.90), Implementation and Integration Concerns (4 items; loading range: 0.60–0.78; Cronbach’s alpha = 0.78), and Reliability and Misinformation Concerns (3 items; loading range: 0.63–0.71; Cronbach’s alpha = 0.71). Finally, the instrument demonstrated strong overall internal consistency, with a total Cronbach’s alpha of 0.95, confirming its reliability for assessing faculty perceptions toward AI chatbots in nursing education.
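The Cronbach's alpha values reported above follow the standard formula based on item and total-score variances. The sketch below shows that computation on synthetic Likert data (the study's raw responses are not reproduced here, so the correlation structure is purely illustrative).

```python
# Minimal sketch of Cronbach's alpha for a block of Likert items.
# Data are synthetic: a shared "base" rating plus item-level noise,
# mimicking a set of correlated 5-point items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))                       # shared rating
items = np.clip(base + rng.integers(-1, 2, size=(200, 12)), 1, 5)
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # high alpha, since items share most of their variance
```

Because the simulated items share a common component, alpha comes out high, analogous to the 0.93 reported for the 12-item benefits dimension.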

For the purpose of this study, we presented individual perceptions through descriptive analysis and used total scores for each dimension to examine differences in faculty perceptions based on demographic and professional characteristics, and usage patterns in teaching, research, and self-learning.

Procedure

The Institutional Review Board at the University of Jordan provided approval for this study. A research assistant ensured that faculty members met the inclusion criteria by retrieving their curriculum vitae from university websites where available. Once potential participants were identified, they were contacted via email and provided with detailed information about the study, including its objectives and the significance of their participation. Each email contained a link to access the survey instruments. Participants were asked to review and sign an informed consent form before beginning the survey. To enhance response rates, a follow-up email was sent one week after the initial invitation to remind participants who had not yet completed the survey. This reminder email restated the importance of their contribution and provided the link to the survey instruments again.

Data analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 25. Descriptive statistics, including means and standard deviations, medians and interquartile ranges, or frequencies and percentages, were used to summarize faculty members’ demographic and professional characteristics, familiarity with AI chatbots, usage patterns, specific applications used, and perceptions of the benefits and limitations of AI chatbots.

Chi-square tests and one-way analyses of variance were conducted to examine whether faculty usage patterns of AI chatbots differed based on demographic and professional characteristics, including age, sex, academic position, years of academic experience, and institutional affiliation. Chi-square tests and t-tests were also used to compare AI chatbot usage patterns and demographic and professional characteristics between faculty members in the United States and Jordan. A Multivariate Analysis of Covariance was performed to assess differences in faculty perceptions of AI chatbot benefits and limitations based on usage patterns in teaching, research, and self-learning, while controlling for age, sex, academic position and location, years of academic experience, and institutional affiliation. The Least Significant Difference test was used for post hoc comparisons, with statistical significance set at p ≤.05.
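The study ran its multivariate model in SPSS. For readers who prefer open tooling, an equivalent MANCOVA-style test can be sketched with statsmodels' `MANOVA` class, which accepts covariates in the model formula. The data and column names below are synthetic and illustrative; only the analysis structure (two perception outcomes, a four-level usage factor, a continuous covariate) mirrors the paper.

```python
# Hedged sketch of a MANCOVA-style analysis (the study used SPSS).
# Synthetic data: two perception scores modeled on usage group plus
# an age covariate. All names and effect sizes here are illustrative.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "usage": rng.choice(["never", "rarely", "occasionally", "frequently"], n),
    "age": rng.normal(46, 8, n),
})
# Build toy outcomes whose means shift with usage frequency
shift = df["usage"].map({"never": -3, "rarely": -1,
                         "occasionally": 0, "frequently": 2})
df["benefits"] = 40 + shift + 0.2 * df["age"] + rng.normal(0, 5, n)
df["limitations"] = 60 + shift + rng.normal(0, 5, n)

# Including the continuous covariate in the formula makes this a MANCOVA
fit = MANOVA.from_formula("benefits + limitations ~ usage + age", data=df)
res = fit.mv_test()
print(res)  # Wilks' lambda, Pillai's trace, etc., for each model term
```

Post hoc pairwise comparisons (the paper's LSD tests) would then be run separately on each outcome for the significant factors.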

Upon examining response patterns, we found that participants selecting “Not Sure” were predominantly those who occasionally to never used AI chatbots, indicating that their uncertainty stemmed from a lack of direct experience rather than a neutral stance. To incorporate these responses meaningfully into the analysis, “Not Sure” was coded as 0, reflecting an absence of perceived benefits or concerns regarding AI chatbots. This approach is consistent with prior research on survey response behavior, which suggests that uncertain responses often indicate a lack of engagement rather than a balanced “Neutral” perspective [26, 27]. To validate this coding decision, a sensitivity analysis compared three treatments of “Not Sure” responses: coding them as 0 (the current approach), excluding them from the analysis via listwise deletion, and recoding them as “Neutral” (3). The results confirmed that treating “Not Sure” as 0 did not significantly alter the study’s main findings, supporting this methodological decision and ensuring that responses from faculty members with limited AI chatbot exposure were appropriately accounted for without inflating neutrality or biasing the ordinal data structure.
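The three treatments compared in the sensitivity analysis are straightforward to express in code. The toy example below assumes item responses are held in a pandas DataFrame with “Not Sure” stored as missing values; the data are synthetic and the 10% missingness rate is an arbitrary illustration, not the study's observed rate.

```python
# Toy sensitivity check mirroring the paper's three treatments of
# "Not Sure" responses. Data are synthetic; "Not Sure" is stored as NaN.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
raw = pd.DataFrame(rng.integers(1, 6, size=(100, 5)).astype(float))
raw = raw.mask(rng.random(raw.shape) < 0.1)   # ~10% "Not Sure" (NaN)

as_zero    = raw.fillna(0)    # current approach: Not Sure = 0
listwise   = raw.dropna()     # exclude respondents with any Not Sure
as_neutral = raw.fillna(3)    # recode Not Sure as Neutral (3)

for name, d in [("zero", as_zero), ("listwise", listwise),
                ("neutral", as_neutral)]:
    print(name, round(d.sum(axis=1).mean(), 2))  # mean total score per coding
```

Comparing the resulting score distributions (and re-running the main models under each coding) is what lets the analyst confirm, as the paper did, that the choice does not materially change the findings.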

Results

A total of 474 faculty members participated in this study, with approximately 44% from Jordan and 56% from the United States. The response rate was 61% for faculty members from Jordan and 29% for those from the United States. The mean age of the participants was 46 ± 8 years. Most of the participants were female (80%), affiliated with community universities (88%), held a Doctor of Philosophy (PhD) degree in nursing (82%), and were either associate or full professors (59%). On average, participants had 15 ± 9 years of academic experience. A higher percentage of faculty members from the United States were female compared to those from Jordan (97% vs. 60%) and had longer academic experience (16 ± 9 years vs. 14 ± 8 years). There were no significant differences in age, educational level, or academic position between faculty members from the two countries.

Approximately 17.5% of faculty members reported being unfamiliar with AI chatbots and had never used them for academic activities. No significant differences in AI chatbot familiarity were found between faculty members from Jordan and the United States. Additionally, there were no differences in demographic or professional characteristics between those who were familiar with AI chatbots and those who were not. Among the 82.5% of faculty members (n = 391) who were familiar with one or more AI chatbot applications, the majority reported using AI chatbots occasionally (35.8%) or rarely (34.0%), while 17.7% indicated frequent use. About half of the respondents used AI chatbots occasionally or frequently in teaching, and 41% used them for other academic activities, such as self-learning and research. The most recognized AI chatbot applications were ChatGPT by OpenAI (70%) and Google Assistant (43%), with ChatGPT (54%) and Google Assistant (32%) also being the most commonly used. There was broad consensus among faculty members that students should receive training on how to use AI chatbots, as indicated by a median rating of 4 (3, 4). Moreover, support for this belief increased significantly with higher AI chatbot usage frequency (p <.001). AI chatbot usage patterns did not vary based on age, gender, years of academic experience, academic position, or university affiliation, suggesting that AI chatbot adoption is consistent across diverse faculty demographics. Furthermore, no significant differences in usage patterns were found between faculty members from Jordan and the United States (Table 1).

The median and interquartile range ratings of perceived benefits and limitations of AI chatbots in nursing education are summarized in Table 2. There was a high level of agreement on most items related to the perceived benefits of AI chatbots. These benefits included their role in enhancing the teaching and learning experience, providing valuable educational tools, supporting the understanding of complex concepts, and facilitating self-assessment. All of these were consistently rated with a median value of 4 and interquartile ranges of 3 to 4. Similarly, there was strong agreement on AI chatbots’ ability to enhance overall academic performance in nursing science programs, encourage innovative teaching approaches, offer an interactive approach to healthcare teaching, provide faculty with valuable insights into students’ learning progress, support academic projects, and promote innovative teaching methods in nursing education.

Table 1 Demographic and academic professional characteristics, AI chatbot familiarity, and usage patterns among faculty members from Jordan and the United States
Table 2 The median and interquartile range ratings of perceived benefits and limitations of AI chatbots in nursing education

AI chatbots were also perceived as effective in offering quick access to medical knowledge, with a median of 4 (4, 4), demonstrating general consensus on their positive influence in fostering independent learning in nursing education. Items that reflected the enhancement of students’ ability to apply theoretical knowledge to practical scenarios received a median of 4, although with a slightly wider interquartile range of 2 to 4, suggesting greater variability in respondents’ opinions. In contrast, items such as the role of AI chatbots in assisting with examination preparation had a median of 3 and an interquartile range of 2 to 4, indicating a broader range of perspectives among faculty members.

Several consistent concerns were raised regarding AI chatbots’ limitations. Key issues included the potential for AI chatbots to provide incorrect or unreliable information, decrease faculty-student interaction, and inadequately address complex nursing questions, with median ratings of 4 (3, 4). Faculty members also expressed concerns that AI chatbots might promote passive learning rather than active engagement and fail to offer the depth of understanding that faculty members can provide. Additionally, concerns were noted about overreliance on technology, lack of personalized guidance and support, challenges in addressing practical clinical skills, difficulties in integrating AI chatbots into existing teaching methods and resources, and limited adaptability to students’ unique learning styles. Finally, uncertainties about effectively incorporating AI chatbots into educational practices and adapting them to evolving course materials or educational needs were also rated with median values of 4 (3, 4).

Additional limitations, such as AI chatbots’ inability to provide emotional support, were rated with a median of 4 and a narrower interquartile range of 4 to 4, indicating consensus on this concern. Legal and ethical concerns, particularly regarding the potential misuse or misunderstanding of information provided by AI chatbots in students’ education, were rated with a median of 4 and a higher interquartile range of 4 to 5, highlighting significant faculty worry in these areas.

Other items, such as the potential for AI chatbots to hinder the development of critical thinking and problem-solving skills, their ability to provide real-world scenarios and relevant case studies, and their capacity for personalized learning experiences, received more varied responses, with a median of 4 and interquartile ranges of 2 to 4, indicating mixed opinions. Concerns about job insecurity and the adaptability of AI chatbots to evolving nursing care trends were moderately expressed, with a median of 3. Items reflecting skepticism about the likelihood of AI chatbots replacing faculty members received lower median values of 2 (2, 3), suggesting less agreement on these issues among respondents.

A Multivariate Analysis of Covariance revealed significant variations in faculty perceptions based on age, university affiliation, years of academic experience, academic position, and AI chatbot usage patterns (Table 3). Older faculty members reported significantly greater perceived benefits of AI chatbots (B = 65.7, p <.001, η² = 0.30). Similarly, frequent AI chatbot users were more likely to recognize the positive impact of AI chatbots on academic performance: faculty who never (B = -18.4, p <.001, η² = 0.22), rarely (B = -6.8, p <.001, η² = 0.06), or occasionally (B = -5.6, p <.001, η² = 0.04) used AI chatbots reported significantly lower benefit scores than frequent users.

Table 3 A multivariate analysis of covariance on perceived benefits and limitations of AI chatbots

Conversely, the results indicate that increased exposure to AI chatbots is associated with a greater awareness of their limitations. Faculty who never used AI chatbots reported significantly lower concerns about learning adaptability (B = -3.4, p =.002, η² = 0.02), reliability and misinformation (B = -3.7, p <.001, η² = 0.11), implementation and integration (B = -1.8, p =.003, η² = 0.02), and interaction and personalization (B = -4.2, p =.002, η² = 0.03) compared to frequent users. Similarly, faculty who rarely used AI chatbots expressed fewer concerns about reliability and misinformation (B = -1.0, p =.03, η² = 0.013) and implementation and integration (B = -1.1, p =.02, η² = 0.01), suggesting that limited exposure reduces awareness of these challenges.

Faculty who occasionally used AI chatbots consistently reported higher concerns than those who never or rarely used them, and lower concerns than frequent users. However, their concerns about learning adaptability (B = -0.6, p =.49, η² = 0.001), reliability and misinformation (B = -0.5, p =.30, η² = 0.003), and implementation and integration (B = -0.5, p =.26, η² = 0.003) did not differ significantly from those of frequent users. Only their concerns about interaction and personalization (B = -2.7, p =.01, η² = 0.02) were significantly lower than those of frequent users.

Beyond AI chatbot usage patterns, other faculty characteristics also influenced perceptions of AI chatbots. Faculty with fewer years of academic experience (B = -0.3, p <.001, η² = 0.04) and those in higher academic positions (B = 1.1, p =.002, η² = 0.03) expressed greater concerns about interaction and personalization. Concerns about implementation and integration were more pronounced among older faculty members (B = 0.1, p =.04, η² = 0.01), particularly those affiliated with community universities (B = 1.0, p =.05, η² = 0.01) and those in higher academic positions (B = 0.4, p =.004, η² = 0.02). Additionally, faculty at community universities expressed significantly higher concerns about reliability and misinformation compared to those at private institutions (B = 1.1, p =.02, η² = 0.02). However, gender and university affiliation (Jordan vs. United States) had no influence on faculty perceptions regarding the benefits and limitations of AI chatbots, indicating that concerns and attitudes toward AI chatbots were consistent across these demographic variables.

Discussion

The findings of this study revealed significant variations in faculty perceptions of AI chatbots in nursing education based on age, university affiliation, years of academic experience, academic position, and usage patterns. Among these factors, age emerged as a key influence, with older faculty members reporting significantly greater perceived benefits of AI chatbots. This may indicate that more experienced educators were aware of AI tools’ potential to enhance teaching efficiency, support student learning, and supplement traditional instructional methods. These findings aligned with prior research indicating that experienced faculty were more likely to view technology as a valuable strategy for improving educational outcomes and streamlining academic tasks [28, 29]. Their positive perception may have resulted from greater familiarity with the demands of teaching and their ability to identify practical applications of AI chatbots in academic settings.

Similarly, faculty who frequently used AI chatbots reported greater recognition of their positive impact on academic performance, while those who never, rarely, or occasionally used AI chatbots reported significantly lower benefit scores. This finding aligned with previous research on technology adoption, which suggested that direct experience with digital tools increased perceived usefulness and willingness to integrate them into educational practice [28]. Faculty who regularly engaged with AI chatbots may have had greater exposure to their functionalities, reinforcing their confidence in these tools’ educational value. Conversely, faculty with limited or no experience may have hesitated to perceive AI chatbots as beneficial due to uncertainty about their effectiveness in structured academic settings. Together, these results suggested that faculty experience and exposure to AI technologies played a crucial role in shaping perceptions and willingness to integrate AI tools into educational practices.

Beyond differences in faculty characteristics and AI chatbot usage patterns, there was strong overall agreement on the key benefits of AI chatbots. Faculty consistently identified AI chatbots as valuable tools for enhancing teaching and learning experiences, providing quick access to medical knowledge, supporting the understanding of complex concepts, and facilitating self-assessment. These findings aligned with research highlighting the role of AI chatbots in increasing student engagement, improving knowledge retention, and supporting skill development in nursing education [30]. The consensus on AI chatbots’ ability to enhance overall academic performance, promote innovative teaching approaches, and support academic projects reflected a growing recognition of AI’s role in modernizing nursing education. However, the variability in opinions regarding AI chatbots’ effectiveness in assisting with examination preparation suggested that while they were generally viewed as effective supplementary tools, their application may have been more beneficial in some educational contexts than others. This aligned with broader discussions in the literature about the promises and challenges of AI-based chatbots in education, emphasizing the need for careful integration to maximize benefits while addressing potential limitations [31].

The findings further indicated that greater exposure to AI chatbots was associated with heightened awareness of their limitations, suggesting that faculty familiarity with these tools led to a more critical evaluation of their challenges. Faculty members who never or rarely used AI chatbots reported significantly lower concerns across multiple limitation dimensions, including learning adaptability, reliability and misinformation, implementation and integration, and interaction and personalization. This suggested that limited exposure may have reduced awareness of AI chatbot-related challenges, whereas frequent users developed a more nuanced understanding of both the advantages and drawbacks of AI chatbots in nursing education. This observation aligned with the Technology Acceptance Model, which suggested that user experience influenced perceived usefulness and ease of use, thereby affecting attitudes toward technology adoption [32].

A particularly noteworthy finding was that faculty who occasionally used AI chatbots exhibited an intermediate perception: their concerns were generally higher than those of non-users but lower than those of frequent users across most limitation dimensions. In particular, their concerns about interaction and personalization remained significantly lower than those of frequent users. This suggested that as faculty gained moderate exposure to AI chatbots, they may have recognized some of their limitations, but not to the extent of those who used them regularly, highlighting the potential role of direct engagement in shaping more informed perspectives on the capabilities and constraints of AI-driven educational tools. This finding was consistent with studies indicating that hands-on experience with educational technology fostered a more comprehensive understanding of its capabilities and limitations [32, 33].

Beyond usage patterns, certain faculty characteristics also influenced perceptions of AI chatbot limitations. Faculty with fewer years of academic experience and those in higher academic positions expressed greater concerns about interaction and personalization. One possible explanation for this pattern was that less experienced faculty may have felt less confident in integrating new technologies into their teaching practices due to limited exposure to emerging educational tools. Without extensive experience, they may have been more aware of AI chatbots’ challenges, such as their inability to reproduce subtle faculty-student interactions, but less familiar with strategies to overcome these limitations. This aligned with research suggesting that faculty with limited teaching experience viewed technological innovations as disruptive rather than supportive, particularly when they lacked adequate training or institutional support [34]. In contrast, senior faculty members, despite recognizing the benefits of AI chatbots, may have been more skeptical about their ability to foster meaningful faculty-student interactions or personalize learning experiences. This skepticism was often rooted in a preference for traditional pedagogical methods and concerns about the potential threat that AI posed to the educator’s role. Prior research suggested that some educators were hesitant to accept AI tools because of fears of diminished personalized learning and concerns about overreliance on technology, which could limit the development of critical thinking skills [35, 36].

Similarly, older faculty members, particularly those affiliated with community universities, expressed heightened awareness regarding the implementation and integration of AI chatbots into nursing curricula. This may reflect their deeper understanding of the teaching process and a greater ability to identify both the opportunities and challenges associated with adopting new technologies. Faculty with extensive academic experience may perceive AI chatbots as valuable tools for enhancing instructional efficiency and supporting student learning while also recognizing the adjustments required for successful integration. This interpretation aligns with previous research suggesting that experienced educators, due to their familiarity with pedagogical demands, are more likely to view technology as a means of improving educational outcomes and optimizing academic workflows [37].

Concerns about reliability and misinformation were notably higher among faculty at community universities, which comprised the majority of our sample, compared to those at private institutions. This heightened skepticism may reflect the advanced technological resources available at these institutions, enabling faculty to apply stricter standards when evaluating AI chatbot-generated content [38]. With greater capacity to critically assess these tools, faculty may have developed increased doubts about whether AI chatbots can reliably enhance teaching without compromising quality. Prior research supports these concerns, indicating that while AI chatbots can efficiently deliver information, they may also produce errors, biased outputs, or oversimplified responses, which could lead to misconceptions or overreliance on automated feedback [35,36,37]. Addressing these concerns requires ensuring the accuracy of AI chatbot tools and providing faculty with comprehensive training to integrate them confidently as effective supplements within high-tech educational environments.

Despite variations in faculty perceptions based on usage patterns and demographic factors, there was strong overall agreement on the key limitations of AI chatbots, which aligns with prior research [35,36,37]. Faculty members consistently highlighted concerns such as the potential for incorrect or oversimplified information, reduced faculty-student interaction, and inadequacies in addressing complex or unique nursing scenarios. These concerns emphasize the specialized nature of healthcare education, which often requires tailored responses, critical thinking, and the ability to adapt to unique patient needs. Unlike human educators, who can adjust their teaching to meet diverse student needs and address real-world clinical complexities, AI chatbots may lack the ability to deliver the depth of understanding and personalized responses required for effective healthcare education.

In addition to these practical concerns, faculty members also raised broader issues related to technology dependence and ethical considerations. There was widespread concern about overreliance on AI chatbots, particularly regarding their inability to provide personalized guidance or emotional support, which are vital to student engagement and comprehension in clinical settings. Faculty also expressed legal and ethical concerns, particularly regarding the accuracy and appropriate use of AI-generated information. Notably, there was a shared emphasis on the importance of maintaining critical evaluation skills to prevent misuse of AI chatbot information and over-dependence on technology, which could negatively impact the development of essential clinical competencies. Addressing these concerns requires ongoing professional development, the establishment of ethical guidelines, and institutional support to ensure that AI chatbots function as complementary tools rather than as substitutes for the irreplaceable human element in nursing education.

The study also revealed that ChatGPT by OpenAI and Google Assistant were the most recognized and widely used AI chatbots among faculty members. This may be due to ChatGPT’s sophisticated natural language processing capabilities, which make it suitable for generating detailed responses and facilitating complex discussions relevant to educational contexts [39]. Google Assistant’s ease of use and integration with various platforms allow for quick information retrieval, making it a convenient tool in academic settings [40].

One notable limitation of this study is the disparity in response rates between faculty members from Jordan (61%) and the United States (29%). This difference could introduce sampling bias, as faculty members from Jordan who chose to participate may have had a greater interest in AI chatbots than their counterparts in the United States. Nevertheless, several aspects of the study help minimize this concern. First, an analysis of demographic and professional characteristics (e.g., age, academic position, and familiarity with AI chatbots) revealed no significant differences between faculty respondents from the two countries, suggesting that, despite the variation in response rates, the sampled populations remain comparable in key respects. Second, the study included faculty members from diverse institutions (four public and four private universities in Jordan, and eight community universities across multiple regions in the United States), enhancing the representativeness of the findings. Third, lower response rates among United States-based faculty have been documented in prior survey research and are often attributed to higher administrative workloads and institutional email filtering systems, which may have limited the reach of the survey invitation [41].

Another limitation of this study is the lack of analysis on how cultural and systemic differences between Jordanian and United States faculty may have influenced perceptions of AI chatbots. While educational and technological disparities were acknowledged, their specific impact on faculty attitudes was not explicitly examined. Future research should explore these contextual factors to provide a more comprehensive understanding of cross-cultural variations in AI chatbot adoption. Additionally, the study relies on self-reported perceptions of AI chatbots without incorporating objective measures of their efficacy in improving educational outcomes. This limits the ability to assess the actual impact of AI chatbots on student learning and academic performance. Future research should address this gap by including objective performance data to validate faculty perceptions. Finally, the study was conducted within a specific academic context, which may limit the generalizability of the findings to other nursing or health science programs. Expanding the scope to different educational environments will help provide broader insights into the role of AI chatbots in nursing education.

Conclusion and implications

The findings suggest that faculty members’ perceptions of the benefits of AI chatbots play a more influential role in shaping their attitudes toward adopting these tools than their awareness of limitations. This is evidenced by the fact that faculty who frequently used AI chatbots not only recognized their challenges but also reported greater perceived benefits compared to those with limited or no exposure. According to the Technology Acceptance Model, perceived usefulness is a key determinant of technology adoption, suggesting that when faculty members identify tangible advantages, such as enhancing teaching efficiency, supporting student learning, and facilitating independent knowledge acquisition, they are more likely to utilize AI chatbots despite recognizing their limitations. Frequent users’ heightened awareness of both benefits and drawbacks aligns with the Technology Acceptance Model’s assertion that direct experience influences perceptions of usefulness and ease of use, ultimately shaping attitudes and willingness to integrate new technologies. This suggests that providing practical exposure and hands-on training may be critical in increasing adoption, as familiarity not only deepens understanding of limitations but also reinforces confidence in the educational value of AI chatbots. Future research should explore the effectiveness of such hands-on training programs in shaping faculty perceptions and adoption behaviors, offering valuable insights for optimizing AI integration in nursing education.

To effectively promote the adoption of AI chatbots in nursing education, institutions should emphasize their practical benefits while addressing concerns through targeted training and ethical guidelines. This approach will help foster faculty engagement and facilitate the successful integration of AI chatbots. Specifically, institutions should create comprehensive training programs focused on the practical use of AI chatbots and establish clear policies to address concerns about misinformation, ethical issues, and maintaining student engagement.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AI: Artificial Intelligence

References

  1. Ali O, Abdelbaki W, Shrestha A, Elbasi E, Alryalat MAA, Dwivedi YK. A systematic literature review of artificial intelligence in the healthcare sector: benefits, challenges, methodologies, and functionalities. J Innov Knowl. 2023;8(1):100333.

  2. Ouyang F, Zhang L. AI-driven learning analytics applications and tools in computer-supported collaborative learning: A systematic review. Educational Res Rev. 2024;44:100616.

  3. Lin C-C, Huang AYQ, Lu OHT. Artificial intelligence in intelligent tutoring systems toward sustainable education: a systematic review. Smart Learn Environ. 2023;10(1):41.

  4. Jallad ST, Alsaqer K, Albadareen BI, Al-maghaireh D. Artificial intelligence tools utilized in nursing education: Incidence and associated factors. Nurse Education Today. 2024;142:106355.

  5. González-García J, Tellería-Orriols C, Estupiñán-Romero F, Bernal-Delgado E. Construction of empirical care pathways process models from multiple Real-World datasets. IEEE J Biomedical Health Inf. 2020;24(9):2671–80.

  6. Charan S, Khan MJ, Khurshid K. Breast cancer detection in mammograms using convolutional neural network. Paper presented at: 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET); 3–4 March 2018.

  7. Bouhenguel R, Mahgoub I. A risk and incidence based atrial fibrillation detection scheme for wearable healthcare computing devices. Paper presented at: 2012 6th International Conference on Pervasive Computing Technologies for Healthcare and Workshops, Pervasive Health. 2012.

  8. Aghazadeh S, Aliyev AQ, Ebrahimnejad M. The role of computerizing physician orders entry (CPOE) and implementing decision support system (CDSS) for decreasing medical errors. Paper presented at: 2011 5th International Conference on Application of Information and Communication Technologies (AICT); 12–14 October 2011.

  9. Amrane M, Oukid S, Gagaoua I, Ensarİ T. Breast cancer classification using machine learning. Paper presented at: 2018 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting (EBBT); 18–19 April 2018.

  10. Bennett C, Doub T, Bragg A, et al. Data mining session-based patient reported outcomes (PROs) in a mental health setting: toward data-driven clinical decision support and personalized treatment. Paper presented at: 2011 IEEE First International Conference on Healthcare Informatics, Imaging and Systems Biology; 26–29 July 2011.

  11. Chen Z, Marple K, Salazar E, Gupta G, Tamil L. A Physician Advisory System for Chronic Heart Failure management based on knowledge patterns. Paper presented at: Theory and Practice of Logic Programming. 2016.

  12. Ling Y, An Y, Liu M, Hu X. An error detecting and tagging framework for reducing data entry errors in electronic medical records (EMR) system. Paper presented at: 2013 IEEE International Conference on Bioinformatics and Biomedicine; 18–21 December 2013.

  13. Murray M, Macedo M, Glynn C. Delivering health intelligence for healthcare services. Paper presented at: 2019 First International Conference on Digital Data Processing (DDP); 15–17 November 2019.

  14. McDonald N, Johri A, Ali A, Collier AH. Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans. 2025;3:100121.

  15. Walter Y. Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. Inter Jour Educ Techn High Educ. 2024;21(1):15.

  16. Francis NJ, Jones S, Smith DP. Generative AI in higher education: balancing innovation and integrity. Br J Biomed Sci. 2024;81.

  17. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). 2023;11(6).

  18. Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - Reshaping medical education and clinical management. Pak J Med Sci. 2023;39(2):605–7.

  19. Sheikh A, Anderson M, Albala S, et al. Health information technology and digital innovation for national learning health and care systems. The Lancet Digital Health. 2021;3(6):e383-e396.

  20. Zhang Y, Pei H, Zhen S, Li Q, Liang F. Chat Generative Pre-Trained Transformer (ChatGPT) usage in healthcare. Gastroenterology & Endoscopy. 2023;1(3):139–143.

  21. Le Lagadec D, Jackson D, Cleary M. Artificial intelligence in nursing education: prospects and pitfalls. J Adv Nurs. 2024;80(10):3883–5.

  22. Fawaz MA, Hamdan-Mansour AM, Tassi A. Challenges facing nursing education in the advanced healthcare environment. Intern Jour Afri Nurs Sci. 2018;9:105–110.

  23. Ali I, Warraich NF, Butt K. Acceptance and use of artificial intelligence and AI-based applications in education: a meta-analysis and future direction. Inform Dev. 2024. https://doi.org/10.1177/02666669241257206

  24. Zhang C, Schießl J, Plößl L, Hofmann F, Gläser-Zikuda M. Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis. Intern Jour Educ Tech High Educ. 2023;20(1):49.

  25. Lifshits I, Rosenberg D. Artificial intelligence in nursing education: A scoping review. Nur Educ Prac. 2024;80:104148.

  26. Kankaraš M, Capecchi S. Neither agree nor disagree: use and misuse of the neutral response category in Likert-type scales. Metron. 2024.

  27. Zeng B, Jeon M, Wen H. How does item wording affect participants’ responses in likert scale? Evidence from IRT analysis. Front Psychol. 2024;15:1304870.

  28. John SP. The integration of information technology in higher education: A study of faculty’s attitude towards IT adoption in the teaching process. Contaduría y Administración. 2015;60:230–252.

  29. Mah D-K, Groß N. Artificial intelligence in higher education: exploring faculty use, self-efficacy, distinct profiles, and professional development needs. Intern Jour Educ Tech High Educ. 2024;21(1):58.

  30. Labrague LJ, Sabei SA. Integration of AI-Powered Chatbots in Nursing Education: A Scoping Review of Their Utilization, Outcomes, and Challenges. Teach Learn Nur. 2025;20(1):e285-e293.

  31. Srinivasan M, Venugopal A. Navigating the pedagogical landscape: exploring the implications of AI and chatbots in nursing education. 2024 Jun 13;7:e52105.

  32. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40.

  33. Abouammoh N, Alhasan K, Aljamaan F, et al. Perceptions and earliest experiences of medical students and faculty with ChatGPT in medical education: qualitative study. JMIR Med Educ. 2025;11:e63400.

  34. Ofosu-Ampong K. Beyond the hype: exploring faculty perceptions and acceptability of AI in teaching practices. Discov Educ. 2024;3(1):38.

  35. Kirkwood A, Price L. Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review. Learn Media Technol. 2014;39(1):6–36.

  36. Chaudhry MA, Kazim E. Artificial intelligence in education (AIEd): a high-level academic and industry note 2021. AI Ethics. 2022;2(1):157–65.

  37. Venkatesh V, Thong JYL, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 2012;36(1):157–78.

  38. Davar NF, Dewan MAA, Zhang X. AI chatbots in education: challenges and opportunities. Information. 2025;16(3):235.

  39. Sarrion E. The strengths and limitations of ChatGPT. In: Sarrion E, editor. ChatGPT for beginners: features, foundations, and applications. Berkeley, CA: Apress; 2023. pp. 477–84.

  40. van Wingerden E, Vacaru SV, Holstege L, Sterkenburg PS. Hey Google! Intelligent personal assistants and well-being in the context of disability during COVID-19. J Intellect Disabil Res. 2023;67(10):973–85.

  41. Rockwell S. The FDP faculty burden survey. Res Manage Rev. Spring 2009;16(2):29–44.

Acknowledgements

We sincerely thank all the faculty members who participated in this study. Your valuable insights and contributions were crucial to the success of this research.

Funding

The authors received no financial support for the publication of this article.

Author information

Contributions

Z.T.S. and R.A.E. led the study’s conceptualization and methodology, contributing significantly to the study design, data analysis, resource management, and drafting of the manuscript. They also supervised the project and managed its administration. M.R., M.S.A., B.N.A., and W.T.A. supported data validation, formal analysis, and manuscript review and editing. Additionally, M.A., R.A.J., T.F.A., K.M.A., and D.E.F. contributed to the study’s conceptual development and played key roles in drafting, reviewing, and refining the manuscript. Together, all authors ensured the manuscript’s scholarly rigor and quality.

Corresponding author

Correspondence to Khaled Mohammed Al-Sayaghi.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for this study was obtained from the Institutional Review Board (IRB) at The University of Jordan (Approval #: 19/2023/773), Amman, Jordan. All methods were carried out in strict compliance with the relevant guidelines and regulations, including the principles of the Declaration of Helsinki. Informed consent was obtained from all participants prior to their participation in the study. Participants were thoroughly informed about the study’s purpose, procedures, and potential risks and benefits. They were assured of the confidentiality of their data, and all identifying information was removed from the manuscript and supplementary materials to ensure anonymity.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Saleh, Z.T., Rababa, M., Elshatarat, R.A. et al. Exploring faculty perceptions and concerns regarding artificial intelligence Chatbots in nursing education: potential benefits and limitations. BMC Nurs 24, 440 (2025). https://doi.org/10.1186/s12912-025-03082-0
