
This study explores the transformative role of generative artificial intelligence (AI) in shaping religious cognition, with particular emphasis on its implications for religious education.
By examining the biases inherent in AI-generated content, this research highlights how these biases influence user perceptions and interactions with diverse religious teachings. Through
experimental frameworks and pre/post-interaction evaluations, the study reveals that generative AI not only reflects but amplifies cognitive biases, affecting users’ understanding of
religious doctrines and cultural diversity. The findings underscore the potential of generative AI to act as a double-edged sword in religious education: enhancing personalized learning and
cross-cultural understanding while risking the reinforcement of prejudice. These insights call for ethical guidelines and oversight mechanisms in deploying generative AI within religious
contexts. This research contributes to the growing discourse on AI ethics and its pivotal role in shaping inclusive and unbiased religious education in the digital era.
Generative AI is rapidly emerging as a transformative tool in various domains, including religious education, by leveraging advancements in deep learning and natural language processing
technologies. This innovative technology enables the creation of diverse content such as text, images, and audio by analyzing vast datasets1,2. While its applications in personalized content
delivery are widely recognized, its implications for religious education have yet to be comprehensively explored. As religions represent both a foundational aspect of cultural identity and
a complex system of belief, the integration of generative AI offers both opportunities and challenges for educators and practitioners. By tailoring content to individual learners, AI can
enhance engagement and foster deeper understanding of religious doctrines and values.
Traditionally, religious education has relied on structured learning environments mediated by human educators, canonical texts, and experiential practices. However, the advent of the
internet and digital technologies has reshaped how religious knowledge is disseminated and consumed3. Digital platforms enable access to diverse religious teachings across geographical and
cultural boundaries, promoting inclusivity and cross-cultural understanding. Yet, with the introduction of generative AI, this dynamic has reached new levels of complexity. In religious
education, by generating personalized religious content that adapts to individual learners’ interests and feedback, AI helps foster a systematic understanding of religious beliefs and
practices4. Additionally, AI can clarify ambiguities in religious texts, enhancing their interpretability5, and further elucidate their cognitive, emotional, and social impacts6. AI also
encourages learners to engage in dialogue with respect and sensitivity, inspiring religious thought through the introduction of new metaphors and narratives7.
Nevertheless, these opportunities may also come with potential challenges. The reliance of generative AI on training data introduces the risk of cognitive bias, which can affect the accuracy
and impartiality of religious content. “Cognitive bias” refers to the systematic deviation from objective facts in an individual’s judgment due to inherent cognitive patterns or external
information influences, leading to irrational or skewed cognitive outcomes. Algorithmic bias, in a certain sense, contributes to this cognitive bias8. It includes selection bias,
confirmation bias, and measurement bias9. For example, confirmation bias causes individuals to seek and interpret information that confirms their pre-existing beliefs, which may reinforce
religious convictions10. On the surface, AI bias may seem to result from faulty algorithms, possibly influenced by data collection and filtering methods; however, a deeper cause lies in the
embedded biases of the development team11. If biases embedded in training datasets remain unaddressed, they may lead to skewed representations of religious doctrines, reinforcing stereotypes
and hindering interfaith understanding12. Whether intentional or not, biases in artificial intelligence directly impact the exercise of digital religion or the right to freedom of belief13.
Affective AI could significantly influence values such as privacy and autonomy14, ultimately reshaping human behavior15. Such concerns are particularly pertinent in religious
education, where the aim is often to foster tolerance and mutual respect among diverse beliefs. This dual-edged nature of generative AI underscores the urgency of examining its application
within religious contexts, particularly its ability to shape cognition, attitudes, and societal perceptions toward religion.
This study aims to investigate the role of generative AI in religious education, focusing on three key research questions:
Bias in AI-Generated Religious Content: How do training data and algorithms shape the biases inherent in AI-generated religious content, and what are the implications for fairness and
objectivity?
Changes in Users’ Religious Cognition and Attitudes: How does engagement with AI-generated content influence users’ understanding of religious doctrines, and what mechanisms drive these
cognitive and attitudinal changes?
Impact on Religious Tolerance and Understanding: Can AI-generated religious content foster interfaith dialogue and tolerance, or does it exacerbate misunderstandings and social conflicts?
By addressing these questions, this research contributes to the growing discourse on the ethical and educational applications of generative AI, emphasizing its transformative potential in
religious education while cautioning against its risks. The findings offer a theoretical framework for integrating AI responsibly within educational and religious domains, ensuring
inclusivity, accuracy, and fairness.
Generative AI refers to a technology that creates new content by learning from and analyzing vast datasets through machine learning algorithms. Unlike traditional artificial intelligence,
generative AI can not only analyze existing data but also generate new content akin to human creation. The advancement of this technology has been driven by progress in sophisticated
algorithms, such as deep learning, natural language processing, and generative adversarial networks (GANs)16. These advances enable AI to generate various forms of content, including text,
images, and audio. The emergence of generative AI has expanded the application potential of artificial intelligence, particularly in fields such as education, healthcare, and media.
In the field of education, generative AI is progressively becoming a powerful tool for personalized learning. By analyzing data on students’ learning behaviors, generative AI can
automatically create tailored learning materials that align with each student’s proficiency level and needs. Additionally, generative AI is beneficial in language learning, where it can
generate dialogue exercises and provide real-time feedback to help students improve their language skills more effectively17. In religious education, AI can enhance students’ understanding
and appreciation of different religions by producing diverse instructional materials18. However, this application necessitates careful oversight by educators to ensure the generated content
is accurate and unbiased19.
In the medical field, generative AI is primarily applied in disease diagnosis, drug research and development, and personalized treatment. This technology enhances the accuracy and efficiency
of medical diagnoses while advancing personalized medicine, allowing patients to receive more precise and effective treatments20. Furthermore, generative AI aids in drug discovery by
simulating interactions between drug molecules and biological targets, thereby generating potential drug compounds. This approach significantly reduces both the time and cost associated with
drug development.
In the media industry, generative AI is utilized for generating news reports, writing articles, and producing video content. News organizations can leverage generative AI to create
personalized news content tailored to the needs of diverse audiences. By analyzing users’ reading habits and interests, AI can automatically generate content recommendations that align with
users’ preferences, thereby enhancing engagement and satisfaction. This personalized approach to content generation and distribution improves the efficiency and accuracy of information
dissemination. However, it also raises ethical concerns regarding the authenticity of information and the potential for content manipulation.
Through frequent interactions with users, generative artificial intelligence (AI) can effectively influence users’ cognitive frameworks and information processing methods21. During these
interactions, generative AI continuously adjusts the content it produces based on user feedback, thereby establishing a dynamic two-way interaction mechanism. This mechanism enables AI to
generate personalized and customized content aligned with users’ needs and interests, enhancing their sense of engagement and identification. As users interact with generative AI, they
gradually accept and internalize the AI-generated information, which in turn shapes their cognitive frameworks22. For example, when users frequently encounter AI-generated news or
information on a particular topic, they may develop fixed perceptions or attitudes toward that topic, potentially coming to view AI-generated content as an authoritative source23.
By generating and delivering highly customized content, generative AI can gradually influence users’ cognitive preferences and attitudinal choices. Through the analysis of users’ behavioral
data and historical preferences, AI can produce content that closely aligns with users’ interests24. Continuous user feedback enables AI to optimize its content generation logic over time25.
This interactive mechanism allows generative AI to better understand user needs and deliver content that closely matches their preferences26. This mechanism is also applicable in the
context of religious information dissemination. By incorporating user feedback, AI can refine its content generation processes to more effectively convey religious teachings and beliefs27.
However, while personalized content generation enhances user experience, it also risks creating “information cocoons,” where users are only exposed to information that reinforces their
existing beliefs, thereby neglecting diversity and heterogeneity. This phenomenon can exacerbate cognitive biases and attitudinal polarization, making it difficult for users to encounter
alternative viewpoints. Consequently, this limits the breadth and depth of users’ cognition22.
The content generation capabilities of generative AI also affect the accuracy and authenticity of users’ cognition. Since AI generates content based on its training data, which may include
biased, inaccurate, or false information, the generated output can reflect these flaws. When users encounter such biased or misleading content, it may distort their understanding of reality.
Furthermore, if AI-generated content contains emotionally charged language, it can influence users’ emotional responses to specific topics, subtly altering their attitudes and behaviors24.
Additionally, the self-learning and optimization capabilities of generative AI may amplify its influence on users’ cognition. By continuously collecting user feedback, generative AI refines
its content generation strategies, enabling it to increasingly align with users’ cognitive needs. This iterative optimization process reinforces users’ existing cognitive frameworks and
attitudes, making AI’s influence more persistent and profound. Consequently, the ongoing refinement of AI-generated content can have long-term and far-reaching effects on users’ cognition
and attitudes28.
A notable challenge that generative AI faces in content generation is the presence of digital bias and associated ethical concerns. Since generative AI relies on large-scale data training,
this data often reflects the various biases and injustices present in the real world. These biases are particularly problematic when dealing with sensitive subjects such as religion, race,
and gender29. When these prejudices are embedded in AI models without identification or correction, generative AI can inherit and amplify these biases in its outputs, negatively impacting
users’ cognition and presenting serious ethical challenges.
The training data on which generative AI models depend frequently contains social biases from real-world contexts, spanning various aspects such as race, gender, religion, and culture. If
this data includes sexist or racist content, the AI may inadvertently reproduce or amplify these biases when generating text or images. This can result in discrimination and unfairness in
the generated content. Users may adopt the biases generated by AI without critical reflection30. Such issues are particularly sensitive concerning religious content, where AI-generated
outputs may reflect specific religious biases, thereby shaping users’ perceptions and attitudes toward different religions. This can exacerbate distrust and conflict between religious
groups31.
Biases in generative AI are not solely derived from training data; they may also stem from imbalances or biases in model design. Some AI models may favor content that aligns with the values
of mainstream culture or dominant groups, thereby marginalizing the perspectives of minority or underrepresented groups. This tendency reinforces existing social biases during information
dissemination and can further sideline the voices of vulnerable populations. The persistence of these biases not only undermines fairness in content generation but also poses a threat to
social justice and cultural diversity.
In addition to digital bias, generative AI presents numerous ethical challenges. Because generative AI can produce highly realistic content, distinguishing between AI-generated content and
human-created content can be difficult for users. This ambiguity heightens the risk of spreading misinformation, particularly in sensitive areas such as religion and politics, which can lead
to severe societal consequences. If AI-generated content is misused, it could manipulate public opinion, deepen social divisions, and even incite conflict32.
Furthermore, the opacity of generative AI models complicates accountability. The content generation process often operates as a “black-box,” making it difficult for users and regulators to
trace the specific reasons behind AI decisions. This lack of transparency obscures the attribution of responsibility when biased or unfair content is generated. Determining who should be
held accountable—whether AI developers, data providers, or users—remains a contentious issue. Currently, these questions lack definitive solutions and require further exploration through
technological advancements and the development of legal and regulatory frameworks33.
AI has opened new pathways for enhancing religious understanding and educational practices, helping to engage younger generations34. Driven by personal emotional and spiritual uncertainties, users often turn to generative AI for religious guidance and counseling35,36; such interactions create interactive, inclusive, and adaptive experiences37 that can transform religious education.
In response to this trend, many religious institutions are exploring the integration of digital technologies, such as AI, into religious education, including reforms to Islamic curricula and
the introduction of AI-assisted teaching methods38. Generative AI can serve as a tool to support various religious teaching and learning processes39, assisting educators in designing more
effective teaching strategies and providing learners with more efficient learning methods37. When used ethically and responsibly, generative AI can also creatively integrate critical
thinking into religious education environments40 and even reshape religious customs and related behavioral norms41.
However, in religious education, AI-generated content based on religious narratives or doctrines can exploit cognitive biases, distorting individuals’ perceptions of religious truths42. This
raises concerns about the spread of algorithmic bias, which could exacerbate religious discrimination and inequality9. Despite existing research highlighting the risk of bias in generative
AI’s dissemination of religious content, the dynamics of how its generative mechanisms, influenced by user feedback loops, intensify bias and the differential impact of such bias on
religious tolerance remain underexplored43. This study attempts to explore this mechanism through experimental design and aims to identify actionable interventions for promoting inclusivity
and sustainability in religious education under the influence of digitalization.
As shown in Fig. 1, we constructed an analytical framework based on the interaction mechanism between generative AI and users.
Analysis framework of the influence of generative AI on user cognition.
Concept construction involves setting and defining the goals and principles for generating religious content through Generative AI. This process must consider the sensitivity of religious
topics, ensuring that the AI-generated content reflects diversity and fairness while avoiding any form of religious bias. For instance, in this study, the AI system was designed with the
objective of generating content that remains as neutral as possible, providing objective descriptions of religious themes. However, insufficient attention to ethical considerations and
social responsibility during concept construction may lead to religious misunderstandings or conflicts during the dissemination of generated content.
Algorithm design is a core component in the generation of religious content by AI, determining how the system processes and analyzes input data. The models employed in this research include
advanced natural language processing frameworks, such as GPT-4, which generate context-specific religious content through large-scale data training. A critical aspect of algorithm design is
identifying and mitigating biases in the training data to prevent the amplification of these biases during content generation. Accordingly, adversarial training and bias detection techniques
were applied in this study to automatically identify and rectify potential religious biases within the model’s output. The primary objective of algorithm design is to ensure that the
generated content maintains neutrality in both semantic and emotional expression, avoiding undue favoritism towards any specific religious belief or cultural perspective.
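The bias-screening step described above can be sketched as a simple audit of generated outputs. The sentiment lexicon, scoring rule, and deviation threshold below are hypothetical illustrations, not the study's actual tooling:

```python
# Illustrative sketch: flag sentiment skew across religions in generated
# descriptions. Lexicon and threshold are hypothetical placeholders.

POSITIVE = {"peaceful", "compassionate", "tolerant", "wise"}
NEGATIVE = {"violent", "intolerant", "backward", "extremist"}

def sentiment_score(text: str) -> float:
    """Crude lexicon score in [-1, 1]: (pos - neg) / (pos + neg)."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def flag_skew(outputs: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Return religions whose description deviates from the mean
    sentiment by more than `threshold`, for manual review."""
    scores = {r: sentiment_score(t) for r, t in outputs.items()}
    mean = sum(scores.values()) / len(scores)
    return [r for r, s in scores.items() if abs(s - mean) > threshold]

outputs = {
    "A": "a peaceful and compassionate tradition",
    "B": "a peaceful community",
    "C": "a violent and intolerant movement",
}
print(flag_skew(outputs))  # religion C deviates strongly from the mean
```

In a production pipeline, the flagged outputs would be routed to human reviewers or regenerated, rather than corrected automatically.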
Data input forms the foundation for generative AI content creation, directly impacting the fairness and accuracy of the generated output. The data sources utilized in this study encompass a
broad range, including religious texts, social media content, and news reports. These datasets influence how AI interprets and represents religious themes. During the data input phase,
rigorous data cleaning and preprocessing are essential to ensure diversity and impartiality. Analysis of the questionnaire data reveals that certain religions may be underrepresented in
AI-generated descriptions, reflecting biases inherent in the input data. Therefore, ensuring the fairness and representativeness of input data is paramount to mitigating prejudice during the
content generation process.
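A representativeness check of the kind implied above can be sketched as a corpus audit; the religion labels and the 10% share threshold are illustrative assumptions, not the study's actual criteria:

```python
# Illustrative sketch: audit a training corpus for underrepresented
# religions by document share. Threshold is a hypothetical choice.
from collections import Counter

def underrepresented(doc_labels: list[str], min_share: float = 0.10) -> list[str]:
    """Return religions whose share of the corpus falls below `min_share`."""
    counts = Counter(doc_labels)
    total = len(doc_labels)
    return sorted(r for r, c in counts.items() if c / total < min_share)

labels = ["Christianity"] * 50 + ["Islam"] * 30 + ["Buddhism"] * 15 + ["Judaism"] * 5
print(underrepresented(labels))  # ['Judaism']
```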
Content generation is the process by which generative AI transforms input data into specific religious information. During this process, AI models generate text or other forms of content
based on predefined algorithmic logic. The AI system used in this study, such as models based on GPT-4, can produce seemingly reasonable and coherent religious descriptions derived from
extensive training data. These descriptions may include interpretations of religious teachings, retellings of historical events, or portrayals of religious figures. However, because content
generation by generative AI relies heavily on training data and algorithmic configurations, the output may reflect hidden biases or tendencies. Research findings indicate notable differences
in the portrayal of certain religions within AI-generated descriptions, suggesting that data bias influences the content produced. Therefore, ensuring that AI-generated religious
information is fair, accurate, and ethically sound is paramount.
Interaction and feedback constitute a dynamic process in which generative AI collects user input and adjusts the generated content in real time. In the context of religious content generation, user
feedback helps the AI refine its output to better meet user needs. After interacting with AI-generated religious descriptions, users can provide feedback through multiple-choice questions,
ratings, or direct commentary. By analyzing this feedback, the AI system can adjust its content generation strategy in real time, ensuring subsequent outputs align more closely with users’
cognitive models and preferences. This feedback-driven adjustment mechanism enables AI to continuously optimize content generation and enhance user experience. However, this mechanism can
also lead to AI excessively reinforcing users’ existing cognitive biases, exacerbating the information cocoon effect. Research data reveal that users’ perceptions of certain religions
shifted after interacting with AI-generated content, reflecting the adjustments AI made based on user feedback. Consequently, the design of interaction and feedback mechanisms must balance
improving user experience with preserving content diversity and fairness.
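The feedback-driven reinforcement dynamic described above, and its information-cocoon risk, can be illustrated with a toy simulation. The topic set, learning rate, and greedy serving policy are hypothetical modeling choices, not the actual system:

```python
# Toy simulation of the feedback loop: topic weights drift toward
# whatever the user rates highly, narrowing diversity over rounds.

def update_weights(weights, topic, rating, lr=0.2):
    """Nudge the sampling weight of `topic` by the centered user rating
    (3 = neutral on a 1-5 scale), then renormalize."""
    w = dict(weights)
    w[topic] = max(0.01, w[topic] + lr * (rating - 3))
    total = sum(w.values())
    return {t: v / total for t, v in w.items()}

weights = {"doctrine": 1 / 3, "history": 1 / 3, "practice": 1 / 3}
# A user who always rates "doctrine" 5 and everything else 2:
for _ in range(20):
    topic = max(weights, key=weights.get)  # greedy: serve the favored topic
    rating = 5 if topic == "doctrine" else 2
    weights = update_weights(weights, topic, rating)
print(max(weights, key=weights.get))  # "doctrine" dominates
```

After twenty rounds the favored topic absorbs nearly all of the sampling weight, which is the cocoon effect in miniature; a diversity constraint (e.g., a floor on every topic's weight) would counteract it.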
After users are exposed to religious descriptions generated by AI, they typically express their satisfaction and understanding of the content through scoring, multiple-choice questions, or
open-ended feedback. These forms of feedback provide valuable data for evaluating the effectiveness and user acceptance of the generated content. Research findings indicate that in groups
where AI-generated content was introduced, users’ evaluations of different religions varied, underscoring the pivotal role of user feedback in content generation. By analyzing this feedback,
AI can identify issues such as bias or inaccuracies and make appropriate adjustments to improve content fairness and user recognition.
Generative AI dynamically shapes religious cognition through user feedback mechanisms, with its impact manifesting in two areas: cognitive restructuring and attitude polarization. On the
cognitive level, AI-generated religious content may expand users’ knowledge breadth. However, due to the implicit biases in the algorithm’s training data, it is prone to fostering one-sided
interpretations, such as misreadings of specific religious terms or the reinforcement of stereotypes. On the attitude level, AI-generated content, guided by emotional biases, may trigger
emotional polarization in users’ views toward specific religions, leading to a decline in positive evaluations or an increase in neutral attitudes44. Additionally, users’ trust in
AI-generated religious information can influence their attitudes toward traditional religious authority, potentially undermining it. For instance, the study found that in the AI-interference
group, users’ evaluations of Islam were notably lower compared to those in the non-interference group. This finding suggests that AI-generated content may reinforce biased or misleading
perceptions. Such cognitive and attitudinal changes often result from repeated exposure and feedback interactions, potentially exerting a long-term impact on users’ understanding.
The bidirectional feedback loop between generative AI and users is fundamentally a dynamic interaction of “data feeding-content generation-feedback reinforcement.” AI continuously optimizes
the generated content based on users’ religious preferences and feedback, while users gradually internalize the implicit values or biases embedded in the AI-generated religious information
through repeated exposure. In this loop, AI may inadvertently amplify religious stereotypes present in the training data, while users’ positive feedback further solidifies algorithmic biases
and cognitive distortions. This ultimately leads to the echo chamber effect, diminishing the diversity and objectivity of religious understanding. In the study, comparisons between the
AI-interference group and the non-interference group revealed the significant role of this two-way circulation mechanism in religious content generation. This mechanism not only drives the
continuous improvement of AI but is also crucial for bias control.
In this study, an experimental design was employed to assess respondents’ religious cognition through a questionnaire survey. The experiment consisted of two stages: the baseline measurement
stage and the intervention stage.
This study employed stratified random sampling, recruiting 1,005 participants through an online platform, covering 12 geographic regions (e.g., East Asia, Southeast Asia, Middle East), with
sample sizes allocated according to regional population proportions. The sampling process included geographic stratification (based on World Bank regional classifications) and random
selection within each stratum, ensuring balance between the intervention group (502 participants) and the control group (503 participants) across dimensions such as nationality, religious
affiliation, gender, age, and educational background. The sample represents diverse religious backgrounds, with primary information sources being online media (38.2%) and social
organizations (23.8%), while ensuring data integrity and ethical compliance.
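The proportional-allocation step of this stratified design can be sketched as follows; the regional shares used here are illustrative placeholders, not the study's actual population figures:

```python
# Illustrative sketch: allocate n participants across geographic strata
# proportionally to population share, assigning rounding remainders to
# the strata with the largest fractional parts (largest-remainder method).

def proportional_allocation(pop_shares: dict[str, float], n: int) -> dict[str, int]:
    raw = {s: share * n for s, share in pop_shares.items()}
    alloc = {s: int(v) for s, v in raw.items()}
    remainder = n - sum(alloc.values())
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:remainder]:
        alloc[s] += 1
    return alloc

shares = {"East Asia": 0.40, "Southeast Asia": 0.25, "Middle East": 0.20, "Other": 0.15}
print(proportional_allocation(shares, 1005))
```

Random selection within each stratum would then draw the allocated number of participants from that region's recruitment pool.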
In the baseline measurement stage, all respondents completed a questionnaire assessing their initial attitudes and cognitive perceptions of various religions before being exposed to
generative AI content. This baseline data served as a benchmark for subsequent analysis.
In the intervention stage, respondents were randomly assigned to either the AI-interference group or the non-AI-interference group. In the AI-interference group, participants read religious
descriptions generated by generative AI, which contained positive, negative, or neutral views of certain religions. Afterward, they completed the questionnaire again to measure changes in
their religious cognition. In contrast, the non-AI-interference group read traditional religious descriptions without AI-generated content and subsequently filled out the same questionnaire.
The core objective of this experimental design was to compare the cognitive and attitudinal changes between the two groups before and after exposure to generative AI content. These
comparisons aimed to uncover how generative AI influences users’ perceptions of different religions and how potential biases in AI-generated content affect these perceptions.
To ensure the reliability of the results, the study randomly assigned the 1,005 respondents to two groups: 502 participants in the AI-interference group and 503 in the non-AI-interference group. This design held other variables constant across the control and experimental groups, ensuring that the only differentiating factor was exposure to AI-generated content.
The questionnaire evaluated multiple dimensions of religious cognition, including overall impressions of religion, tolerance of doctrine, morality and ethics, societal and lifestyle views,
the image of believers, and perceptions of modernity. Responses were recorded using a 5-point Likert scale (ranging from 1 = “strong disapproval” to 5 = “complete approval”), enabling the
quantification of respondents’ attitudes toward various religions.
The design and implementation of this study underwent rigorous ethical review. The research plan, detailing the experimental design, participant recruitment, data collection, and analysis
methods, was submitted to and approved by the Ethics Committee. Particular emphasis was placed on ethical considerations related to the sensitive nature of religious cognition. The Ethics
Committee evaluated potential risks, such as the psychological impact of AI-generated content or the potential for reinforcing religious biases. The research team implemented measures to
mitigate these risks. Participants were fully informed about the purpose and procedures of the study, their rights as participants, and the voluntary nature of their involvement. Consent was
obtained prior to completing the questionnaire, with assurances that all data would remain confidential and no personal information would be collected or disclosed. All methods employed in
this study were conducted in strict accordance with relevant guidelines and regulations, and necessary approvals and authorizations were obtained where required.
In the baseline measurement stage, all respondents completed a questionnaire to assess their initial cognition and attitudes toward various religions. This initial questionnaire provided
benchmark data for subsequent analysis. In the intervention stage, respondents in the experimental group completed the questionnaire again after being exposed to religious descriptions
generated by generative AI. These AI-generated descriptions encompassed positive, negative, and neutral views, allowing for an evaluation of the impact of AI-generated content on religious
cognition. The questionnaire design also included open-ended questions to enable respondents to articulate their personal views on the religious descriptions (See Fig. 2).
The first stage of the experiment involved a baseline measurement to capture respondents’ religious cognition and attitudes before exposure to generative AI-generated content. A total of
1,005 respondents completed a questionnaire assessing various dimensions of religious perception, including overall impressions of different religions, tolerance of doctrines, views on
morality and ethics, societal and lifestyle beliefs, perceptions of believers, and attitudes toward modernity. Responses were recorded on a 5-point Likert scale (1 = “strong disapproval” to
5 = “complete approval”). This stage provided a comprehensive benchmark for comparing subsequent cognitive and attitudinal changes.
Following the baseline measurement, respondents were randomly assigned to one of two groups: the AI-interference group and the non-AI-interference group. In the AI-interference group,
participants read religious descriptions generated by generative AI, which included content with positive, negative, and neutral perspectives on various religions. These descriptions
simulated the diverse outputs that AI systems might produce in real-world applications. Respondents in the non-AI-interference group read traditional religious descriptions that did not
involve AI-generated content.
To ensure that responses were not influenced by external factors, participants were unaware of the source of the content they read. This design allowed for an unbiased evaluation of the
unique impact of generative AI content on religious cognition. It is important to note that, in order to control variables, we simulated only a single cycle of interaction between humans and
AI, without conducting repeated dialogues. As a result, the study may not effectively validate the reciprocal shaping and feedback mechanisms between AI and humans.
After the intervention, all respondents completed the same questionnaire again to assess any cognitive and attitudinal changes resulting from exposure to generative AI content. This
post-intervention survey evaluated the same dimensions as the baseline measurement, including overall impressions of different religions and attitudes toward doctrines, morality, society,
and modernity. Additionally, the questionnaire included open-ended questions that allowed respondents to provide detailed feedback and specific reactions to the religious descriptions they
encountered.
This multi-stage design facilitated a rigorous comparison of changes between the AI-interference and non-AI-interference groups, revealing the potential influence of generative AI on
religious cognition and attitudes.
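The random assignment underlying this two-group design can be sketched in a few lines. The function below is an illustrative reconstruction, not the study's actual code; the fixed seed is an assumption added for reproducibility.

```python
import random

def assign_groups(participant_ids, seed=42):
    """Randomly split participants into the AI-interference and
    non-AI-interference (control) groups."""
    rng = random.Random(seed)  # fixed seed is an assumption, for reproducibility
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"ai": set(ids[:half]), "control": set(ids[half:])}
```

With 1,005 participants this yields groups of 502 and 503; the paper does not report the exact split.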
AI content generation relies on a diverse range of data sources, including religious texts, social media content, and news reports. These data have undergone preprocessing steps such as
noise removal, standardization, and de-biasing. The dataset encompasses major world religions, including Christianity, Islam, Buddhism, Hinduism, and Judaism, covering aspects such as
religious teachings, historical events, contemporary religious practices, and societal views.
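The paper names noise removal, standardization, and de-biasing as preprocessing steps without giving implementation details. A minimal sketch of the first two steps might look like the following (de-biasing typically requires lexicon- or model-level intervention and is omitted here):

```python
import re
import unicodedata

def clean_text(raw: str) -> str:
    """Minimal noise removal and standardization for one source document."""
    text = unicodedata.normalize("NFC", raw)   # standardize Unicode forms
    text = re.sub(r"<[^>]+>", " ", text)       # strip HTML markup (noise)
    text = re.sub(r"http\S+", " ", text)       # strip URLs
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text
```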
The model employed in this study is based on advanced natural language processing technology, specifically GPT-4. ChatGPT (GPT-4) was selected as the experimental tool for several reasons. The focus of this research is to explore how the interaction mechanisms between generative AI and users affect religious cognition, rather than to comprehensively evaluate the performance differences among all AI models. ChatGPT is currently the most widely used general-purpose model with the strongest interactivity for generating religious content, and its underlying architecture represents the mainstream technological approach of generative AI45. Furthermore, the model’s cultural biases are well documented: although its training data include multilingual religious texts, several studies have confirmed that its output exhibits systematic cultural skew, such as a semantic association between Islam and “violence” that is 1.7 times higher than for other religions46. During the content generation process, the AI system produces three types of religious descriptions:
positive, negative, and neutral. Positive descriptions highlight the beneficial influence and social contributions of religion, negative descriptions address religious conflicts or adverse
events associated with religious practices, and neutral descriptions aim to remain objective, offering factual religious information.
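The three valences described above can be operationalized as prompt templates. The wording below is hypothetical (the study does not publish its actual prompts) and only illustrates how positive, negative, and neutral descriptions might be requested from the model:

```python
# Hypothetical prompt templates for the three description valences;
# the study does not publish the actual prompts used.
TEMPLATES = {
    "positive": "Describe {religion}, emphasizing its beneficial influence and social contributions.",
    "negative": "Describe {religion}, focusing on religious conflicts and adverse events associated with its practice.",
    "neutral": "Provide a factual, objective description of {religion} without evaluative language.",
}

def build_prompt(religion: str, valence: str) -> str:
    return TEMPLATES[valence].format(religion=religion)
```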
To ensure the accuracy and rationality of the generated content, the religious descriptions produced by the AI were subjected to multiple rounds of review before being used in the
experiment. During the implementation phase, respondents in the AI-interference group were presented with the AI-generated content in text form. Each participant read one piece of AI-generated content, randomly drawn from the material matching their religious background or interests. This approach aimed to assess the impact of different AI-generated descriptions on respondents’
religious cognition. The research data indicates that, within the AI-interference group, content with varying tendencies influenced participants’ religious cognition, further substantiating
the potential role of AI-generated content in shaping users’ attitudes.
The collected data from the 1,005 questionnaires were thoroughly cleaned and reviewed, focusing on missing values, outliers, and inconsistent responses. For missing values, both deletion and
interpolation methods were employed to maintain the integrity of the dataset. Outliers were identified by establishing reasonable value ranges and were either corrected or removed as
necessary. All questionnaire responses were converted into numerical codes, with responses on the 5-point Likert scale being coded from 1 to 5 for ease of subsequent statistical analysis.
Additionally, demographic information (e.g., gender, age, nationality, and religious affiliation) was classified and coded to account for potential confounding variables in the analysis. To
ensure comparability across variables, all data were standardized. The reliability test results for the research questionnaire are presented in Table 1, with the Cronbach’s alpha for all religious categories exceeding 0.7. This indicates that the questionnaire demonstrates good internal consistency (reliability).
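Cronbach’s alpha, as reported in Table 1, can be computed directly from the respondent-by-item score matrix. The function below is a standard implementation, not the authors’ code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

Alpha above 0.7 is the conventional threshold the paper applies.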
For validity evaluation, the Kaiser-Meyer-Olkin (KMO) measure was adopted; the closer the KMO value is to 1, the higher the validity of the questionnaire. As shown in Table 2, the overall KMO value of the questionnaire is 0.973, and Bartlett’s test of sphericity yields a chi-square value of 9579.538 with a significance level below 0.05, indicating that the questionnaire has good validity.
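The Bartlett sphericity statistic in Table 2 can be reproduced from the raw data with the standard formula, sketched below (the KMO measure, which additionally requires partial correlations, is available in packages such as factor_analyzer):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: H0 is that the correlation
    matrix is an identity matrix (items are uncorrelated)."""
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)  # statistic and p-value
```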
A total of 1,005 valid questionnaires were collected, with respondents representing diverse geographical regions and cultural backgrounds, including East Asia, Southeast Asia, North America,
Oceania, the Middle East, and North Africa. The sample varied in terms of gender, age, education, and religious affiliation. During the recruitment process, questionnaires were distributed
through online survey platforms and social media to ensure broad representation across different demographic groups. The majority of respondents were aged between 25 and 34 years, with the
predominant education level being undergraduate or junior college (See Table 2; Fig. 3). The religious backgrounds of the participants include Buddhism, Christianity, Islam, other religions,
as well as individuals with no religious affiliation.
In this study, the collected questionnaire data were analyzed using descriptive statistics. Basic statistics, including the mean, standard deviation, minimum value, and maximum value of each
index, were calculated to provide a preliminary understanding of the distribution of respondents’ religious cognition and attitudes. Based on a 5-point Likert scale, the average religious
scores across different dimensions (such as overall impression, religious tolerance, morality and ethics, etc.) were computed. For clarity, we combined the seven cognitive measures related
to each religion (including religious history, doctrinal tolerance, religious ethics, social influence, impression of followers, religious modernity, and overall impression) from the
questionnaire. The average score for each religion’s cognitive status was then calculated and used for comparative analysis, as shown in Table 3.
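Combining the seven cognitive measures into one score per religion, as in Table 3, reduces to a grouped mean. A plain-Python sketch follows; the dimension names are paraphrased from the questionnaire, not taken from the authors’ instrument:

```python
from collections import defaultdict
from statistics import mean

# Paraphrased names for the seven cognitive measures in the questionnaire.
DIMENSIONS = ("history", "tolerance", "ethics", "social_influence",
              "follower_impression", "modernity", "overall")

def composite_scores(rows):
    """rows: dicts with a 'religion' key plus the seven 1-5 Likert items.
    Returns the mean composite score (average of the seven items) per religion."""
    by_religion = defaultdict(list)
    for row in rows:
        by_religion[row["religion"]].append(mean(row[d] for d in DIMENSIONS))
    return {rel: mean(vals) for rel, vals in by_religion.items()}
```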
We conducted a correlation analysis to examine the relationship between respondents’ religious cognition and various background factors, including nationality, gender, educational
background, religion, and social context. However, after verification, no significant correlation was found between these background factors and religious cognition. To further assess the
impact of generative artificial intelligence on different groups, an independent samples t-test was employed. This method compares religious cognition between the group exposed to
AI-generated content and the group without AI interference. The analysis determines whether generative AI content leads to cognitive differences between the two groups.
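The group comparison is a standard independent-samples t-test and is one call with scipy. Welch’s variant (which does not assume equal variances) is used here as a safe default; the paper does not state which variant was applied:

```python
from scipy import stats

def compare_groups(ai_scores, control_scores):
    """Independent-samples t-test on mean cognition scores.
    equal_var=False selects Welch's variant, an assumption here."""
    t, p = stats.ttest_ind(ai_scores, control_scores, equal_var=False)
    return t, p
```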
To strengthen the evidence regarding potential biases in AI-generated texts, we compared texts from the experimental group (AI-generated) and the control group
(human-generated) through content analysis. A set of human-written religious texts was selected to match the AI-generated texts in theme, length, and information richness. We then applied
Thematic Analysis and Sentiment Analysis, focusing on Sentiment Polarity, Stereotype & Association, and Information Diversity.
Using the Bias Benchmarking Framework, we identified biases in categories such as Cultural Bias, Sentiment Bias, and Omission Bias through a combination of human annotation and
computer-assisted analysis. The results showed that AI-generated texts exhibited more significant biases in sentiment and stereotypes compared to human-written texts. For example,
AI-generated texts on Islam contained 1.5 times more references to “conflict,” while Christian texts featured more positive terms like “love” and “forgiveness.” These findings support the
hypothesis that generative AI in religious content creation can induce cognitive biases, influencing users’ attitudes.
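Frequency ratios of the kind reported above (e.g., 1.5 times more references to “conflict”) can be computed with a simple per-text term-frequency comparison. The helper below is an illustrative reconstruction, not the study’s actual analysis pipeline:

```python
import re

def term_ratio(ai_texts, human_texts, term):
    """Ratio of the average per-text frequency of `term` in the
    AI-generated corpus versus the human-written corpus."""
    def avg_freq(texts):
        hits = sum(len(re.findall(rf"\b{re.escape(term)}\b", t.lower()))
                   for t in texts)
        return hits / max(len(texts), 1)
    ai_f, human_f = avg_freq(ai_texts), avg_freq(human_texts)
    return ai_f / human_f if human_f else float("inf")
```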
For Christianity, the group without AI interference had an average evaluation score of 3.4, while the AI-interfered group scored 3.71. The presence of AI interference resulted in an increase of 0.31 points in the evaluation of Christianity (t = − 7.197, p