Beyond the Illusion: Evaluating the Trustworthiness of AI-Generated Information
Abstract
Artificial intelligence tools such as ChatGPT, deepfake technology, and DALL·E are changing how digital content is created. These tools can greatly boost creativity and productivity, but they raise serious concerns about the credibility and authenticity of the information they produce. This research examines how generative AI intensifies misinformation risks in low-resource and developing regions. The goal of the paper is to establish a framework combining AI detection and human oversight for identifying misleading content. The qualitative methods used are case studies and expert interviews. We examined prominent examples of AI-generated misinformation, including deepfake videos and fabricated news stories. In addition, the researchers conducted semi-structured interviews with AI researchers, librarians, and digital policy experts to gain insight into the challenges of content verification and governance. A review of open-source AI models revealed recurrent issues, including bias and factual errors. The problem is especially relevant in countries like Bangladesh, where limited digital skills and restricted access to verification systems are ongoing concerns. Generative AI can produce content that looks authentic, making it difficult for people to recognize it as AI-generated. Users in environments with lower information literacy are more likely to accept AI-generated content uncritically, making the problem particularly severe there. Misinformation also spreads rapidly because reliable content-detection tools are unavailable and public awareness is low. Although generative AI can assist with learning and content production, the usual absence of visible reliability indicators makes it hard for everyday users to apply critical thinking. We therefore need to reassess how people access and interact with AI-generated content.
The paper highlights the need to integrate information literacy into digital education so that users can evaluate what they encounter online. It also presents a model of human-AI coexistence in which the AI flags content it judges to be misleading, and trained professionals or community validators then step in to provide context and verification. The researchers recommend that policymakers push AI developers for transparency and invest in tools that help the public verify information. The educational sector, particularly libraries and universities, should do more to build AI literacy and awareness. One limitation of this study was constrained access to the internal data of commercial AI platforms, which made deeper technical analysis of the models difficult. The study is also based on qualitative data, which limits wider statistical generalization. Additionally, given the fast pace of generative AI development, the findings will need to be updated over time to remain relevant.
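The human-AI coexistence model described above could be sketched as a simple triage pipeline: an AI detector scores incoming content, items above a flagging threshold are routed to trained human validators, and only the human makes the final call. The following is a minimal illustrative sketch, not the paper's actual implementation; all names and threshold values (`FLAG_THRESHOLD`, `ContentItem`, `triage`, `human_review`) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical threshold -- chosen for illustration, not derived from the study.
FLAG_THRESHOLD = 0.5  # detector scores at or above this are sent to human review

@dataclass
class ContentItem:
    text: str
    ai_score: float            # detector's estimated probability the item is misleading
    status: str = "unreviewed"
    reviewer_note: str = ""

def triage(items):
    """AI-side step: flag likely misinformation; never auto-remove anything."""
    flagged, passed = [], []
    for item in items:
        if item.ai_score >= FLAG_THRESHOLD:
            item.status = "needs_human_review"  # routed to a trained validator
            flagged.append(item)
        else:
            item.status = "published"
            passed.append(item)
    return flagged, passed

def human_review(item, is_misleading, note=""):
    """Human-side step: a validator confirms or overturns the AI flag with context."""
    item.status = "labeled_misleading" if is_misleading else "published"
    item.reviewer_note = note
    return item

# Example: two items, one flagged by the detector, then resolved by a human.
flagged, passed = triage([
    ContentItem("viral deepfake clip", ai_score=0.85),
    ContentItem("routine news report", ai_score=0.10),
])
human_review(flagged[0], is_misleading=True, note="confirmed manipulated video")
```

The key design choice mirrors the paper's argument: the AI narrows the volume of content that humans must inspect, while the authoritative label always comes from a person who can add context.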
Publication Details
- Type of Publication:
- Conference Name: The 4th International Conference on Information and Knowledge Management
- Date of Conference: 16/09/2025 - 16/09/2025
- Venue: East West University, Dhaka, Bangladesh
- Organizer: East West University, Dhaka, Bangladesh