Key Aspects of Spreading and Creating Disinformation Using Artificial Intelligence

Authors

DOI:

https://doi.org/10.33445/psssj.2026.7.1.1

Keywords:

Hybrid Warfare, Cognitive Warfare, Information Warfare, Cognitive Domain Operations, Artificial Intelligence

Abstract

The article examines the transformation of mechanisms for creating and disseminating disinformation amid the active integration of artificial intelligence technologies into the information and communication space. It substantiates that the development of generative models, particularly large language models and deep learning systems, significantly increases the scale, speed, and persuasiveness of information and psychological influence. It is established that artificial intelligence not only automates the production of fake content but also enables its personalization, its adaptation to the characteristics of target audiences, and its integration into social media through bots and algorithmic systems.

The study analyzes key artificial intelligence tools used for disinformation (language models, deepfake technologies, and systems for voice and image synthesis), as well as their functional capabilities in the context of cognitive warfare. It is determined that the critical factors intensifying disinformation include the accessibility of technologies, reduced costs of information operations, and the phenomenon of “truth decay,” which erodes trust in all sources of information.

Based on the analysis of empirical studies, it is demonstrated that AI-generated content can match or even surpass traditional propaganda in persuasiveness. At the same time, a potential negative impact of AI on human cognitive abilities is identified, particularly a decline in critical thinking.

It is concluded that the use of artificial intelligence in disinformation constitutes a systemic threat to information security and requires the development of comprehensive interdisciplinary countermeasures, including legal regulation, technological solutions, and the enhancement of media literacy.


References

Hanley, H. W. A., & Durumeric, Z. (2023). Machine-made media: Monitoring the mobilization of machine-generated articles on misinformation and mainstream news websites. arXiv. https://doi.org/10.48550/arXiv.2305.09820

Lee, S., et al. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Microsoft Research. https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf

Tomz, M., Weeks, J. L. P., & Yarhi-Milo, K. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034. https://doi.org/10.1093/pnasnexus/pgae034

Tidy, J. (2024, April). Elon Musk’s X pushed a fake headline about Iran attacking Israel. X’s AI chatbot Grok made it up. Mashable. https://mashable.com/article/elon-musk-x-twitter-ai-chatbot-grok-fake-news-trending-explore

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://rm.coe.int/information-disorder-report/168076277c


Published

2026-03-31

How to Cite

Kin, O. (2026). Key Aspects of Spreading and Creating Disinformation Using Artificial Intelligence. Political Science and Security Studies Journal, 7(1), 1-8. https://doi.org/10.33445/psssj.2026.7.1.1

Issue

Section

Articles