RESEARCH INTO ARTIFICIAL INTELLIGENCE VULNERABILITIES AND BUILDING A COMPREHENSIVE MODEL OF ORGANIZATIONAL SECURITY
DOI: 10.31673/2409-7292.2025.018929
DOI:
https://doi.org/10.31673/2409-7292.2025.018929Abstract
The rapid development of artificial intelligence technologies is accompanied by an increase in the number of cyber
threats that threaten the confidentiality, integrity and security of AI systems (Artificial Intelligence). The implementation of
regulatory requirements, in particular the European Union's AI Act, obliges organizations involved in the development and
deployment of AI models to adhere to high standards of cybersecurity and effective risk management. This study classifies key
vulnerabilities in artificial intelligence, assesses their impact on the security of AI systems, and proposes a multi-layered
architecture for protecting AI infrastructure. The threat analysis includes the study of attacks on training data, including data
poisoning, when attackers modify the training set to change the behavior of the model. Attacks directed at the model itself, such
as adversarial attacks, which allow the manipulation of the model's output data by introducing specially selected values, are also
considered. The study covers attacks on user inputs, including prompt injection and jailbreaking, which are used to bypass
established restrictions and obtain unwanted model behavior. In addition, privacy violations are considered, including model
inversion and membership inference attacks, which allow attackers to recover or discover data used during model training.
Particular attention is paid to the risks of bias in artificial intelligence algorithms, in particular, manifestations of bias in AI,
which can lead to discriminatory results due to unrepresentative or distorted training samples. Based on the analysis, the article
proposes a multi-layered security architecture that helps reduce the risks of compromising AI models and infrastructure. In
particular, it considers mechanisms for assessing the impact of the EU AI Act on the security of organizations, including an
analysis of potential fines, obligations and compliance measures for AI-oriented companies. Special emphasis is placed on
protecting AI infrastructure in cloud environments (AWS, Azure, GCP) by implementing data encryption methods, environment
isolation, limiting access to models, and countering API attacks. To ensure reliability and security, it is proposed to implement
threat monitoring and detection systems, in particular, using tools such as Arize AI and Aporia to detect anomalies in model
behavior, LIME and SHAP for explainability of AI solutions, as well as AWS GuardDuty, Azure Defender, and Google SCC
to monitor cyber threats in cloud infrastructure. The results of this study can be used to develop effective methods for protecting
AI systems, increasing their resistance to attacks, and creating a reliable and secure AI infrastructure that meets modern
cybersecurity challenges and regulatory standards.
Keywords: Artificial Intelligence (AI), AI Act EU, AI security, AI vulnerabilities, automated deployment of AI
infrastructure, data protection, cybersecurity compliance, AI risk management.
References
1. Trazzi, Michaël & Yampolskiy, Roman. (2018). Building Safer AGI by introducing Artificial Stupidity.
10.48550/arXiv.1808.03644.
2. Soprana, Marta. (2024). Compatibility of emerging AI regulation with GATS and TBT: the EU Artificial
Intelligence Act. Journal of International Economic Law. 27. 10.1093/jiel/jgae040.
3. Bangura, Gabriel. (2024). The ЄС ШІ Акт - Mitigating Discrimination In Artificial Intelligence Systems.
10.13140/RG.2.2.27020.63367.
4. Matai, Puneet. (2024). Comprehensive Guide to AI Regulations: Analyzing the ЄС ШІ Акт and Global
Initiatives. International Journal of Computing and Engineering. 6. 45-54. 10.47941/ijce.2110.
5. Molnar, David. (2024). AI unleashed: mastering the maze of the ЄС ШІ Акт. 10.56461/iup_rlrc.2024.5.ch12.
6. EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. (n.d.).
https://artificialintelligenceact.eu/
7. Скіцько, Олексій & Ширшов, Роман. (2024). НОРМАТИВНО-ПРАВОВЕ ЗАБЕЗПЕЧЕННЯ
КІБЕРБЕЗПЕКИ ОБ’ЄКТІВ КРИТИЧНОЇ ІНФРАСТРУКТУРИ. Науковий вісник Львівського державного
університету внутрішніх справ (серія юридична). 73-79. 10.32782/2311-8040/2024-2-11.
8. Міжнародні стандарти регулювання штучного інтелекту: аналіз актів, розроблених за результатами
Хіросімського процесу з ШІ. ЮРЛІГА. https://jurliga.ligazakon.net/. (2024, February 6).
9. Pochu, Sandeep & Nersu, Sai & Kathram, Srikanth. (2024). AI-Powered Monitoring: Next-Generation
Observability Solutions for Cloud Infrastructure. Journal of AI-Powered Medical Innovations (International online ISSN
3078-1930). 2. 140-152. 10.60087/Japmi.Vol.02.Issue.01.Id.010.
10. Sign, Ghader. (2024). Data-Driven AI Models for Cybersecurity: Optimizing Data Pipelines and Infrastructure
Protection in a Cloud-First World. 10.13140/RG.2.2.30481.44642.
11. Marinova, Miroslava. (2024). Balancing Innovation and Regulation: Evaluation of the CMA’s Report on AI
Foundation Models and their impact on competition and consumer protection.
12. Ruschemeier, Hannah. (2025). Generative AI and data protection. Cambridge Forum on AI: Law and
Governance. 1. 10.1017/cfl.2024.2.
13. Nguyen, Phan & Quang, Nguyen. (2025). Copyright protection for AI-generated works: A comparative review
of international and Vietnamese laws. Arts & Communication. 3745. 10.36922/ac.3745.
14. Pan, Qianqian & Dong, Mianxiong & Ota, Kaoru & Wu, Jun. (2022). Device-Bind Key-Storageless Hardware
AI Model IP Protection: A PUF and Permute-Diffusion Encryption-Enabled Approach. 10.48550/arXiv.2212.11133.
15. Martseniuk Y., Partyka A., Harasymchuk O., Shevchenko*** S. Universal centralized secret data management
for automated public cloud provisioning // CEUR Workshop Proceedings. – 2024. – Vol. 3826 : Proceedings of the
workshop "Cybersecurity providing in information and telecommunication systems II", Kyiv, Ukraine, October 26, 2024
(online).. – P. 72–81.
16. Nagar, Mayura. (2025). From Data to Sustainability: AI Case Studies in Shaping Sustainable Landscapes.
10.4018/979-8-3693-3410-2.ch008.
17. Malik, Shoaib. (2024). The Future of AI in Biometric Security: Enhancing Authentication and Privacy
Protection. 10.13140/RG.2.2.34306.59844.
18. Lee, Giljae. (2024). Personal Data Protection Issues in the Era of Artificial Intelligence. Journal of Medical
Imaging. 7. 13-18. 10.31916/sjmi2024-01-03.
19. Aslam, Umair & Amelia, Oscar. (2022). Exploring the Influence of Data Protection Laws on AI Development
and Ethical Use. 10.13140/RG.2.2.28372.41603.
20. Katrakazas, Panagiotis & Papastergiou, Spyros. (2024). A Stakeholder Needs Analysis in Cybersecurity: A
Systemic Approach to Enhancing Digital Infrastructure Resilience. Businesses. 4. 225-240. 10.3390/businesses4020015.
21. Hindle, Andrew. (2020). Impact of GDPR on Identity and Access Management. IDPro Body of Knowledge.
1. 10.55621/idpro.24.
22. Martseniuk Y., Partyka A., Harasymchuk O., Korshun*** N. Automated conformity verification concept for
cloud security // CEUR Workshop Proceedings. – 2024. – Vol. 3654 : Cybersecurity providing in information and
telecommunication systems 2024. Proceedings of the workshop cybersecurity providing in information and
telecommunication systems (CPITS 2024) Kyiv, Ukraine, February 28, 2024 (online).. – P. 25–37.–
23. Марценюк Є. В., Партика А. І. Аналіз впливу тіньових ІТ на інфраструктуру хмарних середовищ
підприємства // Безпека інформації. – 2024. – Т. 30, № 2. – С. 270–278
24. Ashraf, Nadeem & Badi, Sadi. (2024). Integrating AI for Monitoring and Compliance: Tackling Climate
Change in the Oil and Gas Industry give. 10.13140/RG.2.2.19022.78403.
25. Olasehinde, Tolamise & Jason, Frank. (2024). INTEGRATING AI AND MACHINE LEARNING FOR
COMPLIANCE MONITORING IN DATA LAKES.
26. Khan, Umar & Aggarwal, D & Muslim, & Sohrab. (2024). Analyzing the Role of Artificial Intelligence (AI)
in Monitoring Corporate Governance Practices and Ensuring Compliances in Improved Decision-Making Processes. 11.
3025-3031.
27. Dimitrijević, Nikola & Zdravković, Nemanja & Bogdanović, Milena & Mesterovic, Aleksandar. (2024).
Advanced Security Mechanisms in the Spring Framework: JWT, OAuth, LDAP and Keycloak.