Workshop on Fundamental Rights Impact Assessment (FRIA), 13 June 2025, Zagreb

In an era increasingly shaped by artificial intelligence, a key question emerges: how can we ensure that technology serves humanity—rather than the other way around? With this in mind, the Croatian Personal Data Protection Agency (AZOP) organized a free workshop on 13 June 2025, dedicated to assessing the impact of high-risk AI systems on fundamental rights, in accordance with Article 27 of the Artificial Intelligence Act.

“With the entry into force of the Artificial Intelligence Act, the General Data Protection Regulation (GDPR) retains, and indeed strengthens, its importance. The AI Act underscores the vital role of data protection, not as a barrier to innovation, but as its enabler, providing a secure and trustworthy foundation for the development of AI that benefits society while minimizing risks,” said Zdravko Vukić, Director of AZOP and Vice-Chair of the European Data Protection Board, in his opening address. He emphasized that the Agency has been designated as one of the competent authorities for monitoring and enforcing compliance with obligations related to the protection of fundamental rights in the context of high-risk AI systems. “When it comes to the intersection of AI and personal data processing, the Agency will not hesitate to exercise its new powers where necessary to safeguard individuals’ rights,” he added.

As part of the workshop, Anamarija Mladinić, Head of the Sector for EU, International Cooperation and Legal Affairs, delivered a presentation titled “The AI Act: A Risk-Based Approach and Its Interplay with the GDPR”. Her presentation offered a comprehensive overview of the regulatory framework and its key obligations, including the requirement under Article 27 to carry out a fundamental rights impact assessment (FRIA) before deploying high-risk AI systems. This requirement aims to protect individuals from discrimination, unfair automated decision-making, privacy violations, and other infringements of fundamental rights.

Participants were introduced to the FRIA methodology developed by Professor Alessandro Mantelero of the Politecnico di Torino, in cooperation with various partners and the Catalan Data Protection Authority. Professor Mantelero also presented real-world case studies illustrating how the methodology has been applied in practice, stressing the importance of conducting such assessments early in the AI development process to effectively identify and mitigate risks.

The FRIA methodology serves as a practical tool for identifying potential risks and defining appropriate mitigation measures to ensure the protection of fundamental rights. Its use is strongly recommended for all organizations developing or deploying high-risk AI systems, whether in the public or private sector. Early adoption of FRIA promotes ethical and responsible innovation while ensuring respect for fundamental rights throughout the AI lifecycle.

The workshop also featured case study presentations by Stefan Martinić (Law Office Stefan Martinić), Dr. Natalija Parlov Una (Apicura CERT), Ivan Ivanković (TrustPath), and Marko Đuričić (Visage Technologies). These contributions facilitated knowledge-sharing and illustrated diverse approaches to implementing the FRIA model across different sectors.

The event concluded with interactive group exercises, allowing participants to apply the FRIA methodology under expert mentorship. This hands-on experience significantly enhanced participants’ understanding and readiness to apply fundamental rights assessments in their own organizational contexts.
