
The Council of Europe has announced two new initiatives designed to evaluate the risks and impacts of AI in the areas of human rights, democracy and the rule of law.
Both initiatives are based on the HUDERIA methodology (Human Rights, Democracy and the Rule of Law Impact Assessment for AI), which provides a structured approach to assessing AI systems across their lifecycle. HUDERIA was developed to help protect the public from emerging risks, requiring organisations to prepare risk mitigation plans and conduct regular reassessments to ensure ongoing compliance with human rights and safety standards.
Use of the methodology is voluntary, but organisations are encouraged to apply it when developing and deploying AI systems, so that each system's impact can be evaluated in its wider social context. According to the Council of Europe, HUDERIA will play a key role in supporting implementation of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, recently adopted to guide responsible AI governance across member states.
For organisations developing or using AI, HUDERIA offers a practical tool for risk management and compliance. By adopting the methodology, businesses can demonstrate due diligence in assessing AI systems against human rights, democratic values and the rule of law. Although its use is voluntary, governments and regulators may view it as evidence of best practice. Over time, applying HUDERIA could become a benchmark for trustworthy AI deployment in Europe and beyond.