New International Treaty on AI Signed
In September 2024 the UK, EU, and US signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (AI Convention). It is the world’s first international AI treaty, with provisions designed to protect the public and their data, human rights, democracy and the rule of law.
The Convention requires signatory countries to monitor the development of AI and to ensure that any technology using AI is managed within strict parameters. It also commits them to act against activities which fall outside these parameters and to tackle the misuse of AI models that pose a risk to public services and the wider public.
The Convention sets out three overarching safeguards:
- protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them
- protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined
- protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect their citizens from potential harms and ensure AI is used safely
The Convention does not apply directly; legislators in each jurisdiction must implement it into their domestic law, and they have a wide degree of freedom over how it is interpreted and applied. The European Commission has said the Convention will be implemented in the EU via the recently enacted EU AI Act, which will become enforceable in stages over the next few years.
The UK Position
The UK has no AI regulation (yet). Despite media reports, the recent King’s Speech did not include a bill to regulate AI. The King said that the government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. We expect a government consultation to be announced soon. In the meantime, however, it is likely that new AI requirements will be introduced through other forthcoming legislation, e.g. the Product Safety and Metrology Bill. The published summary of this bill states that it aims to “support growth, provide regulatory stability, and deliver greater protection for consumers by addressing new product risks and opportunities, allowing the UK to keep pace with technological advances such as AI.” Managing AI in the context of product safety aligns with certain aspects of the EU AI Act.
When an AI Bill does finally appear, it is likely to focus on the producers of large language models (LLMs), the general-purpose technology that underpins AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot. As the Labour election manifesto stated:
“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”
Whatever shape the UK’s AI regulation takes, the government will have to ensure that the AI Convention is implemented. Shabana Mahmood, Lord Chancellor and Justice Secretary, said:
“Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth. However, we must not let AI shape us – we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”
If you are a DPO needing to stay abreast of the latest developments and best practices in AI implementation, join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop.