There is no doubt that the launch of ChatGPT in November 2022 brought the many risks and opportunities of generative AI into the public consciousness, accelerating and amplifying conversations around its implications.

Lawmakers around the world have found themselves forced to grapple with the technical, legal, societal, security and commercial questions posed by the rapidly evolving technology. Early out of the blocks, owing to work which was already in progress, is the EU, which will shortly implement its Artificial Intelligence Act.

The Act, which will apply directly within EU member states, focuses on the areas of AI use which, in the view of the EU, could present the greatest potential harm to the public and forces the owners and users of the technology to put in place suitable risk assessment and mitigation measures.

It is expected to come into force before the end of 2023, with a 24-month transition period to allow detailed guidance to be implemented and new regulatory bodies to be established.

What will it cover?

Under the Act, levels of risk posed by the use of AI in different contexts are articulated, with prohibitions and obligations applying to the highest risk levels and, to a lesser degree, to the lower ones. In simple terms, this means that the EU has focused on situations where the use of AI may have the most severe negative impacts on individuals.

The use of AI for social scoring, certain biometric and emotion-reading tools, mass surveillance, and the manipulation of behaviour in ways that cause harm is considered an unacceptable risk and is subject to an outright ban.

Tools which are used to impersonate humans, such as chatbots, and deepfakes will be subject to transparency obligations, meaning that end users should be made aware that AI technology is being used in delivering the service.

Use within educational settings, for access to employment, within the safety components of vehicles, or for any purpose which may have health and safety implications or an impact on human rights presents a high level of risk and is therefore subject to controls.

Providers of high-risk systems will be required to comply with registration obligations and put in place risk management and governance processes. This will include:

  • Ensuring effective human oversight of design and development.
  • Making sure training data is appropriate to its purpose.
  • Creating and retaining technical documentation demonstrating conformity.
  • Applying a Conformité Européenne (CE) mark to demonstrate that the tool meets EU standards.
  • Notifying breaches and incidents to the relevant authorities.

Foundation models, including generative AI tools which do not stray into high-risk territory, such as those which might be used within commercial marketing practice, will also be subject to governance requirements. These include risk assessments and data governance measures. Design will need to take into account the performance and energy efficiency of the models. Downstream providers will need to be supported with technical documentation to meet their own obligations under the Act, and details of the models will need to be added to an EU-wide database.

What does this mean for EU businesses?

The transparent approach which the EU is mandating may benefit businesses looking to work with AI technology by giving them visibility of tools and systems under development. It will place EU-based businesses working in this space on a level playing field and embed risk assessment practice, which will have benefits for individuals and consumers. It remains to be seen whether the wider global community follows suit or adopts different regulatory approaches, which could make it difficult for international businesses to navigate.

What about the UK?

As revealed at the November AI Safety Summit at Bletchley Park, the UK government has ambitions to put the UK at the centre of the AI revolution. It published a White Paper in March 2023. There are no proposals to create a single piece of totemic legislation similar to the EU Artificial Intelligence Act; instead, the intention is to rely on and build upon existing legislation to create a regulatory framework for the development and use of AI. This has advantages in terms of agility but also has the potential to create uncertainty.

Key Takeaway

With AI, the opportunities for creativity and process agility are huge. Clients will want to ensure that appropriate use cases are identified, that all parties involved are comfortable with any associated risks, and that future engagement with AI is carried out responsibly and ethically. Dentsu shares those goals and looks forward to partnering with its clients on that journey.