The Artificial Intelligence (AI) Act, the world’s first-ever comprehensive legal framework to tackle AI risk, came into force on 1st August (today).
The framework forms part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI.
Its goal is to guarantee the safety and fundamental rights of people and businesses when it comes to AI, while also strengthening uptake, investment and innovation across the EU. At the same time, it ensures that AI systems on the market respect ethical principles by addressing the risks posed by powerful and impactful AI models.
This act specifically targets providers and developers of AI systems that are marketed or used within the EU.
This also includes free-to-use AI technology, and applies to providers and developers, irrespective of whether they are established in the EU or another country.
For instance, if an American-based company is providing AI-based technology within the EU, it is still subject to penalties if found non-compliant.
The legal framework provides clear requirements and obligations regarding specific uses of AI. At the same time, it seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs).
The EU has set up an AI Office, which will play a key role in implementing the act by supporting the governance bodies in Member States in their tasks.
The office aims to enforce regulations for general-purpose AI models, supported by the powers granted to the Commission by the AI Act.
These powers include conducting evaluations of general-purpose AI models, requesting information and measures from model providers, and applying sanctions.
Furthermore, the AI Office fosters an innovative ecosystem of trustworthy AI to maximise societal and economic benefits. This ensures a strategic, coherent, and effective European approach to AI on a global scale, positioning itself as a global reference point.
Non-compliance with the act will lead to hefty penalties, ranging from €7.5 million or 1.5 per cent of global revenue up to €35 million or seven per cent of global revenue. The penalty will vary based on the infringement and the size of the company.
How does it work?
In general, the AI Act defines four levels of risk for AI systems: minimal risk, limited risk, high risk, and unacceptable risk.
Based on this assessment, all AI systems considered a threat to the safety, livelihoods and rights of people are banned. This ranges from social scoring by governments to toys using voice assistance that encourage dangerous behaviour.
High-risk AI includes technology used in: critical infrastructure, such as transport, where failure can put the life and health of citizens at risk; safety components of products, such as AI applications in robot-assisted surgery; and employment, including CV-sorting software for recruitment procedures, among others.
All high-risk AI systems must meet a strict set of obligations prior to being put on the market. This includes adequate risk assessment and mitigation systems, clear and adequate information to the deployer, appropriate oversight measures to minimise risk and a high level of robustness, security and accuracy, among others.
Another example of high-risk technology includes remote biometric identification systems, used in facial and fingerprint recognition. The act prohibits the use of remote biometric identification in publicly accessible spaces for law enforcement purposes.
There are some exemptions which are permitted by the law, yet are still strictly regulated. These include instances where a child is missing or when there is a specific and imminent terrorist threat that must be prevented.
On the other hand, limited-risk AI refers to the risks associated with a lack of transparency, and so the act introduces specific obligations to ensure that users are informed when necessary.
For instance, when a company uses certain AI systems, such as chatbots, humans must be made aware that they are interacting with a machine, fostering more trust. In addition, under this law, AI-generated text published to inform the public on matters of public interest must be labelled as AI-generated. This also applies to audio-visual content, including deepfakes.
Although the act entered into force on Thursday (today), it will be fully applicable in two years’ time, with the exception of a few instances.
Prohibitions are set to take effect after six months; the governance rules and the obligations for general-purpose AI models will become applicable after 12 months; and the rules for AI systems embedded in regulated products will apply after 36 months.
To ensure a smooth transition to a fully enforceable AI Act, the European Commission launched the AI Pact.
The pact is a voluntary initiative that supports the act's future implementation and invites AI developers from Europe and beyond to comply with its key obligations ahead of time.