The AI Act, first proposed by the European Commission in 2021, aims to create a solid legal framework to ensure that AI technologies are developed and deployed safely, particularly in high-risk sectors such as healthcare, transportation and law enforcement.
The AI Act aims to address growing concerns about the impact of AI on society by categorizing AI systems according to the level of risk they pose. High-risk AI systems that have the potential to impact human rights, safety and wellbeing are subject to the most stringent regulatory requirements. These systems include applications such as biometric identification, critical infrastructure management and AI used in medical devices.
Post-market surveillance
A cornerstone of the AI Act is the requirement for post-market surveillance of high-risk AI systems. This provision, contained in Article 72 of the Act, requires providers of high-risk AI systems to implement a comprehensive monitoring system to track the performance and impact of their AI technologies after deployment.
The purpose of post-market surveillance is to ensure that AI systems comply with the AI Act throughout their lifecycle. This monitoring includes collecting, documenting and analyzing data to identify potential problems, such as bias or discriminatory results. Providers must be prepared to take corrective action if their AI systems are found to be in violation of the Act’s requirements.
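The kind of analysis described above can be illustrated with a minimal sketch. The function below is a hypothetical example, not prescribed by the Act: it computes the gap in positive-outcome rates between groups from logged decisions, a simple check a provider might run as part of monitoring for discriminatory results. The threshold is illustrative only.

```python
from collections import Counter

def demographic_parity_gap(records):
    """Return the largest gap in positive-outcome rates between any
    two groups, computed from logged (group, outcome) pairs.
    0.0 means all groups receive positive outcomes at the same rate."""
    totals, positives = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical post-deployment log of (group, approved?) decisions
logged = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

ALERT_THRESHOLD = 0.2  # illustrative value, not taken from the Act
gap = demographic_parity_gap(logged)
if gap > ALERT_THRESHOLD:
    print(f"Potential disparity detected: gap = {gap:.2f}")
```

In practice a provider would run checks like this continuously over incoming production data and trigger the corrective-action process when an alert fires.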
The AI Act’s emphasis on post-market surveillance reflects the EU’s commitment to creating a safer and more trustworthy AI ecosystem. By requiring continuous monitoring, the legislation aims to prevent harmful outcomes and maintain public trust in AI technologies.
Record-keeping and logging
In addition to post-market monitoring, the AI Act requires providers of high-risk AI systems to keep strict records, including the automatic logging of events and decisions made by AI systems during their use. These requirements ensure that AI systems are transparent and accountable, and they enable effective auditing and traceability.
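One way such automatic logging can work is sketched below. This is a hypothetical illustration, not a format mandated by the Act: each decision is appended to a write-once log with a timestamp, the model version, a hash of the inputs (so the log can attest to what was processed without storing raw, possibly personal, data), and the output.

```python
import hashlib
import json
import time

def log_decision(log_file, model_version, inputs, output):
    """Append one audit record of an AI decision as a JSON line.
    Inputs are hashed rather than stored verbatim."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    log_file.write(json.dumps(record) + "\n")
    return record

# Usage: append-only JSON Lines file that auditors can replay later
with open("decisions.jsonl", "a") as f:
    log_decision(f, "credit-scorer-1.4", {"income": 52000}, "approved")
```

An append-only, line-oriented format like this keeps each event self-contained, which simplifies the traceability and audit scenarios the Act anticipates.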
Logging is particularly important for high-risk AI applications where a malfunction or error could have serious consequences. By keeping detailed records, providers can demonstrate that they are compliant with the AI Act and provide evidence in the event of a regulatory investigation or legal challenge.
Challenges
While the AI Act sets the bar high for the regulation of AI technologies, its implementation will pose significant challenges for companies. Companies will need to invest in new monitoring and record-keeping systems and develop strategies to comply with the Act’s requirements. As AI technology evolves, the EU will need to update the legislation to address new risks and ensure it remains relevant.
In summary, the AI Act represents a bold step forward in the regulation of AI and aims to protect society from the potential dangers of high-risk AI systems. By focusing on post-market surveillance and record keeping, the EU is setting a global standard for AI safety and accountability.