- Expert opinion supports vzbv's call for more legal certainty for providers if AI systems undergo "substantial modification".
- Voluntary conformity assessment by third parties must be possible regardless of the risk level.
- Small and medium-sized providers must benefit from lower fees for external conformity assessments.
Supported by the expert opinion, vzbv is calling for the following adaptations to the proposed regulation:
- Clearer rules to provide more legal certainty and simplification for providers when (self-learning) AI systems undergo a "substantial modification" and must therefore undergo a new conformity assessment procedure.
- Providers can choose whether to involve external bodies in the conformity assessment for all high-risk AI systems under Annex III, provided that the AI system complies with all relevant harmonised standards. Otherwise, the involvement of an external conformity assessment body becomes mandatory.
- Creation of a framework that allows small and medium-sized providers to benefit from lower fees for external conformity assessments. This would enable them to increase consumer trust in their systems, compensating for competitive disadvantages vis-à-vis large providers.
- Each provider should be able to have a third-party conformity assessment carried out on a voluntary basis, irrespective of the risk level involved.
- Inclusion of systems for emotion recognition or biometric categorisation as high-risk AI systems in Annex III, so that these systems must comply with harmonised standards.
By the end of 2022, the European Parliament and the Council of the European Union intend to agree on their respective positions. Negotiations will then continue in a trilogue with the European Commission.
Background
Conformity assessments for high-risk AI systems are a central element of the AIA. In a conformity assessment, external certification bodies or the operators of AI systems themselves confirm to authorities and consumers that a system meets all relevant requirements. The aim is to prevent undue discrimination or systematically wrong decisions and to strengthen consumer trust in AI. This is important because AI systems increasingly prepare or make important decisions about people – for example, in the field of insurance, in the automated selection of job applicants, or in facial recognition.