Artificial Intelligence Governance Professional Exam 2025 – Complete Practice Test

Question: 1 / 400

What are the four categories of AI according to the EU AI Act?

A. Unacceptable Risk, High Risk, Limited Risk, No Risk

B. Minimal Risk, Moderate Risk, High Risk, Very High Risk

C. Standard Risk, Critical Risk, High Risk, Unacceptable Risk

D. Unacceptable Risk, High Risk, Limited Risk, Minimal or No Risk (correct answer)

The EU AI Act classifies AI systems into four categories based on the level of risk they pose to users and society. The correct grouping begins with "Unacceptable Risk," which covers systems that pose a clear threat to safety or fundamental rights and are therefore prohibited outright. "High Risk" systems are permitted but must meet stringent requirements, such as risk management, data governance, documentation, human oversight, and conformity assessment, because they operate in areas where failures could cause significant harm. "Limited Risk" systems are subject to specific transparency obligations, for example ensuring that users know they are interacting with an AI system such as a chatbot. Finally, "Minimal or No Risk" covers applications, such as spam filters or AI-enabled video games, that carry little risk and are not subject to additional obligations under the Act. This tiered approach gives the regulation a flexible framework that scales obligations to the potential impact of each AI system.

The other options misstate these categories or introduce terms, such as "No Risk," "Moderate Risk," "Very High Risk," "Standard Risk," and "Critical Risk," that do not appear in the EU AI Act's classification and therefore do not reflect its regulatory approach.
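As a study aid only, the four-tier scheme can be summarized as a simple mapping from risk tier to regulatory treatment. The sketch below is illustrative: the tier names follow the Act, but the EU_AI_ACT_RISK_TIERS dictionary, the classify helper, and the one-line summaries are paraphrases created for this example, not part of the regulation or any official tooling.

```python
# Illustrative only: a minimal mapping of the EU AI Act's four risk tiers
# to a one-line summary of their regulatory treatment (paraphrased for study).
EU_AI_ACT_RISK_TIERS = {
    "Unacceptable Risk": "Prohibited outright (clear threat to safety or fundamental rights).",
    "High Risk": "Permitted, but subject to strict compliance obligations before and after deployment.",
    "Limited Risk": "Permitted, subject to transparency obligations (users must know they are dealing with AI).",
    "Minimal or No Risk": "Permitted without additional obligations under the Act.",
}

def treatment_for(tier: str) -> str:
    """Return the summarized regulatory treatment for a given risk tier."""
    return EU_AI_ACT_RISK_TIERS[tier]

if __name__ == "__main__":
    for tier, treatment in EU_AI_ACT_RISK_TIERS.items():
        print(f"{tier}: {treatment}")
```

Running the script simply prints each tier alongside its summarized treatment, mirroring the ordering used in the explanation above.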
