The European Union’s (EU) Artificial Intelligence Act (AI Act) establishes a common regulatory and legal framework for the development and application of artificial intelligence. The European Commission (EC) proposed the law in April 2021, and the European Parliament passed it in May 2024.
EC guidelines introduced this week now specify that AI practices assessed as posing an “unacceptable” risk are prohibited. The AI Act sorts AI systems into four risk categories, each subject to a different degree of oversight.
Low-risk AI, such as spam filters and recommendation algorithms, remains largely unregulated, whereas limited-risk AI, such as customer service chatbots, must meet basic transparency requirements.
AI considered high-risk, such as systems used in medical diagnostics or autonomous vehicles, is subject to stricter compliance measures, including legally mandated risk assessments.
As a result of the AI Act, Europeans can benefit from artificial intelligence while being protected from the potential risks of its application. The majority of AI systems pose minimal to no risk and can help address societal challenges, but certain applications need
[…]
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents