EU picks experts to steer AI compliance rules

By Martin Coulter
LONDON (Reuters) – The European Union has picked a handful of artificial intelligence experts to decide how strictly businesses will have to comply with a raft of incoming regulations governing the technology.

WHY IT’S IMPORTANT

On Monday, the European Commission will convene the first plenary meeting of working groups — made up of external experts — tasked with drawing up the AI Act's "code of practice", which will spell out exactly how companies can comply with the wide-ranging set of laws.

There are four working groups, focused on issues such as copyright and risk mitigation. Experts selected to oversee the groups include Canadian scientist and “AI godfather” Yoshua Bengio, former UK government policy adviser Nitarshan Rajkumar, and Marietje Schaake, a fellow at Stanford University’s Cyber Policy Center.

Big tech companies such as Google and Microsoft will be represented in the working groups, as will a number of nonprofit organisations and academic experts.

While the code of practice will not be legally binding when it takes effect in 2025, it will provide firms with a checklist they can use to demonstrate their compliance. Any company claiming to follow the law while ignoring the code could face a legal challenge.

CONTEXT

AI companies are highly resistant to revealing the content their models have been trained on, describing the information as a trade secret that could give competitors an unfair advantage were it made public.

While the AI Act’s text says some companies will be obliged to provide detailed summaries of the data used to train their AI models, the code of practice is expected to make clearer just how detailed these summaries will need to be.

One of the EU's four working groups will focus specifically on issues around transparency and copyright. Its work could result in companies effectively being forced to publish comprehensive summaries of their training datasets, leaving them vulnerable to untested legal challenges.

In recent months, a number of prominent tech companies, including Google and OpenAI, have faced lawsuits from creators claiming their content was improperly used to train the companies' AI models.

WHAT’S NEXT

After Monday, the working groups will convene three more times before a final meeting in April, when they are expected to present the code of practice to the Commission.

If the code is accepted, companies' compliance efforts will be measured against it from August 2025.

(Reporting by Martin Coulter)
