As the EU moves to enact new AI regulations, the rest of the world is slowly following suit. Glean useful insights from Boston Consulting Group’s Steven Mills and Kirsten Rulf on how best to navigate the complex landscape of AI regulation without impeding your ability to innovate or jeopardizing the public good. Most companies planning to implement AI models are far from ready, Rulf and Mills explain, because they’re making the mistake of waiting to see what the new AI regulations will require before taking action. Learn what steps you can take today to ensure you’re prepared to implement AI responsibly while maintaining your competitive edge.
AI companies must find immediate solutions to ensure compliance with emerging regulations.
Governments in Europe are scrambling to regulate AI, following heightened concern from the general public about its disruptive potential, yet they must find ways to do so without impeding innovation. The EU’s AI Act lays out a risk framework for new AI use cases, creating the following risk categories: unacceptable risk, high risk, limited risk and little or no risk. The EU will prohibit AI systems that pose unacceptable degrees of risk. “High-risk” use cases are those that could cause emotional, physical or financial harm to users (for example, influencing a user’s ability to access credit, health care or employment). The AI Act will require transparency, disclosure, certification and post-deployment documentation for any high-risk use cases. Companies that fail to adhere to the new requirements face fines of up to 6% of their global annual revenue.
After negotiations between the EU Commission and the European Parliament and council member...
Steven Mills is a managing director, partner and the chief AI ethics officer at Boston Consulting Group Gamma in Washington, DC. Kirsten Rulf is a partner and associate director at Boston Consulting Group in Berlin.