Report: U.S. Chamber of Commerce Call for AI Regulation Framework

ODSC - Open Data Science
Mar 23, 2023

The U.S. Chamber of Commerce, a leading business lobbying group, made headlines recently with a new report calling for regulation of artificial intelligence technology to ensure it does not harm growth or pose a national security risk. The report asks policymakers to step up their efforts to establish a “risk-based regulatory framework” that ensures the technology is deployed in a responsible manner. This represents a significant shift for a group that has usually been hesitant about calls for regulation.

The Chamber’s change in position on AI regulation comes at a time when the emerging technology is increasingly being integrated into a wide range of industries, from healthcare to finance to transportation. While AI has the potential to bring about significant benefits, such as increased efficiency, cost savings, and improved decision-making, there are also concerns about its potential negative impacts, particularly around bias and labor.

For example, there are worries that AI could exacerbate existing inequalities, reinforce biases, and lead to job losses as programs such as ChatGPT become more powerful and capable of performing tasks once considered uniquely human. Furthermore, there are concerns about the potential misuse of AI for nefarious purposes, such as cyberattacks or disinformation campaigns. This is particularly worrying given the increasing use of AI in national security and defense.

The Chamber’s call for AI regulation can thus be seen as an acknowledgment of the potential risks associated with AI and the need to mitigate them. In a statement, the Chamber said that “AI can bring significant benefits to our economy and society, but we need to ensure that its development and deployment are guided by ethical principles and best practices that promote innovation, protect privacy, and maintain national security.”

The Chamber’s statement also highlights the need for collaboration between the private sector, government, and academia to develop and implement responsible AI policies. This is an important point, as it recognizes that AI regulation cannot be left solely to the market or to the government, but rather requires a multi-stakeholder approach.

This is why, within the report, the Chamber isn’t requesting what some would deem a one-size-fits-all approach. The report says in part, “Rather than trying to develop a one size-fits-all regulatory framework, this approach to AI regulation allows for the development of flexible, industry-specific guidance and best practices.”

As AI continues to be used in a diverse set of industries and other sectors of social life, the calls for further study and efforts in responsible AI grow as well. Clearly, this will be a conversation that the data science community, academia, and others will continue to return to in the future.

If you’re interested in Responsible AI, and the impacts artificial intelligence has on society, then you’ll want to check out ODSC East 2023. With eleven tracks focusing on the biggest challenges, questions, and issues surrounding AI, you won’t want to miss your chance to hear from the biggest names in data science.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
