Former Google CEO Warns Current AI Guardrails Aren’t Enough

ODSC - Open Data Science
Dec 12, 2023

During Axios’s AI+ Summit, former Google CEO Eric Schmidt warned that current AI guardrails aren’t enough to prevent harm. The former Google chief went on to compare the development of artificial intelligence to the introduction of nuclear weapons during World War II.

During his remarks, Schmidt said that the guardrails companies are adding to their AI-powered products “aren’t enough” to address the technology’s potential harms. In his view, humanity could be endangered within the next five to ten years.

As many have seen, AI safety has been a hot-button issue as AI has continued to scale across the globe. Different governing bodies have attempted to address concerns related to AI, data privacy, safety, and other dangers.

At the summit, Schmidt compared AI’s development to that of nuclear weapons. He said in part, “After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that. We don’t have that kind of time today.”

From his perspective, the danger arrives at “the point at which the computer can start to make its own decisions to do things.” This becomes even more sensitive as AI begins to be integrated into weapons platforms and other defense systems.

So what moves need to be made to ensure AI safety? In Eric Schmidt’s view, it will take a global body, similar to the Intergovernmental Panel on Climate Change, to “feed accurate information to policymakers.”

With a body like that in place, policymakers across the globe would have the information they need to make rational decisions related to AI. Even though Schmidt is concerned about AI, he still holds an optimistic view of the technology.

In particular, he believes AI could become a net benefit for the entire human population: “I defy you to argue that an AI doctor or an AI tutor is a negative. It’s got to be good for the world.” Schmidt’s view on AI isn’t unique.

As we saw this year, professionals in and out of tech have pushed for greater accountability and safety research for AI. Some have even gone so far as to petition for a pause in research on LLMs more powerful than GPT-4.

While 2023 was the year of AI adoption, 2024 may be the year of AI safety.

Originally posted on OpenDataScience.com

