Why Bill Gates Isn’t Too Worried About the Risks of AI

Bill Gates outlined how he thinks about the risks from artificial intelligence (AI) in a blog post on Tuesday. While Gates remains excited by the benefits that AI could bring, he shared his thoughts on the areas of risk he hears concern about most often.

In the post, titled “The risks of AI are real but manageable,” Gates discusses five risks from AI in particular. First, AI-generated misinformation and deepfakes could be used to scam people or even sway the results of an election. Second, AI could automate the process of searching for vulnerabilities in computer systems, drastically increasing the risk of cyberattacks. Third, AI could take people’s jobs. Fourth, AI systems have already been found to fabricate information and exhibit bias. Finally, access to AI tools could mean that students don’t learn essential skills, such as essay writing, and could widen the educational achievement gap.

While Gates did discuss the risk that AI “reflects or even worsens existing biases,” he did not discuss the inequalities associated with the development of AI systems. For example, workers in Kenya employed by Sama, a contractor paid by OpenAI, faced difficult working conditions and were paid less than $2 per hour. Gates also did not address the legal issues faced by AI developers that use large amounts of data from the internet to develop their systems. Comedian Sarah Silverman and two other authors recently sued Meta and OpenAI for copyright infringement, accusing the companies of using their books to train their AI systems without their consent.

Gates has a lot riding on the future of AI. Microsoft, the tech giant he co-founded and in which he still owns a stake, is one of the biggest investors in OpenAI, the creator of ChatGPT and one of the companies leading the push in AI development.


Read More: Why Microsoft’s Satya Nadella Doesn’t Think Now Is the Time to Stop on AI

While Gates acknowledges that “no one has all the answers,” he remains optimistic that humans can manage those risks. “The future of AI is not as grim as some people think or as rosy as others think,” he writes.

Read More: The A to Z of Artificial Intelligence

Here are the key takeaways:

Take comfort from history

Gates notes that action by governments, companies, and people has allowed us to mitigate the risks from new technologies in the past, and he believes that this will also be the case with AI.

Read More: The AI Arms Race Is Changing Everything

This will not be the first time that societies have been reshaped by a powerful new technology, after all. Gates gives a number of examples of ways in which we have adapted to and dealt with technological developments before. People have learned to be more skeptical of “scams where someone posing as a Nigerian prince promised a big payoff in return for sharing your credit card number.” Presumably, he argues, they will develop the same instincts for AI-generated misinformation. While there are concerns that efforts to develop AI-powered cyberweapons could spiral into an arms race, Gates writes that states have coordinated to prevent arms races in the past. Similarly, new technologies have put some people out of work, but they have also created new jobs, he notes.

Focus on the short- to medium-term

In his blog post, Gates only addressed “risks that are already present, or soon will be,” rather than longer-term risks, such as extremely powerful AI systems developing their own goals that might conflict with those of humanity. This doesn’t necessarily mean that Gates doesn’t take these risks seriously. Earlier this year, Gates, along with other tech leaders, signed a public statement that warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But like other AI thinkers, such as Margaret Mitchell, Gates believes that “thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Read More: Column: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

AI might provide solutions to the problems it creates

Gates suggests that AI might end up providing solutions to the very problems it creates. AI-powered deepfake detectors could counter AI-generated misinformation, he writes, and AI could detect cybersecurity vulnerabilities and patch them before AI-powered cyberweapons exploit them.

This, along with his faith in society’s ability to manage technological change and in the benefits that technological change will bring, explains Gates’ cautious optimism about AI and his call for further development rather than a pause. Here, his thinking is more in line with those developing AI systems. For example, OpenAI recently announced it was launching a “superalignment” team, which aims to develop AI systems that could help “to steer and control AI systems much smarter than [humans].”

Correction, July 12

The original version of this story misstated the employment relationship between OpenAI and workers in Kenya. The workers in question were employed by Sama, which OpenAI paid for contract services, not directly by OpenAI; in addition, the contract between OpenAI and Sama has ended.

Write to Will Henshall at will.henshall@time.com