Musicians Are Already Using AI More Often Than We Think 

It goes beyond the rise of deepfakes and ChatGPT. AI tools will change how music is made, and already are.
Image by Marina Kozak

Six months ago, no one knew what ChatGPT was, let alone how to use it to automate their entire life. Since then, an AI-infused world has gone from a theoretical dystopia to a depressing, ever-changing reality. As is often the case with new technologies, music has been right out in front. Last month, a song featuring AI-generated vocal clones of Drake and the Weeknd, “Heart on My Sleeve,” by the then-unknown TikTok user Ghostwriter977, racked up millions of streams before the inevitable takedown notices and confusing label comments. A week later, Grimes endorsed replicating her voice for AI songs, and even offered to split the royalties. In the days that followed, the pace of fresh developments related to the use of AI in music creation was dizzying, from Spotify purging thousands of AI tracks to a scammer selling fake, AI-generated Frank Ocean leaks.

Although the basic tech behind vocal deepfakes has been around for a few years, and early adopters like Holly Herndon have long championed its creative potential, AI-generated music has finally gone mainstream. In the bright glare of the spotlight, it’s easier than ever to see how AI software could dramatically reshape the way music is conceived and recorded, providing new automated creative tools while threatening entire job categories—and that’s just in the short term.

Shawn Everett, the Grammy-winning engineer and producer behind albums by Kacey Musgraves, the War on Drugs, and Alvvays, compares the arrival of AI in music to the advent of the electric guitar or sampling. “As far as songwriting and production goes, we’re on the cusp of a wave of something that I don’t think we’ve really seen, maybe ever,” he says.

Everett paid attention in 2020 when OpenAI put out Jukebox, a tool for creating songs in various artists’ styles, complete with vocals. He even experimented with it while working on a song by the Killers that has never been released. Everett recalls inputting a chord progression that frontman Brandon Flowers had written and instructing the AI to continue it in the style of Pink Floyd, with a certain emotional tenor, only to have it spit out unexpected melodies. “What was happening was so different, and was landing in locations that no human being would normally think of, but it still felt rooted in something familiar,” he says. “I thought it was such a cool song.”

What’s coming next, Everett predicts, will be AI tools that can quickly combine ideas for melody, chords, and rhythm, similar to how programs like Midjourney and DALL-E, which generate images from natural language prompts, have shaken up visual art. Within a year or two, he speculates, the thousands of plugins in digital audio workstations like Pro Tools could merge into a single plugin that seamlessly carries out the user’s verbal requests. As an engineer, he wonders if he will ask the AI to set the EQ for a particular drum style—say, Metallica’s—or if the tech will eventually be able to spit out a replicated Lars Ulrich drum performance that sounds better than any drums he (or anyone else, for that matter) could have mic’d. “Obviously that’s a horrifying scenario for a lot of people, but it’s probably gonna happen,” Everett says.

Danny Wolf, an Atlanta-based, Mexico City-born producer with credits for Lil Tecca and Lil Uzi Vert, is matter-of-fact about the role of AI in his work. “I’m actually using ChatGPT right now,” he says, for a solo album he’s putting together that promises guests such as Rosalía and Karol G. “I told it my story, and it made a whole album concept for me—down to the tracklisting—and I’m using it.” 

Wolf says he has been tapping into AI tools for three or four years, including on an unreleased Juice WRLD collaboration called “Sexual Healing.” “I used an AI to create a symphony and then sampled the symphony,” he says. “You can hear the complex melodies in it.” He uses an AI program to remove vocals from beats, the design platform Canva’s AI text-to-image feature for artwork, and ChatGPT for business matters as well. “I was just feeding it information—we were pretty much going back and forth,” he says. “It was telling me where to allocate my budget—10 percent content creation, 3 percent email marketing—and breaking down how I needed to do this whole album rollout.”

Although AI may be routine for Wolf, he doesn’t downplay the tech’s potential ramifications. “It’s gonna put a lot of people out of business, for sure,” he says. “Producers definitely need to figure out a new way to adapt in the next couple of years. It’s gonna do everything. It’s gonna engineer, mix, and master.”

Rick Beato, a YouTube music personality and veteran producer, echoes his peers’ concerns about cuts to recording studio personnel. “Mastering engineers will be the first to go, and then mixing engineers,” he says, predicting an AI mixing/mastering tool that can mimic anyone’s style. Engineers already use robotic microphone stands controlled via an app; an AI could operate those instead, eliminating still more work.

Beato sees AI tools as perhaps the end of a long continuum of ways that music-making has tried to move past individual human limitations. In songwriting, rhyming dictionaries and co-writers have been available basically forever. Now lyricists can feed their verses into ChatGPT and ask for a better version, or prompt the bot to rewrite the song in someone else’s style. “I know in Nashville songwriting sessions, they’re using ChatGPT,” Beato says. On the production side, in his view, tools like Auto-Tune and timing correction software helped replace session musicians. Or take guitar amplifiers: “Very few people mic guitar amplifiers anymore,” Beato says—instead, many use digital models, which can recreate the sounds of different types of amps. If listeners became accustomed to Adam Levine’s cyborg croon 20 years ago, surely some will accept the latest AI iterations as well. 

It’s all happening extremely fast. Peter Martin, an Oscar-nominated film producer who has worked on virtual-reality projects with Justin Timberlake, Run the Jewels, and Janelle Monáe, recalls that just three years ago, an AI needed around 11 hours of recorded audio to mimic a voice. “That is now down to less than two minutes,” he says. Tools for creating AI music are becoming less labor-intensive too. Generating a fake Drake song might involve four or five different AI tools now, Martin explains, including a lyric generator, a beat generator, a melody generator, vocal cloning, and vocal synthesis. But he’s beginning to see one-stop shops like Uberduck. Plus, AI tools that can emulate hundreds of vintage synths—or combine them into previously unheard sounds—are already commercially available. “AI can create an entirely new rhythm that maybe a human couldn’t have executed,” Martin says. “You can create an audio version of a visual concept: What does a tree sound like?”

And with the desert-level thirst surrounding the acquisition and promotion of classic artist catalogs, it wouldn’t be a stretch to imagine AI-generated “new” songs by long-dead icons. (Universal Music Group has already called on Spotify and Apple Music to block AI companies from using its music to “train” their technologies.)

Economic displacement is the core concern beneath some of the more visceral reactions people have to AI, says Meredith Rose, senior policy counsel at nonprofit advocacy group Public Knowledge. “This is going to put actual humans out of creative industries,” she says. “Maybe this will mean that we start thinking about better ways of making sure that people can support themselves.” VR pioneer Jaron Lanier recently called for a concept of “data dignity,” where people might get paid for what they create even when it is channeled through big AI models, though how this would work in practice is still a matter of much debate. “The worst of all outcomes is where it’s just controlled by the same tech conglomerates who everyone is already very worried about for a whole host of reasons,” Rose says.

The viral saga of fake Drake has focused public attention on the rapid evolution of AI music, but what’s coming next could be far more significant. At their best, new AI tools might radically democratize music creation so that, as with the use of samples in early hip-hop, people who might not previously have even been considered musicians can write and produce songs. 

Over the short term, AI looks like it could be the latest step in the computerization of music, different from Pro Tools or Auto-Tune in scale rather than in kind. Look out a little further into what could happen as the AI models progress, though, and the mind boggles. “We could have some kind of utopia where any kind of music you could imagine could be generated instantaneously—if you want to hear Lithuanian Metallica music, you’re going to just press a button,” Everett says. “But at the same time, if you have access to that, what does that mean for art?”