UPDATED 13:40 EDT / SEPTEMBER 13 2023

AI

Stability AI debuts latent diffusion AI platform for generating audio

Stability AI Ltd. today introduced Stable Audio, a software platform that uses a latent diffusion model to generate audio based on users’ text prompts.

The platform can generate up to 95-second clips across a variety of music genres. According to Stability AI, Stable Audio also lends itself to creating other types of audio including sound effects.

London-based Stability AI is backed by more than $100 million in venture funding. The company is best known for its image generation models, Stable Diffusion and Stable Doodle, which take text instructions and user-provided doodles as input, respectively. Stability AI has also released an open-source language model that can generate code and text.

Artificial intelligence audio generators like the company’s newly launched Stable Audio platform are usually implemented using so-called diffusion models. Those are neural networks trained on datasets into which random errors, known as Gaussian noise, have been deliberately introduced. By learning to detect and remove the noise, a diffusion model learns the structure of the audio files in its training dataset well enough to generate similar files on its own.
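The noise-corruption step at the heart of diffusion training can be sketched in a few lines. This is a toy illustration of the general technique, not Stability AI's implementation; the function and variable names are invented for this example:

```python
import numpy as np

def add_gaussian_noise(audio, noise_level):
    """Forward diffusion step: corrupt a clean signal with Gaussian noise.

    During training, the model is shown progressively noisier versions of
    each clip and learns to predict (and undo) the noise that was added.
    """
    noise = np.random.default_rng(0).normal(size=audio.shape)
    # Blend signal and noise; a higher noise_level means more corruption.
    return np.sqrt(1.0 - noise_level) * audio + np.sqrt(noise_level) * noise

clean = np.sin(np.linspace(0, 2 * np.pi, 1000))  # stand-in for an audio clip
noisy = add_gaussian_noise(clean, noise_level=0.5)
```

At `noise_level=0.0` the clip passes through unchanged; at `1.0` it is pure noise. A generator runs this process in reverse, starting from noise and removing it step by step.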

Stability AI argues that such AI systems have two major limitations. 

Diffusion models, the company says, are usually limited to generating audio snippets of fixed length. An AI trained on 30-second sound snippets, for example, can’t produce 40- or 20-second files. Furthermore, clips generated by such models often start in the middle or end of a musical phrase, which hurts their quality.

To overcome those limitations, Stable Audio uses a specialized type of diffusion model known as a latent diffusion model. What sets such models apart from the standard variety is that they’re always used together with a second neural network called an autoencoder.

An autoencoder is a neural network that takes a piece of data as input and removes unnecessary information. Such a model could, for example, ingest an audio file that contains background noise and filter out that noise. The autoencoder stores the remaining information in a compact mathematical structure known as a latent space.
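The compression an autoencoder performs can be illustrated with a toy linear version. Real autoencoders learn their weights from data; here the weights are random and serve only to show the shapes involved (all names are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "autoencoder": a linear projection down to a small latent space and
# back. The latent vector is 16x smaller than the raw signal chunk.
signal_dim, latent_dim = 1024, 64
encoder = rng.normal(size=(signal_dim, latent_dim)) / np.sqrt(signal_dim)
decoder = encoder.T  # tied weights, a common simplification

audio = rng.normal(size=signal_dim)   # stand-in for a raw waveform chunk
latent = audio @ encoder              # compressed latent representation
reconstruction = latent @ decoder     # approximate reconstruction

print(latent.shape)  # (64,)
```

The key property is that the latent vector is far smaller than the input while retaining enough information to reconstruct an approximation of it.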

A standard diffusion model is created using raw training datasets. A latent diffusion model, in contrast, is built using a refined version of the same training datasets from which an autoencoder has removed unnecessary information. Because the refined datasets are of higher quality, the latent diffusion model that was trained on them can generate higher-quality output. 
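Putting the two pieces together, generation with a latent diffusion model runs entirely in the compact latent space and only decodes back to audio at the end. The sketch below is a schematic of that pipeline, not Stability AI's code; the `denoise` function is a trivial stand-in for the trained diffusion network:

```python
import numpy as np

rng = np.random.default_rng(0)
signal_dim, latent_dim = 1024, 64
W = rng.normal(size=(signal_dim, latent_dim)) / np.sqrt(signal_dim)

def decode(latent, W):
    """Autoencoder's decoder: map a latent vector back to a waveform."""
    return latent @ W.T

def denoise(latent, step):
    """Stand-in for the trained diffusion network.

    A real model predicts the noise present in the latent at this step
    and subtracts it; here we just shrink the vector to illustrate the loop.
    """
    return latent * 0.9

# Generation: start from pure noise in the small latent space,
# iteratively denoise, then decode to audio once at the very end.
latent = rng.normal(size=latent_dim)
for step in range(10):
    latent = denoise(latent, step)
audio = decode(latent, W)
```

Because each denoising step operates on the 64-dimensional latent rather than the full 1,024-sample signal, the iterative loop is far cheaper than it would be in a standard diffusion model.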

Stability AI’s new Stable Audio platform comprises not one but three neural networks. Its core component is a U-Net-based latent diffusion model with 907 million parameters. It’s an enhanced version of an existing neural network, called Moûsai, that was released earlier this year.

Stability AI trained the model on more than 800,000 audio files from stock music provider AudioSparx. According to the company, those files contain about 19,500 hours of audio. Stability AI also incorporated text-based metadata, or contextual information, to optimize the AI training process.

Stable Audio combines U-Net with two other neural networks. One is an autoencoder, while the other is responsible for translating user prompts describing what audio should be generated into a form that U-Net can understand.
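The prompt-translation component maps free-form text to a numeric vector the diffusion model can condition on. The hash-based encoder below is a deliberately crude stand-in: production systems learn text embeddings jointly with the audio model, and every name here is invented for illustration:

```python
import numpy as np

embed_dim = 32

def embed_prompt(prompt):
    """Toy text encoder: map each word to a fixed random vector, then average.

    A real prompt encoder is a trained neural network; this only
    illustrates the interface (string in, conditioning vector out).
    """
    vectors = []
    for word in prompt.lower().split():
        # Derive a deterministic per-word vector from a hash of the word.
        word_rng = np.random.default_rng(abs(hash(word)) % (2**32))
        vectors.append(word_rng.normal(size=embed_dim))
    return np.mean(vectors, axis=0)

conditioning = embed_prompt("upbeat jazz piano, 120 bpm")
```

The diffusion model receives this conditioning vector at every denoising step, steering the generated latent toward audio that matches the prompt.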

The platform can generate 95 seconds of audio with a 44.1 kHz sample rate in under one second when running on an A100 graphics processing unit. The A100 was Nvidia Corp.’s flagship data center GPU until it was succeeded by the faster H100 last year.

Looking ahead, Stability AI plans to enhance both its audio generation models and the dataset used to train them. The company will also release open-source models based on Stable Audio.

Image: Stability AI
