Riffusion

Uses Stable Diffusion models to generate music

About

Stable Diffusion is an open-source AI model that generates images from text. Riffusion fine-tuned the model to generate a particular kind of image, a spectrogram, and then convert it into an audio clip. A spectrogram is a picture of sound: it shows how much energy is present at each frequency over time. Riffusion also offers an interactive web app where anyone can type in a prompt to generate an audio clip, and the app smoothly interpolates between different prompts, or between different seeds of the same prompt.
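The spectrogram-to-audio step requires recovering phase information that a magnitude spectrogram does not contain; the classic approach is the Griffin-Lim algorithm, which alternates between the time and frequency domains until the phase estimate settles. Below is a minimal NumPy sketch of that idea, not Riffusion's actual implementation; the window size, hop length, and iteration count are illustrative.

```python
import numpy as np

def stft_mat(x, win, hop):
    """Short-time Fourier transform as a (freq, frame) matrix."""
    n = len(win)
    frames = 1 + (len(x) - n) // hop
    return np.stack(
        [np.fft.rfft(win * x[i * hop:i * hop + n]) for i in range(frames)],
        axis=1,
    )

def istft_mat(spec, win, hop):
    """Inverse STFT via overlap-add with window-squared normalization."""
    n = len(win)
    frames = spec.shape[1]
    out = np.zeros((frames - 1) * hop + n)
    norm = np.zeros_like(out)
    for i in range(frames):
        out[i * hop:i * hop + n] += win * np.fft.irfft(spec[:, i], n=n)
        norm[i * hop:i * hop + n] += win ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, win, hop, n_iter=50):
    """Estimate a plausible phase for a magnitude-only spectrogram
    by alternating projections, then invert it to a waveform."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        audio = istft_mat(mag * phase, win, hop)
        # keep the known magnitudes, adopt the phase of the re-analysis
        phase = np.exp(1j * np.angle(stft_mat(audio, win, hop)))
    return istft_mat(mag * phase, win, hop)

# Demo: discard the phase of a 440 Hz tone, then reconstruct it.
sr, n, hop = 22050, 512, 128
win = np.hanning(n)
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
mag = np.abs(stft_mat(tone, win, hop))   # magnitude only, like an image
audio = griffin_lim(mag, win, hop)
```

Riffusion's pipeline adds steps this sketch omits, such as mapping image pixel values back to spectrogram magnitudes (typically on a mel or log scale) before inversion.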

Related Tools
Jukebox
A neural net that generates music, including rudimentary singing, as raw audio
AIbstract
Generate and stream personalized, original and royalty-free music in real time
VirtuozyAI
Virtuozy Pro is an AI music assistant that provides cutting-edge tools and services to help musicians
Endel
Personalized soundscape for focus, relaxation, and sleep
WarpSound
An AI music platform for creating and collecting music in a virtual metaverse
Cyanite.ai
Music tagging engine that uses AI to categorize music
Magenta Studio
Music plugins that use AI to generate music
Emusion
A tool for music analysis and discovery based on user-submitted songs
Mubert
AI-generated music
Samplette
Randomized, uncleared music samples from YouTube