
AI MUSIC: Make an ENTIRE ALBUM in 45 Minutes

Hold onto your headphones! I created a full musical album in under an hour using AI!

Story Driven Album Creator 👉 https://sendfox.com/lp/1yzqgy

This wasn’t just a one-prompt, push-button, get-album process—it was a creative challenge that let me focus on my strengths, maximizing AI’s role in achieving my creative vision. If you’ve seen my other videos, you know I like to push Generative AI tools to their limits. But why?

As I continue to work with more AI products, I’m getting a clearer picture of the future. And although I’m excited, it’s easy to dwell on the doom and gloom AI could bring to human creativity, value, and, most importantly, jobs. 

Many people feel this fear because they think our creativity may not be as unique or special as we once thought, or that the identity we’ve worked hard to build as artists could be easily overtaken by robots. And I get it, I really do.

However, the more I use these AI tools, the more I realize something about my personal creativity. It hasn’t gone away.  It’s been… enhanced.  And though this is a subject for another video, it sets the stage for the creative challenge ahead. 

I’m sure you’ve heard of people creating one-off bangers using SUNO and UDIO, the current favorites in the AI music creation space. The output of these services is phenomenal, and the fact that you can now in-paint in UDIO makes the service even more attractive. For this album, however, I used a less popular AI service: Stable Audio.

Although stability.ai is known for its flagship model, Stable Diffusion, its audio model, Stable Audio, has been overshadowed in the media by the success of other platforms. So I decided to give it a shot and really see what it is capable of.

What I appreciate about Stable Audio is its partnership with an existing stock audio library, AudioSparx. From Stability’s website, they claim that “Stable Audio 2.0 was exclusively trained on a licensed dataset from the AudioSparx music library, honoring opt-out requests and ensuring fair compensation for creators.”

To me this is everything. I sincerely hope that this pattern continues for future versions of Stable Diffusion, Midjourney, and AI models in general. But enough of that, let’s make an album, yeah?

The UI in Stable Audio is straightforward. You have your prompt box, prompt library, model, duration, and input audio. When I first tried the stock prompts, I got a pretty stock output. The music was okay, but nothing I would share. There just didn’t seem to be a purpose to it, really. Cool, I prompted this one single lyricless song... naw. Not for me.

As a filmmaker and storyteller, I’ve always been fascinated by how music can evoke emotions and tell a story, and I wondered if I could craft a musical experience that did just that. Not through lyrics, but through the raw emotions that music conveys. Although I come from a musical family, I don’t really have a background in music myself. What I do have is experience as a director, conveying the musical style, emotions, and feelings I envision for my commercials and films.

So, taking that same skill set, I went to ChatGPT and had it work with me to create a process for making the prompts for an entire album of music, not just a single track. I found this process extremely collaborative and creatively satisfying.

I won’t bore you with the details of my ChatGPT prompts because… I’ve developed a custom GPT just for you called the Story Driven Album Creator. It’s free if you have ChatGPT. The link is in the description below.

But the overall process for generating my prompts looked something like this: 

I informed ChatGPT about the background of the project, the tools I was using, and my desire for the album to tell a story across all the tracks.

I then described the story I wanted to tell—a tale of redemption, of a man fighting against the odds and overcoming himself through sheer willpower.

To help ChatGPT understand the vibe I was aiming for, I then detailed the style of music I wanted, in this case vaporwave and synthwave, along with some preferred musical instruments and my favorite artists in the genre.

Next, I worked with it to develop a prompt structure I could easily use in Stable Audio to create the individual tracks, specifying the BPM to keep the album flowing cohesively and supporting the overall narrative.

The prompts looked something like this: “Create a slow, moody vaporwave track with soft, wavering chords and a deep, throbbing bassline. Add subtle, swirling psychedelic effects. BPM: 80”
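If you’d rather script this step than chat back and forth, here’s a minimal sketch of the same idea using the OpenAI Python SDK. Everything in it is my own placeholder, including the model name, the eight-track count, and the album brief. This is not the Story Driven Album Creator itself, just one possible way to automate the prompt-writing loop.

# Minimal sketch: ask a chat model to write Stable Audio prompts, one per track.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# The model name, track count, and album brief are placeholders, not the custom GPT.
from openai import OpenAI

client = OpenAI()

album_brief = (
    "A vaporwave/synthwave concept album telling a redemption story: "
    "a man fighting the odds and overcoming himself through sheer willpower. "
    "Instrumental only. Keep every track around 80 BPM so the album flows."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You write prompts for the Stable Audio text-to-music model. "
                "For each track, output one line in the form: "
                "'<mood and style description>, <instruments and effects>. BPM: <number>'"
            ),
        },
        {
            "role": "user",
            "content": album_brief + "\nWrite prompts for 8 tracks, in story order.",
        },
    ],
)

# Each printed line can be pasted straight into Stable Audio's prompt box.
for line in response.choices[0].message.content.splitlines():
    if line.strip():
        print(line.strip())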

After I put the first few generated prompts into Stable Audio, the outputs were significantly better than my initial trials, and most of my album tracks came out perfect on the first try.

After about 40 minutes of generating, I had a full album. But because I’m a visual storyteller, I wanted to create some videos to go along with each track. 

Using my normal workflow of Midjourney, Runway, Photoshop, and After Effects, I created awesome visuals to go along with the new music I had just generated, and I really love how this all came together as one cohesive project. It’s almost like this fusion of AI and traditional tools isn’t just enhancing our current artistic expressions; it’s paving the way for entirely new forms of creativity.

To ensure this wasn’t just a fluke, I replicated the process with the Story Driven Album Creator and Stable Audio twice more, successfully creating two more vaporwave albums for my new channel, Cinematic Ambience. If you’re up for a listen, click any of the links here to dive right in.