What does AI mean for music producers?


Illustration: Leo Peng

Much like everywhere else, AI has become one of the go-to topics among music producers in 2023.

Warnings of an endless stream of automated music have been followed by nightmare scenarios of artist deepfakes and fraudulent uploads, all framed around a fear of the machine taking over our jobs, our art, and our lives. While there are certainly real concerns around copyright, artist intellectual property, and ethics, what AI brings to your creative process doesn't have to be fully automated art. Instead, AI can slot into your existing workflow, solving mix problems, helping with writer's block, and introducing creative processes that weren't possible before.

What is AI actually doing?

There are many different ways you can use AI in your creative work. Some tools use AI to actually create the sounds, generating audio from a prompt such as a text description, as in Google's MusicLM; this is known as generative AI. Others use AI to help you with composition, mixing, mastering, and other parts of the process by speeding up more administrative tasks, quickly finding the perfect sound, or 'collaborating' with you as you write and produce. This is known as assistive AI.

Although they’re not mutually exclusive, as a tool can be both assistive and generative, it’s helpful to think about these two categories for the sake of this article (and the examples below). And while these applications may seem simple, the complexity comes from how the models were initially trained. An AI model can only be as good as its dataset—that is the data that was inputted to train the model in the first place. It’s important to remember that no model, either assistive or generative, is infallible, especially for complex technical and subjective processes like music creation.

Even with the best intentions and the ‘cleanest’ data, there will likely be some artifacts and imperfections in the output or mistakes made by your new AI assistant. No AI tool is perfect—yet—so approach with a sense of curiosity and creativity instead of expecting a flawless result.

Is AI good for music?

Whether or not AI is good for music is a big question, given the extremely broad nature of the term ‘AI.’ But like any creative tool, it’s all about how you use it.

Anyone could go on Splice right now, download ten loops, drag them into their DAW, hit export, and call it a track. Is that creative? Sure, the loops are in key, everything's in time, and it sounds like music. But without human input, it's just another piece of content.

I believe more in using generative and assistive AI tools to facilitate creativity, not cheat it. Any tool that can relieve friction in your creative process feels like a positive thing. There will always be those who want a one-button music generation solution, and perhaps for content like TikTok videos and podcast music, that could even be understandable. But when it comes to music making, I believe AI’s creative potential can be a force for good.

How can I use AI in my music production process?

There won’t be a one-size-fits-all approach to using AI in the studio, but here are some scenarios that might sound familiar where AI could step in.

Everyone knows how it feels to be stuck. Stuck in a 16-bar loop, stuck staring at a blank DAW, stuck scrolling through samples and presets and getting nowhere. Splice's new AI tool is specifically designed to tackle writer's block. Born out of CoSo (Compatible Sounds), the new Create feature lets users start a track in seconds by harnessing the power of Splice's millions-strong Sounds catalogue.

Users start by selecting a style like Lo-Fi Wave or Hit Hop and are instantly met with six compatible loops from across Splice's library. Each one is automatically pitch-shifted and time-stretched to match, and you can quickly swap out ideas using the refresh button. It's an innovative way to get your creativity flowing, and when you're ready to explore your idea further, you can export to Ableton Live with the time-stretching and pitch-shifting intact. And if you want the audio samples that make up the idea, you can export either Stems that retain the pitch and time info, or the raw audio files in their original state.
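
Under the hood, conforming a loop to a project means time-stretching it to the target tempo and pitch-shifting it to the target key. Splice hasn't published how Create does this, but here's a minimal sketch of the same idea using the open-source librosa library; the file names, BPM values, and semitone shift are all hypothetical examples.

```python
# A minimal sketch of conforming a loop to a target tempo and key,
# in the spirit of what Create does automatically. This is NOT
# Splice's implementation -- just an illustration using librosa.
import librosa
import soundfile as sf

def conform_loop(path, source_bpm, target_bpm, semitones, out_path):
    """Time-stretch a loop to target_bpm, then pitch-shift it by `semitones`."""
    y, sr = librosa.load(path, sr=None)

    # rate > 1.0 speeds the audio up; stretching 120 BPM to 126 BPM
    # means playing it 1.05x faster.
    rate = target_bpm / source_bpm
    y = librosa.effects.time_stretch(y, rate=rate)

    # Shift pitch independently of tempo (e.g. +2 semitones from C to D).
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)

    sf.write(out_path, y, sr)

# e.g. conform a 120 BPM loop in C to a 126 BPM project in D:
conform_loop("loop.wav", source_bpm=120, target_bpm=126, semitones=2,
             out_path="loop_conformed.wav")
```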

While music production and mixing can be highly creative processes, they can also at times resemble admin. Your EQ technique can shape your style, but EQ can also simply be used to avoid 'masking'—i.e. when frequencies from sounds like kicks and basslines overlap and cause muddiness. iZotope's Neutron plugin is designed to 'listen' to the tracks in your session and make recommendations around EQ settings to avoid frequency clashes. It offers seven modules in total, including an exciter, compressor, distortion, and transient shaper, and each one can be activated depending on your taste and what Neutron 'thinks' each part of your song needs.
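
iZotope hasn't published Neutron's internals, but the core idea behind unmasking can be illustrated with a toy example: compare the average spectra of two tracks and flag low-frequency bands where both are loud at once. The sketch below does this with librosa and NumPy; the file names and the 300 Hz / 12 dB thresholds are arbitrary assumptions, not anything taken from the plugin.

```python
# A toy illustration of the 'masking' idea Neutron addresses -- NOT
# iZotope's algorithm. We compare the average magnitude spectra of a
# kick and a bass and flag frequency bands where both are loud, which
# is where an EQ cut on one of them would help.
import numpy as np
import librosa

def average_spectrum(path, n_fft=4096):
    y, sr = librosa.load(path, sr=44100)
    S = np.abs(librosa.stft(y, n_fft=n_fft))
    return S.mean(axis=1), librosa.fft_frequencies(sr=sr, n_fft=n_fft)

kick_mag, freqs = average_spectrum("kick.wav")
bass_mag, _ = average_spectrum("bass.wav")

# Convert to dB relative to each track's own peak so levels are comparable.
kick_db = 20 * np.log10(kick_mag / kick_mag.max() + 1e-10)
bass_db = 20 * np.log10(bass_mag / bass_mag.max() + 1e-10)

# Flag bins below 300 Hz where both tracks sit within 12 dB of their peak.
clash = (freqs < 300) & (kick_db > -12) & (bass_db > -12)
for f in freqs[clash]:
    print(f"possible masking around {f:.0f} Hz")
```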

What about staring at a blank DAW, unsure of where to start? Or what if your music theory knowledge is limited and your chords are sounding stale? There are a few tools that can help you—Muse Spark can generate chords from a text prompt inside your DAW, and Orb Composer, part of the Orb Producer Suite, can create intricate and highly customizable melodies and chord structures. Google's Magenta Studio, first released in 2019, also lets you generate MIDI chords and arrangement patterns from scratch. For more AI music generation tools, check out AIVA or Boomy.
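
None of these tools disclose their models, but it helps to see what 'generating MIDI chords from scratch' means at the file level. Here's a deliberately dumb baseline, a random choice of diatonic triads written out with the pretty_midi library; real tools use trained models, but the .mid file this produces is the same kind of output they hand your DAW.

```python
# A hedged, minimal sketch of programmatic chord generation: pick random
# diatonic triads in C major and write them to a standard MIDI file.
# This random baseline is NOT how Muse Spark, Orb, or Magenta work.
import random
import pretty_midi

C_MAJOR_TRIADS = {
    "C":  [60, 64, 67], "Dm": [62, 65, 69], "Em": [64, 67, 71],
    "F":  [65, 69, 72], "G":  [67, 71, 74], "Am": [69, 72, 76],
}

pm = pretty_midi.PrettyMIDI()  # defaults to 120 BPM
piano = pretty_midi.Instrument(program=0)  # Acoustic Grand Piano

beat = 0.5  # seconds per beat at 120 BPM
for bar, name in enumerate(random.choices(list(C_MAJOR_TRIADS), k=8)):
    start, end = bar * 4 * beat, (bar + 1) * 4 * beat  # one chord per bar
    for pitch in C_MAJOR_TRIADS[name]:
        piano.notes.append(
            pretty_midi.Note(velocity=80, pitch=pitch, start=start, end=end))

pm.instruments.append(piano)
pm.write("chords.mid")  # drag this into your DAW
```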

Stem separation is another concept that’s emerged from AI research, allowing you to isolate vocals, instruments, or other parts of a track from a mastered stereo file. Most tools are browser-based and certainly aren’t perfect, with artifacts leaking in—but as with most things in the AI space, expect them to quickly get better. Try out Moises, AudioShake, and LALAL.AI to hear this in action.
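
If you'd rather run stem separation locally than in a browser, Deezer's open-source Spleeter library does the same job in a few lines of Python. To be clear, this isn't what Moises or LALAL.AI run internally; it's just a freely available model you can experiment with (the file names here are examples).

```python
# A minimal sketch using Spleeter (pip install spleeter), Deezer's
# open-source stem separator -- an open-source alternative to the
# browser tools above, not their internal implementation.
from spleeter.separator import Separator

# 'spleeter:4stems' splits into vocals, drums, bass, and other.
separator = Separator("spleeter:4stems")

# Writes one WAV per stem into output/song/.
separator.separate_to_file("song.mp3", "output/")
```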

Finally, voice models are a revolutionary new way to capture the essence of a singer's voice and transfer it to your own. While there aren't any plugins offering this tech just yet, websites like Kits.ai and Voice-Swap allow you to choose singers to model from a drop-down list, or even train your own singing voice as an AI clone.

Who’s using AI right now?

While mainstream AI discourse may seem like a fairly recent phenomenon, artists have been experimenting with the technology for a while. In 2020, producer and engineer Shawn Everett fed chords played by The Killers frontman Brandon Flowers into an OpenAI tool called Jukebox and 'asked' it to extend the progression in the style of Pink Floyd. Elsewhere, 4AD artist Holly Herndon created an AI twin trained on her own voice and made it available for fans to create their own Holly+ acapellas. Later, Grimes did something similar, inviting collaborators to use her voice in exchange for a 50/50 royalty split.

Where do we go from here?

As with every new technology, it's about how it's used. Auto-Tune was designed to help singers stay on pitch but ended up becoming the go-to effect in rap and trap for more than a decade. Drum machines were designed to replace a traditional drummer in a band and ended up playing a huge role in the invention of house music. The Roland TB-303 was designed as a bass accompaniment for organ players and went on to define the iconic acid house sound. Humans are the reason technology redefines art, and AI is no different.

Splice's Create feature is a great example of how AI can be used—not to write the music for you, but to augment your ideas while keeping you in control. Later this year, we'll see some exciting new additions to the technology, including the ability to add your own audio to a Stack and find the perfect Sounds to match. If you're early in your AI journey, it's a great place to start—try it out here.

The potential scope of AI’s influence in music creation is vast, from mix problem-solving to mastering, composition assistance, voice modeling, and much more that can’t even be fathomed yet. Replacement narratives have always been a part of new technology, but I believe that AI can never replace human creativity. These tools, in the hands of artists, have the potential to open up a whole new paradigm of songwriting, sound design, and music production.



August 1, 2023

Declan McGlynn is a music technology journalist, consultant, and content creator. Across his career, he's worked with the likes of Rolling Stone, DJ Mag, Google, BBC, Water & Music, and many more. He also runs the weekly newsletter Future Filter.