Entertainment

AI Is Reshaping Music From Creation to Consumption, Raising Big Questions

An AI song mimicking a real artist can be built, uploaded, and monetized in under an hour. Here's what that means legally, economically, and for everyone who makes or listens to music.

Sarah Chen · 14 min read
Source: www.theverge.com


Type a sentence. Pick a genre. Hit generate. In less time than it takes to brew a pot of coffee, platforms like Suno, Udio, and Mureka will deliver a finished, radio-ready song complete with lyrics, instrumentation, and a voice that sounds strikingly close to someone you already know. That someone might not have agreed to any of it.

That speed is the central disruption reshaping music right now. AI has reached into every corner of the industry: sample sourcing, demo recording, playlist curation, and even the digital liner notes that appear when you tap on a track. But beneath the productivity gains and creative novelty, two fault lines are widening in ways that will define the next decade of how music is made, owned, and heard. One concerns what training data was used and who owns it. The other concerns whose voice and likeness AI is permitted to replicate.

The 90-Second Studio

The technical pipeline is now genuinely accessible to anyone. Suno, currently the most prominent text-to-music platform, offers a Simple mode for quick generation and a Custom mode where users write detailed lyrics, define song structure, select instruments, and specify a style. ElevenLabs, already established for its voice synthesis technology, launched Eleven Music in August 2025, leaning into the vocal realism that built the company's reputation. Suno has since pushed further up the production chain with Suno Studio, a digital audio workstation that incorporates its AI capabilities and is designed to pair with professional tools like Ableton Live, FL Studio, and Logic Pro.

The result is a two-tier creative ecosystem: casual users who prompt their way to a finished track in seconds, and professional producers who use AI as an accelerant inside workflows they already control. Both groups can distribute what they make to streaming platforms within hours of creation. That distribution pipeline, and what it does to the economics of music, is where the problems begin.

The Copyright Fault Line

The training data that powers these tools was not assembled with artist consent, and major labels moved aggressively through the courts to say so. The litigation has already produced landmark settlements.

Warner Music Group sued Suno over the unlicensed use of recordings to train its model. The case settled in November 2025, with Suno agreeing to phase out its current models and launch a new generation in 2026 built on "more advanced and licensed" material, with artists and songwriters given control over whether and how their work is incorporated. Universal Music Group pursued a parallel case against Udio and reached a settlement on similar terms, with UMG agreeing to license its catalog to the platform going forward.

Those settlements, significant as they are, may be overshadowed by what came next. In January 2026, UMG joined forces with Concord and ABKCO to file what legal observers described as potentially "the single largest non-class action copyright case in US history," targeting an AI company for more than $3 billion in damages over the alleged infringement of more than 20,000 songs. The suit makes clear that the licensing détente reached with Suno and Udio is not a universal template. The industry is simultaneously negotiating and litigating, and the terms are still very much open.

For creators and platforms using AI music commercially, the practical risk is real. Training-stage infringement is only half the problem. Once a song is generated, questions about who owns the output, whether it constitutes a copyrightable work, and what royalties are owed to the humans whose work shaped it remain largely unresolved in court.

The Voice and Likeness Crisis

Copyright law, imperfect as it is, at least provides a working framework. The second fault line, centered on voice and likeness, operates in far murkier territory.

AI voice cloning tools can replicate the timbre and phrasing of a recognizable artist from a small audio sample. Legal analysts have noted that when AI swaps one artist's voice into a recording, it alters only the vocal timbre while preserving the original musical interpretation, a modification so minor it may not qualify as a copyrightable derivative work and may therefore fall outside standard infringement protections. The very narrowness of what copyright covers creates a gap that right-of-publicity law must fill, and those protections vary widely by jurisdiction.

Regulatory momentum is building. Deepfake and voice mimicry rules have expanded across the United States, Japan, and the European Union since 2025, drawing clearer lines between artistic use and deceptive impersonation. But enforcement remains inconsistent, and the pace of tooling far outstrips the pace of lawmaking.

The practical consequence for listeners is already visible: a song styled to sound like a well-known artist can appear on a streaming platform with no disclosure, no licensing payment, and no mechanism that reliably forces removal before it accumulates millions of streams.

The Economic Squeeze

Working musicians were already navigating a brutally compressed revenue model before AI arrived. Streaming transformed the economics by replacing $10 album downloads with royalties of roughly $0.003 per stream, what the music industry's own stakeholders have called "a fraction of a fraction of pennies." In April 2024, Spotify tightened that further by introducing a minimum streaming threshold: tracks that fail to accumulate enough plays earn nothing at all. The European independent music body Impala condemned the policy for "stripping revenue from independent labels and niche genres, disproportionately impacting classical, jazz, regional and non-English repertoire."

Against that backdrop, a Digital Music News survey found that 85% of independent musicians experienced revenue reductions, with 65% describing the impact as "significant." AI-generated content accelerates the problem by introducing sheer catalog volume. A single actor with a Suno account can upload hundreds of tracks in a week, each competing for algorithmic attention and accumulating micro-royalties that aggregate into meaningful revenue, while a human musician who spent months recording an album struggles to surface.
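The arithmetic behind that squeeze is stark. A rough sketch using the per-stream rate cited above (the `ROYALTY_PER_STREAM` figure is the article's approximate $0.003; real payouts vary by platform, territory, and contract, and the track and stream counts below are purely illustrative):

```python
import math

# Approximate per-stream royalty cited in the article. Actual rates
# vary widely by platform and deal; this is illustrative only.
ROYALTY_PER_STREAM = 0.003

def streams_to_match(album_price: float, rate: float = ROYALTY_PER_STREAM) -> int:
    """Streams needed for royalties to equal one album sale."""
    return math.ceil(album_price / rate)

def catalog_revenue(tracks: int, streams_per_track: int,
                    rate: float = ROYALTY_PER_STREAM) -> float:
    """Aggregate royalties across a catalog of tracks."""
    return tracks * streams_per_track * rate

# Replacing a single $10 album sale takes over 3,000 streams.
print(streams_to_match(10.00))     # 3334

# A prolific AI uploader with 300 tracks at a modest 1,000 streams
# each out-earns many human artists' entire release cycles.
print(catalog_revenue(300, 1_000)) # 900.0
```

The asymmetry is the point: the human musician needs each track to perform, while the AI uploader only needs the catalog to perform in aggregate.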

What Creators, Labels, and Listeners Can Do

The right response depends on where you sit in the market.

For creators using AI tools:
  • Before distributing AI-generated music commercially, verify whether the platform's training data was licensed. Suno's post-settlement 2026 models and UMG's deal with Udio represent the start of a licensed-model era, but not all tools have followed.
  • If you use AI to generate vocals or replicate an artist's style, explicit consent or a licensing agreement is required before distribution. Parody may qualify as fair use, but the standard is narrow and context-specific.
  • Disclose AI involvement on every platform that requires it. Undisclosed AI content increasingly violates distributor terms of service and can result in takedowns and withheld royalties.
  • Register AI-assisted works with the US Copyright Office. The Office has indicated it will consider registration for works with meaningful human authorship even when AI contributed.
For labels and artist managers:
  • Audit your catalog proactively for unauthorized AI covers and voice-cloned tracks, and use platform takedown mechanisms before streams accumulate.
  • Negotiate AI-specific clauses into artist contracts covering both training-data consent and the use of an artist's voice in synthetic content.
  • Monitor right-of-publicity statutes in your key markets. Federal legislation in the US targeting AI voice replication without consent is advancing, and the liability landscape will shift.
For listeners and playlist curators:
  • Unnatural pitch-perfect consistency across a full track, no discernible room sound, and an artist name absent from any verified catalog are strong indicators of AI-only generation.
  • Treat the absence of an AI disclosure label on a suspicious track as a reason to investigate rather than a guarantee of authenticity.
  • Cross-reference unfamiliar artist names against verified profiles on major platforms before curating or promoting their work.

The technology will not slow down. Suno's 2026 model roadmap, ElevenLabs' expanding voice-music pipeline, and the commercial pressure on every streaming platform to fill catalogs efficiently all point in one direction. Whether the legal and economic architecture around music gets rebuilt fast enough to protect the humans who make it is the defining question for an industry still arguing, three years into the AI era, over whether what these tools produce counts as art.

