MBW Views is a series of exclusive columns written by notable music industry figures… who have something to say. The following column comes from Ran Geffen Levy (pictured inset), founder of OG.studio, who advises music technology startups, companies and venture capital firms. He is also the CEO of Amusica Song Management in Israel.
A few days ago I asked ChatGPT-4o to create a pop-punk version of Neil Young’s “Harvest Moon”.
It provided me with the musical score, adapted the lyrics to suit the genre according to my requirements, suggested instrumentation, explained in detail the changes it had made to the original evergreen, and offered marketing tips.
Here is a transcript of the conversation.
Legal issues aside (I’ll get to those shortly), this is a good thing. A musical composition is the binary code, the DNA, and the songwriter owns it. Training artificial intelligence does not fit any of the existing rules underpinning the music industry: it is not mechanical rights, it is not performance rights, and there is no compulsory license. It is, pure and simple, a commercial use of intellectual property.
Songwriters and music publishers will finally get their fair share. When it comes to data training, the AI doesn’t care who sang “Jolene”; it cares how the song was written and why it became popular.
All we need to do is make sure the AI companies care who wrote it, and that the writers are compensated accordingly. The math is simple: one recording equals one recording, but a composition can equal an infinite number of recordings. When it comes to artificial intelligence, the power lies with the songwriter.
How songwriters got screwed
I was involved in one of the first ringtone licensing deals in the early 2000s. We worked out a deal under which the songwriter and publisher got 40% of the proceeds. Those ringtones were MIDI files that sold like hot cakes, with that share going directly into the writers’ pockets.
Then came polyphonic ringtones, and we were asked to share the revenue with the record labels. I picked up the phone and asked one of the major publishers I represented at the time: “What the hell? No recordings are being used. Why do we have to share revenue with the record labels?” The answer: it’s complicated, but there’s an explanation. My first thought: we’re screwed.
A quarter of a century later, songwriters are (again) being stripped of their small but legitimate share of streaming wealth, while other players in the industry keep their rates and benefit from Spotify’s latest monetization plans for the streaming age.
The only thing that has changed since then is that songwriters and publishers now have partners in the form of equity funds, which have spent billions acquiring their stakes and have therefore put themselves in the songwriter’s shoes. Those funds want to maximize their return on investment; look at how they react when someone devalues their assets.
Who poked the bear?
What does it take to create music using artificial intelligence? A platform where lyrics, compositions and sounds can be understood and produced.
Lyrics module (LLM): trained on song lyrics, as well as metadata and theoretical information about how songs are composed.
I asked Claude, Reka, Gemini and ChatGPT-4 to adapt “The Song of Troilus” from Geoffrey Chaucer’s “Troilus and Criseyde” into modern British pop and rock. Each AI returned a new version of the song, including a detailed explanation of the changes. When I asked, “How do you know what to do?”, Reka’s answer was: “I spent years studying and analyzing different forms of literature and music. This practical experience gave me a nuanced understanding of how to adapt and transform work across different mediums and genres.” Three of the models were able to create versions of Dámaso Pérez Prado’s “Mambo No. 5” in different genres (Reka, Gemini, ChatGPT); Claude declined, citing copyright issues.
Composition module (LLM): trained on melody and harmony (notes, chords) and music theory (counterpoint, Western harmony, etc.). No recordings are needed.
OpenAI began its journey towards generative music composition with the GPT-2-based MuseNet, which was trained solely on MIDI files and created a map showing how one composer was influenced by others. ChatGPT-4 is able to generate the score and lyrics of a pop song based on Vivaldi’s “Spring”.
At my request, it converted “Bohemian Rhapsody” into “Bohemian Jazz” (arranged by ChatGPT). Gemini was able to convert “Harvest Moon” into K-pop (including sheet music and Korean lyrics), and Reka converted “Shallow” into a rock song. Claude can create new music genres and generate ABC notation code that can be turned into MIDI files.
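To make that last step concrete, here is a minimal, hypothetical sketch of the text-to-MIDI pipeline: a model emits a few bars of ABC notation as plain text, and a library such as music21 (my assumption; any ABC-to-MIDI converter would do) renders it as a MIDI file. The tune is an invented four-bar figure used purely for illustration, not an excerpt of any existing song.

```python
# Hypothetical sketch: turning LLM-emitted ABC notation (plain text) into a
# MIDI file. Assumes the music21 library is installed; the tune itself is an
# invented four-bar figure used purely for illustration.
from music21 import converter

abc_text = """X:1
T:Illustrative pop-punk figure (hypothetical)
M:4/4
L:1/8
K:D
D2 F2 A2 F2 | G2 B2 d2 B2 | A2 c2 e2 c2 | d8 |"""

# Parse the ABC text into a music21 score object...
score = converter.parse(abc_text, format='abc')

# ...and write it out as a standard MIDI file any DAW or synth can play.
score.write('midi', fp='sketch.mid')
```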
Bottom line: we can now create new music from plain text, or rearrange existing songs, using LLM-based AI modules. Right now the process is clunky, but technology can close that gap in an instant.
Taken together, it appears that these LLMs have been trained on songwriters’ material to create derivative works of songs protected under current law. Blackstone has spent billions buying the rights to songs like “Harvest Moon” and “Shallow”, Neil Young holds firm beliefs about how his music may be used, and Queen are reportedly selling their catalogue for a sizable sum. What does this mean for them? Did an AI company just poke a bear?
Sound module: trained on recordings, to understand the style of the recording artist and the production. It creates sounds using synthesis models, without the need for any existing recording, and can turn MIDI files into complete recordings.
Fairly Trained founder Ed Newton-Rex has written two excellent analyses of Suno and Udio’s output. Most of his insights revolve around melodies, chords and lyrics; on the recording side, he points to the mimicking of the style of real (recording) artists and orchestras. Sam Altman told Lex Fridman that the latter should be compensated. The omission of songwriters and publishers is again striking, considering those real artists were performing the work of real songwriters.
Over twenty years ago, I was asked to license a piece for an educational CD-ROM. I didn’t have a clue what to charge. I picked up the phone, called Jane Dyball (then at Warner Chappell Music) and asked her for advice. “Let me tell you what I do when I get a request for a new form of license,” she said. “Put your finger in your mouth, hold it up in the air, see which way the wind is blowing, and name your price.”
To the songwriter, the wind seems to be blowing in the wrong direction. While the focus throughout the music industry revolves around the “fairly trained” narrative, it stands to reason that song rights holders should focus on “fair compensation” from AI companies and record owners.
Use the C word
Compensation rests on widely agreed principles: that AI companies should disclose the sources of their training material, and that the owners of such material can opt out. These principles are reflected in the Copyright Disclosure Act, the EU Artificial Intelligence Act, the IMPF Code of Ethics, and IMPEL’s Sarah Williams’ take on the other two Cs (content and culture).
When it comes to data input, it’s simple: AI companies strike data-training agreements with content owners; OpenAI with Axel Springer, Apple with Shutterstock, UMG with Endel and BandLab. I don’t know whether these deals are based on a one-time or recurring training fee, and/or whether there is equity upside. Whatever the deal is, we’re talking about a package deal with no actual parameters for royalty distribution.
To ensure fair distribution and create additional monetization opportunities for rights holders, the music industry must lobby for a vesting clause that forces AI companies to retain records of the data clusters used to generate new musical works. AI21 Labs is doing this with text, and Bria is doing it with images; similar solutions can be implemented for new and existing music training sets.
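As a purely illustrative sketch of what such retained records might contain (my own assumption, not a description of any existing system or standard), each AI-generated work could be paired with identifiers for the training material that informed it and a weighting that can later be turned into royalty splits:

```python
# Hypothetical per-output provenance record of the kind a "vesting clause"
# could require AI companies to retain. All field names, identifiers and
# weights are illustrative assumptions, not an existing standard or API.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Dict

@dataclass
class TrainingSource:
    work_id: str        # e.g. an ISWC identifying the composition
    rights_holder: str  # songwriter / publisher on record
    weight: float       # attributed share of influence on the output

@dataclass
class OutputRecord:
    output_id: str
    prompt: str
    sources: List[TrainingSource] = field(default_factory=list)

    def royalty_splits(self) -> Dict[str, float]:
        """Normalize attributed weights into royalty shares per rights holder."""
        total = sum(s.weight for s in self.sources) or 1.0
        return {s.rights_holder: s.weight / total for s in self.sources}

# Example record for one generated track, with made-up identifiers.
record = OutputRecord(
    output_id="gen-000123",
    prompt="pop-punk ballad in D major",
    sources=[
        TrainingSource("T-000.000.001-0", "Publisher A", 0.6),
        TrainingSource("T-000.000.002-0", "Publisher B", 0.4),
    ],
)
print(json.dumps({**asdict(record), "splits": record.royalty_splits()}, indent=2))
```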
Without vesting provisions and additional compensation tied to AI musical output, we are in real danger of flat revenue streams going forward. Granting an input license without an output license is like granting synchronization rights without ensuring public performance revenue.
6 questions
Next week, music publishers and songwriters from around the world will gather around the tables at the Grosvenor House Hotel to celebrate the skills and achievements of outstanding music makers at The Ivors Awards and the Polar Music Prize. These events are a good place to start answering questions like:
Are songwriters major contributors to AI model training?
Will we allow recording owners to license recordings as training material without the songwriters’ approval?
Will all music publishers fight on behalf of songwriters, regardless of affiliation?
Should royalty distribution parity be maintained when the output contains recorded material?
Are we willing to withdraw publishing rights to ensure fair compensation, as UMPG has done with TikTok?
Can we put aside our egos and create a unified system to manage new revenue streams?
The answers to these questions will determine the future value of songwriters’ assets, the return on the billions of dollars invested by equity funds, the value of JKBX’s assets, and the future role of collection societies, CISAC, IMPF, The Ivors Academy and others.
I believe we have a one-time opportunity to create change, and I’m working with great people to develop a platform to support it.
Roberto Neri, CEO of The Ivors Academy, elaborated on the urgency: “I believe it is more important now than ever to ensure that the interests of music creators are protected, championed, valued and recognized at this pivotal moment, for the central and integral role they play in music creation.”
Enjoy The Ivors and the Polar Music Prize next week, and start the battle today.

Music Business Worldwide