Meta Platforms, the owner of Facebook and Instagram, is one of a growing number of competitors in the field of artificial intelligence music generation, and on Tuesday (June 18), the company’s artificial intelligence research arm announced its latest progress in this field.
Meta’s Fundamental Artificial Intelligence Research (FAIR) team gave the world its first look at JASCO, a tool that can take a chord or beat and convert it into a complete music track.
Meta says the feature will give creators more control over the output of the AI music tool.
JASCO stands for “Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation,” and is comparable in quality to other AI tools “while allowing for better and more versatile control over the generated music,” Meta FAIR said in a blog post.
To demonstrate JASCO’s capabilities, Meta has released a page of sample clips in which simple public domain melodies are transformed into full music tracks.
For example, a melody from Maurice Ravel’s Boléro became “an ’80s pop song” and “an accordion and acoustic guitar ballad,” while Tchaikovsky’s Swan Lake became a “traditional Chinese piece with guzheng, percussion, and bamboo flute” and an “R&B track with deep bass, electronic drums, and lead trumpet.”
Meta has made much of its artificial intelligence research publicly available. The company published a research paper outlining its work on JASCO, and later this month it plans to release the inference code under an MIT license and the pretrained JASCO model under a Creative Commons license. This means other AI developers will be able to use the model to build their own AI tools.
“As innovation in the field continues to advance at a rapid pace, we believe collaboration with the global AI community is more important than ever,” Meta FAIR said.
The latest innovation comes a year after Meta released MusicGen, a text-to-audio generator that creates 12-second tracks based on simple text prompts.
The tool was trained on 20,000 hours of music licensed by Meta for training the AI, as well as 390,000 purely instrumental tracks from Shutterstock and Pond5.
MusicGen can also take melodies as input, which some have said makes it the first music AI tool capable of turning melodies into fully developed songs.
Meta’s JASCO comes on the heels of several innovations in the field of artificial intelligence music announced in recent days.
On the same day Meta launched JASCO, Google’s artificial intelligence laboratory, DeepMind, showcased a new video-to-audio (V2A) tool that can create soundtracks for videos. Users can enter text prompts telling the tool what sound they want for a video, or the tool can generate sound on its own based on what the video shows.
DeepMind describes this as a key step toward using AI tools to create complete video content, since most AI video generators produce only silent footage.
Last week, Stability AI, the company behind the popular AI art generator Stable Diffusion, released Stable Audio Open, a free, open-source model for creating audio clips up to 47 seconds long.
Rather than generating songs, the tool is designed to create sounds that can be used in songs or other applications, and it allows users to fine-tune the model with their own custom audio data. For example, a drummer could train the model on recordings of their own drumming to generate new, unique beats in their own style.
These types of AI tools stand in stark contrast to AI music platforms such as Udio and Suno, which create entire tracks based solely on text prompts.
Such tools typically need to be trained on large amounts of material, and they have become a focus of the music industry amid suspicions that they are being trained on copyrighted music without authorization.