There’s an emerging front in Big Tech’s ongoing war for the future of text-to-music artificial intelligence.
In January, an academic paper revealed that Google had been developing a tool called MusicLM, a language model that generates music from user text prompts. TechCrunch reported at the time that MusicLM was trained on a large-scale dataset consisting of over 280,000 hours of music. The paper, however, said the search giant had "no immediate plans" to release the tool to the public.
The company's position has since changed: Google says it is now eager to see how the AI-driven technology can "empower the creative process."
Across the broader landscape of AI and the arts, much of the current consternation centers on the idea that AI-generated content is inherently derivative, since these models "train" their ability to generate images, music, text and more by ingesting existing works.
In one recent example, a viral AI-generated "collaboration" between Drake and The Weeknd was removed from streaming services at UMG's request, given the uncanny vocal resemblance to the track's credited artists. MusicLM will reportedly sidestep similar controversy by declining prompts that ask it to generate music in the style of an existing artist.
As of today, MusicLM is publicly available in the AI Test Kitchen app on iOS as well as the web. You can sign up for the program's waitlist here.