Ever set off for the gym with your headphones on and thought, I wish this song was better suited to my spin class? In one vision of where artificial intelligence (AI) is heading, you will soon be able to tailor an instant remix of the track to suit your mood or activity. And not just once, but to an almost infinite degree.
“If you think of a song as n-dimensional space, in whichever direction you move, the song changes,” explained Siavash Mahdavi, founder of AI Music, at today’s CogX conference. He demonstrated the concept with a Dream Dragons track that became house, future bass, acoustic, electronica and trap depending on how far the dimensional controls were adjusted, all while using the same underlying song.
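Mahdavi's "n-dimensional space" can be pictured as a vector of style parameters, with a remix produced by moving the song's point toward a genre anchor. The axes, anchor values and function below are purely illustrative assumptions, not AI Music's actual system:

```python
# Toy sketch of a song as a point in style space: each axis is a style
# parameter (tempo, percussion density, bass weight, ...). Moving the point
# part of the way toward a genre's anchor yields an in-between remix.
# All axes and values are invented for illustration.

def interpolate(song, genre_anchor, amount):
    """Move `amount` (0.0-1.0) of the way from `song` toward `genre_anchor`."""
    return [s + amount * (g - s) for s, g in zip(song, genre_anchor)]

# Hypothetical 3-axis space: [tempo_bpm, percussion_density, bass_weight]
original = [120.0, 0.4, 0.5]
house    = [124.0, 0.8, 0.7]   # an assumed "house" anchor point

halfway = interpolate(original, house, 0.5)   # a remix halfway toward house
```

In a real system each dimension would drive audio rendering rather than a bare number, but the geometry of "move in a direction, the song changes" is the same.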
The idea occurred to Mahdavi after he identified a Tom Odell track using Shazam, only to discover eight months later that it was actually a remix, one he much preferred to the original. That set him thinking about applying AI to deliver the ultimate in personalised soundtracking.
“The medium in which music is received may have changed, but what we do with it is the same - one-dimensional consumption. I wanted to look at dynamic co-creation, shape-changing music for the listener so everybody gets a different version of the song tailored to their individual preference,” he said.
Mahdavi’s background is in applying AI and 3D printing to create personalised and even customised outputs, such as a training-shoe project for Under Armour or facial reconstruction for a victim of Ukrainian gangsters. He sold his former business to Autodesk, which has developed it as Dreamcatcher; it now operates in an augmented-intelligence design space, supporting human interaction with AI.
If the idea of music morphing to fit everybody sounds like a radical decentring of the author, however, Mahdavi noted that there are parameters. “The artist can control how much listeners can customise their song. They can carve out a space to allow or restrict changes,” he said.
Across the media and entertainment industry, the idea of AI replacing humans has become a hot topic, with everybody from journalists to film-makers both experimenting with AI as a productivity tool and looking over their shoulders at its potential to make them redundant. A general sense of the current timeline is that, while augmented intelligence is here and now, general AI that does human jobs better is still 20 to 50 years off.
At the BBC, experiments with machine learning (ML) are well underway to find how it can be harnessed as a productivity tool. “We are training an ML speech model on video feeds, turning them into text and allowing users to search for specific clips,” explained the corporation’s chief technology and product officer, Matthew Postgate.
As he joked, “most TV is made up of old TV” which is a very labour-intensive process. By allowing content to be found through text search, the need to scroll through hours of video is removed. Remarkably, a rough edit can even be assembled using the text interface. The model is being refined to ensure it can accurately cope with accents.
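The search Postgate describes reduces, at its simplest, to matching a query against timestamped transcript segments and returning the matching clips' time ranges. The segment format and field names below are assumptions for illustration, not the BBC's actual system:

```python
# Minimal sketch of text search over a speech-to-text transcript, assuming
# each segment carries the start/end timestamps an ASR model would emit.
# Segment structure and content are invented for illustration.

def find_clips(segments, query):
    """Return (start, end) times of segments whose text contains `query`."""
    q = query.lower()
    return [(s["start"], s["end"]) for s in segments if q in s["text"].lower()]

transcript = [
    {"start": 0.0,  "end": 4.2,  "text": "Welcome back to the studio"},
    {"start": 4.2,  "end": 9.8,  "text": "Here are tonight's headlines"},
    {"start": 9.8,  "end": 15.0, "text": "The headlines again at ten"},
]

hits = find_clips(transcript, "headlines")   # clips mentioning "headlines"
```

A rough edit is then just a matter of concatenating the video at those time ranges, which is why an accurate transcript makes the text interface so powerful.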
It is part of a move towards object-based broadcasting in which video, audio and concepts become objects that can be transmitted and switched around at will. “It is more like a games engine than video and it opens up the possibility of a personalised experience,” said Postgate.
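Object-based broadcasting can be sketched as a programme decomposed into tagged objects, with each viewer's client assembling its own rendition. The object kinds and selection rules here are assumptions for illustration only:

```python
# Toy sketch of object-based media: the programme is a set of tagged objects
# and each client picks the ones matching the viewer's preferences, rather
# than receiving one fixed video stream. All objects here are invented.

OBJECTS = [
    {"kind": "video",    "id": "main_feed"},
    {"kind": "audio",    "id": "commentary_en", "lang": "en"},
    {"kind": "audio",    "id": "commentary_cy", "lang": "cy"},
    {"kind": "subtitle", "id": "subs_en",       "lang": "en"},
]

def assemble(objects, lang, want_subtitles):
    """Select the objects making up one viewer's personalised rendition."""
    chosen = [o for o in objects if o["kind"] == "video"]
    chosen += [o for o in objects if o["kind"] == "audio" and o["lang"] == lang]
    if want_subtitles:
        chosen += [o for o in objects
                   if o["kind"] == "subtitle" and o["lang"] == lang]
    return [o["id"] for o in chosen]
```

This is what makes the approach "more like a games engine than video": the rendition is composed on the client, per viewer, instead of being baked in at the broadcaster.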
Extrapolating from this shift, he pointed to other labour-intensive tasks which could be done by ML, such as fact-checking or spotting that a boom mike is in shot. “That gets expensive because you have to go back to the location you hired and reshoot,” he said.
But when it comes to the craft of programme making, AI and ML are still a long way from reaching the bar currently set by humans. To illustrate the point, Postgate showed a highlights video from the London 2012 Olympics. “The cuts in time to the music, the nuances in the story, the pace, emotions and context - those are very complex and subtle and have been done to elicit emotion,” he pointed out. Right now, AI is a long way from matching those very human capabilities, as the morning’s demo of a robot called Sophia revealed all too clearly. The machines may be getting smarter, but they still lack timing.