When Will.i.am and Mercedes decided to rethink what it feels like to drive an electric car, they did not hire a film composer or a sound designer. They hired a technologist who used AI to break music apart into its component parts (drums, melody, vocals, synths) and then mapped those elements in real time to the signals coming from the car itself: acceleration, suspension, speed. The result was something entirely new. Not a playlist. A soundtrack generated live, shaped by the way you drive. That project, called Sound Drive, is a pretty good summary of where we are with AI and creativity right now.
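To make the idea concrete, here is a minimal sketch of how telemetry-driven music might work. This is purely illustrative, not the actual Sound Drive implementation: the signal names, ranges, and mapping rules are all assumptions made up for the example.

```python
# Illustrative sketch (NOT the real Sound Drive system): map hypothetical
# car telemetry to per-stem gain levels in an AI-separated mix.

from dataclasses import dataclass

@dataclass
class Telemetry:
    speed_kmh: float    # assumed range 0-250
    accel_g: float      # assumed range -1.0 (braking) to 1.0 (full throttle)
    suspension: float   # assumed range 0.0 (smooth road) to 1.0 (rough road)

def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))

def stem_levels(t: Telemetry) -> dict[str, float]:
    """Return a 0-1 gain for each separated stem, driven by telemetry."""
    return {
        # Faster driving brings the drums forward.
        "drums": clamp(t.speed_kmh / 250.0),
        # Hard acceleration swells the synth layer.
        "synth": clamp(0.3 + 0.7 * max(t.accel_g, 0.0)),
        # Vocals duck slightly on rough roads.
        "vocal": clamp(1.0 - 0.5 * t.suspension),
        # Melody stays as a constant bed.
        "melody": 0.8,
    }

levels = stem_levels(Telemetry(speed_kmh=125.0, accel_g=0.5, suspension=0.2))
```

A real system would feed levels like these into a low-latency audio engine on every telemetry update; the point here is only the shape of the mapping, driving data in, mix decisions out.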
One of the people behind it is Manon Dave, and he is one of the more interesting thinkers working at this intersection today. A software engineer who taught himself music production, a former ad-tech algorithm writer who ended up collaborating with artists like Snoop Dogg, Idris Elba and Hans Zimmer, Dave now holds the title of Head of Future World Design at BBC R&D, where he leads a team called BBC FWD. The mandate is to look around corners and figure out how the BBC’s century-long tradition of invention can reach entirely new audiences in the age of AI and immersive technology.
I recently sat down with Dave for a wide-ranging conversation on the state of AI in entertainment, and I came away more optimistic than ever, with a few important caveats.
The Synthesizer Was Synthetic Too
The anxiety around AI in the creative industries is real and understandable. But Dave puts it in a useful historical perspective. “I think technology for me has always been a kind of great enabler, something that levels the playing field a little bit,” he told me. He points to the synthesizer as a direct parallel. By its very name, it is something synthetic, something generative, something that arguably lacks organic origin. When it arrived, musicians worried it would make their skills redundant. Instead, it created entirely new genres, new aesthetics and new kinds of musicians.
The Auto-Tune comparison is even more telling. A decade ago, being caught using it was practically a scandal. Today, it is in every professional studio on earth. As Dave observed, if every track that used Auto-Tune had to carry a label saying so, the vast majority of popular music would wear that badge. And yet nobody stopped listening.
His view on AI music follows the same logic: “I think we’re in a moment right now, and I’m pretty sure that moment will pass, and I think all that will remain is the imagination and ingenuity of the artists that leveraged the tool.” That framing shifts the conversation away from replacement and toward leverage. The question is not whether AI will change creative work. It absolutely will. The question is who gets to use it well.
My own take is this: AI gives superpowers to people who are genuinely talented or deeply knowledgeable. The person with real creative vision or hard-won expertise can use these tools to work faster, explore further and produce at a scale that was previously impossible. But hand those same tools to someone who lacks that foundation, and what you get is what we call slop. Technically passable, utterly soulless. Dave made exactly the same point when I raised it: “To anybody who really is fundamentally a creative or is in these industries, they know they can’t leave that untouched.”
AI As A Creative Collaborator, Not Just A Tool
One of the most useful reframes in my conversation with Dave was his insistence on thinking about AI as a collaborator rather than simply a utility. He points out that the average modern pop song is written by between four and seven songwriters. Collaboration is already the norm. What AI does is give you a sounding board that is always available, endlessly patient and capable of offering a diverse range of options when you hit a creative wall.
“Creators, in particular early adopters, are using AI tools for thought starters, for overcoming writer’s block or punching through an imaginative block,” Dave explained. The workflow he describes is one where the human remains firmly in charge of taste, judgment and vision, with AI compressing the latency between idea and execution. More colors in the palette, as he put it, rather than a replacement for the painter.
The implications for how creative roles evolve are significant. Dave is optimistic that AI will ultimately return creatives to the part of the process they find most meaningful. “I think creatives will spend more time on the fun, imaginative, exploratory side of things because the idea generation and adaptation speed is drastically reduced.” The tedious finishing work, the mastering, the rendering, the iteration, gets compressed. The ideation expands.
What BBC FWD Is Actually Building
Dave’s role at the BBC puts him in an unusual position. BBC R&D has a remarkable track record of inventing things the rest of the world eventually takes for granted: digital broadcast standards, 4K HDR, formats that became open-source infrastructure for the entire industry. The organization has historically done the research, built the technology, and then watched others commercialize it. BBC FWD exists to change that equation by accelerating the path from invention to market, in exemplary ways, through internal and external partnerships.
One project Dave discussed is called Signals, a reimagination of Ceefax, the teletext service that was essentially the internet before the internet, delivering real-time data over an aerial while you watched television. The question BBC FWD is asking is what that looks like today, when data can be contextual, personalized and potentially backed by large language models. What if your television screen could surface relevant real-time information tied to what you are watching, tailored to you, without disrupting the communal experience of watching together?
Beyond that, the team is exploring immersive formats, AI-enhanced audio experiences, adaptive learning tools for younger audiences, and new on-ramps for the creator economy. Dave describes the BBC’s Introducing platform, which already lets any U.K. musician submit tracks for radio airplay, as a model worth scaling: “If we could create avenues, or what I like to call the on-ramp for U.K. creatives to actually push their work out into the market, offer them tools that are pre-vetted and safe, trained equitably, to actually enable them to level up their outputs and a platform where they can collaborate and share those outputs, that’s the kind of thing we are interested in building.”
He also described the Blue Room, a space within BBC R&D where the team gets hands-on with every significant new piece of hardware and software emerging from events like CES. The kind of playground where future trends get stress-tested before they become products.
The Fight For Fair Attribution
For all the optimism, Dave is clear-eyed about what still needs to be fixed, and this is where the conversation got most urgent for me. Attribution and consent in AI training data are not abstract ethical questions. They are the difference between a creative economy that remains sustainable and one that gradually hollows itself out.
Dave draws on his background as a musician to make the point concrete. When you co-write a song, nobody stops mid-session to divide up the royalty shares. You create, you finish, and then somewhere down the line, there is a formal process of registration and attribution. That process is manual, slow and imperfect, but it exists. In AI training data, there is no equivalent. “This simply doesn’t exist in the creative industries,” Dave said. “There’s no bread-crumbing of who did what.”
He is pragmatic about the fact that, to a large extent, the ship has already sailed. The models are trained. Retroactive unwinding of petabytes of data is not realistic. What matters now is what happens next: the equitable training of future models, fair remuneration for the content upon which AI learns, and transparent systems for attribution when AI-generated content draws on identifiable human creative work.
He pointed to the BBC’s involvement in the Coalition for Content Provenance and Authenticity (C2PA) as a genuine example of institutional leadership. C2PA is an open protocol that embeds traceable metadata into audio-visual content, allowing anyone to verify the source and whether something has been altered. It is the kind of infrastructure that needs to exist at scale before the creative industries can trust AI pipelines. “My stance, and I know the BBC’s stance is also, that we are uncompromising in the sense that we want to champion creatives first,” Dave said.
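The core mechanism behind provenance infrastructure like C2PA can be sketched in a few lines. This is a deliberately simplified illustration, not the actual C2PA manifest format: it shows only the underlying principle that a manifest cryptographically binds content to its declared source and edit history, so any alteration of the bytes becomes detectable.

```python
# Simplified illustration of content-provenance checking (NOT the real
# C2PA format): a manifest binds a content hash to a declared source and
# edit history, so altering the bytes breaks verification.

import hashlib

def make_manifest(content: bytes, source: str, actions: list[str]) -> dict:
    """Create a provenance record binding this exact content to its history."""
    return {
        "source": source,
        "actions": actions,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(content: bytes, manifest: dict) -> bool:
    """True only if the content is byte-identical to what the manifest describes."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

audio = b"...rendered audio bytes..."
manifest = make_manifest(audio, source="example-broadcaster", actions=["recorded", "mastered"])
```

The real C2PA specification goes much further: manifests are themselves cryptographically signed so a verifier can trust who made each claim, and each edit appends to a tamper-evident chain. But the hash-binding shown above is the load-bearing idea.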
The Road Ahead
Looking out ten years, Dave sees entertainment becoming radically more adaptive and personal. The shift he anticipates goes beyond personalized recommendations to something more like a genuinely bespoke operating environment, one shaped continuously by what you share with it, designed around your needs rather than built to a universal template. The interface itself becomes personalized, acquiring something like a character.
That vision comes with a responsibility that Dave takes seriously: the people designing these systems have to build them to serve users rather than extract from them. “One that hopefully is helpful to you rather than hindering you, one that hopefully shapes itself around what benefits you,” was how he put it.
What struck me most across our conversation was the consistency of Dave’s underlying philosophy: technology has always been a democratizing force for people who are willing to engage with it seriously and playfully. He cited a phrase from his family’s Indian heritage, “ram,” which translates roughly as “play, learn.” That approach, curious, hands-on, unafraid to break things, is how he has navigated every technological shift in his career. It is also, he argued, exactly the mindset that young creatives need to bring to AI right now.
His message to the graduating class of performing artists, whom he addressed at Trinity Laban, was not that AI would replace them. It was that the ones who thrive will be those who combine their 10,000 hours of craft with fearlessness about change. That combination, deep expertise and technological curiosity, is still very hard to replicate. And in a world where anyone can generate passable content at the push of a button, it is also the thing that will matter most.