The Velvet Sundown and the Band That Wasn't There

Image courtesy of the Velvet Sundown

In early July, a mellow folk-rock track called "Dust on the Wind" reached the No. 1 spot on Spotify’s Viral 50 chart in Britain, Norway, and Sweden. The song’s ’60s-inspired harmonies and anti-war lyrics struck a chord with listeners, helping the band—a newcomer called The Velvet Sundown—amass over a million streams in weeks.

With a sepia-toned album cover and a lineup of four smiling “members,” The Velvet Sundown looked every bit like a real classic-rock revival act. Fans had no reason to suspect otherwise – until the group revealed that none of its members actually exist.

In a statement on its Spotify page, the band admitted it was “composed, voiced and visualized with the support of artificial intelligence,” guided by a behind-the-scenes human creative team. 

“Not quite human. Not quite machine. The Velvet Sundown lives somewhere in between,” the bio announced. 

The admission confirmed swirling suspicions. In reality, The Velvet Sundown’s retro sound was an elaborate simulation. The band’s creators—still anonymous—had used AI tools to generate everything from the musicians’ likenesses to their music and lyrics. What seemed like a flesh-and-blood quartet of young rockers was in fact a clever mirage. The project’s goal, according to its creators, was an “artistic provocation” meant to “challenge the boundaries of authorship, identity and the future of music itself in the age of AI.”

The Velvet Sundown’s viral rise and sudden unmasking marked a watershed moment. Yes, computers have been composing music for decades: experiments date back to at least 1957’s Illiac Suite, a string quartet piece generated on a University of Illinois computer. 

But never before had an AI-generated band topped popular charts and won over masses of everyday listeners. The achievement was equal parts technological feat and cultural litmus test. If a catchy song can capture millions of streams without a human performer in the studio or on stage, what does that say about the nature of creativity and the connection between artist and audience?

The curious success of this phantom band forces us to examine a newly urgent question: when the machine sings, are we still listening with the same hearts and ears? Or does something ineffable change in the experience?

A Tool in Visual Arts, An Author in Music?

Artificial intelligence has been creeping into creative fields for years, but its role varies widely from medium to medium. In illustration, design and animation, AI is most often treated as a sophisticated tool rather than an autonomous artist. 

Graphic designers might use AI image generators like DALL-E or Midjourney to spark ideas or fill in background details, much as they use Photoshop’s filters; the creative vision still ultimately belongs to the human directing the process.

In animation, studios have started experimenting with AI to assist in tedious tasks: for instance, generating in-between frames or background art. Many animators and illustrators view generative AI as a collaborator or labor-saving device, not a replacement for human imagination. In these visual fields, the technology tends to function like a fancy paintbrush or camera—powerful in skilled hands, but not itself the artist.

Music, however, is testing a different paradigm. The Velvet Sundown was not marketed as “AI-assisted” or “computer-augmented” music. It was simply presented as music—and embraced by listeners on those terms until the truth came out. 

This highlights a key contrast: we are now seeing AI move from behind-the-scenes helper to center-stage performer. An AI can generate a painting or a logo, but rarely would the public mistake the AI program itself as a celebrated painter or designer. 

In popular music, by contrast, an AI-driven act can emerge and be consumed just as any human artist would be. The role of AI is shifting from mere instrument to apparent author, raising new questions about credit, creativity and the very definition of an artist.

Why is it that a machine-generated picture might be seen as a nifty novelty, while a machine-generated song—especially one with vocals—triggers a deeper unease? 

Part of the answer lies in how personal and embodied music is as an art form. Listeners form strong emotional bonds with songs and often with the people singing them. We’re used to thinking of songs as expressions of a singer’s soul, or at least a reflection of some human experience. 

An image can certainly be personal to its creator, but a painting or graphic doesn’t literally have a voice. Music, especially a song with vocals, arrives imbued with the human qualities of the performer: breath, timbre, phrasing, the slight strain or rasp that conveys feeling. In improvisational genres like jazz, music lives in the moment of performance. Even in recorded pop music, where multiple takes and digital edits are the norm, fans often take comfort knowing there’s a human behind the microphone, channeling a lived experience or mood.

The Intimacy of the Human Voice

The human voice, in particular, makes music feel uniquely intimate. Think of the fragile quiver in a singer’s voice during a ballad, or the passionate crack when someone reaches for a high note and barely grabs it. This kind of expressive nuance, born of a performer’s body and biography, is hard to fake. An AI voice can hold a pitch perfectly and even add stylistic vibrato, but does it mean anything when it sighs or growls? Without a true biography or inner life, a synthetic singer can only approximate the emotional gravity that a human voice carries by default. 

Music is also a communal art in ways visual media typically are not. We gather at concerts to share in the energy of performers and fellow fans; we bond over mixtapes and anthems that soundtrack our lives. Songs often carry autobiographical significance for listeners and artists alike. When people connect to a song, they frequently connect to the story of the person singing it, often the heartbreak that inspired it, or the culture that shaped it. 

With a purely AI-generated song, the backstory evaporates or turns out to be a fiction. The Velvet Sundown’s creators invented an entire faux band identity to package their music, precisely because audiences gravitate toward personalities and context. For a month, fans related to The Velvet Sundown as they would any new artist. All of that relational framework was an illusion generated by clever album art and nostalgic sonic style. 

Once that illusion is broken, listeners are left in an unfamiliar position: enjoying the sound of a song while knowing there is no genuine human story or presence behind it. It’s a cognitive dissonance that many people aren’t sure how to resolve.

Authenticity and the Emotional Reaction

Reactions to The Velvet Sundown’s reveal have ranged from fascinated to furious. On social media and comment boards, some listeners expressed a sense of betrayal, as if they had been tricked into an emotional response under false pretenses. 

“I assumed the song was performed by humans,” said one listener from Manchester, who discovered the band via a Spotify algorithm. After learning the truth, he argued that AI-generated tracks should be clearly labeled, lest they take “food out of people’s mouths who are trying to make it” in music.

His fear, that streaming platforms might quietly slip more royalty-free, computer-made songs into playlists at the expense of human artists’ exposure and income, is shared by many musicians. 

Industry groups and artists have been pushing for guardrails: in April 2024, hundreds of stars, including the likes of Billie Eilish, Nicki Minaj and Jon Bon Jovi, signed an open letter calling for limits on AI’s encroachment into music. And in one striking case this past year, Céline Dion’s team had to publicly denounce an AI-generated track that mimicked her voice, warning fans “these recordings are fake and not approved.” 

Yet not everyone reacted negatively. Some listeners responded with a shrug or even curiosity when they learned a song was AI-made. After all, if the music moved them before they knew its origin, was it any less enjoyable afterward? This perspective treats AI music agnostically: judge it by how it sounds and feels, not by how it was made. To listeners like this, a catchy melody is a catchy melody, whether penned by a human, an AI, or some combination. 

Empirically, there is evidence that knowing a piece of music was created by AI can change our emotional reception. In experiments, listeners who are told a song was algorithmically composed tend to enjoy it less than those who hear the same piece but believe it was crafted by a person. This bias speaks to our deep-seated skepticism about whether a machine can truly feel and thereby imbue art with feeling. Music is often described as a language of emotion, so if we suspect the “speaker” has no emotions, the whole experience can feel hollow in hindsight. 

A recent survey of young consumers found that 67% would change their opinion of a song if they discovered the vocals were AI-generated. Notably, far fewer said the same about AI-generated instrumentals. The voice, that most human of instruments, is clearly what people cling to as the linchpin of authenticity and connection. We forgive drum machines and synthesizers, indeed those have been staples in music for decades, but a disembodied artificial singer treads on something sacred. It punctures the illusion of a person-to-person connection that vocals usually provide. 

Performance, Imperfection and the Human Touch

So what, if anything, can AI not replicate? For many musicians, the reassuring answer is: plenty. Live performance is one obvious realm. An AI band like The Velvet Sundown can flood streaming platforms with studio-polished songs, but it cannot (at least for now) get up on a stage at Glastonbury and improvise with the energy of a roaring crowd. 

The nuance of musicians feeding off each other’s cues in real time, adjusting a guitar solo on the fly or extending a chorus because the audience is swaying along, is beyond the capability of current algorithms. In other words, the spontaneity and two-way interaction that define so much of musical performance are tied to consciousness and true creativity—qualities AIs do not yet possess. 

This suggests that, in a world of machine-made perfection, listeners could end up valuing the rougher human elements even more: the irregular, the idiosyncratic, the beautifully flawed. The rush of tempo when a drummer’s adrenaline kicks in. Much like the resurgence of vinyl records and analog synthesizers as a reaction to digital slickness, the rise of AI music might inspire a new appreciation for raw, unprocessed human performance.

Reframing the Role of the Artist

In the visual arts, despite fears of automation, the human creator’s role has often been reaffirmed once the initial hype settles. Photographers didn’t disappear when Photoshop arrived; illustrators are still thriving amid the spread of AI image generators, though they have had to adapt. We may see a similar reframing in music. 

Rather than wholesale replacement of musicians, AI might change what it means to be a musician. A songwriter might use an AI system to generate dozens of melodies and choose the best ones to weave into a composition, acting more as a curator. A producer might treat an AI model as an “idea partner” that jams out chord progressions on demand. 

In these scenarios, the human artist is still very much in charge, steering the creative ship and making the meaningful choices. AI becomes a sophisticated extension of their toolkit—much like an electric guitar once was, or a synthesizer, or sampling software. 

And what about listeners? How might our own behavior influence the trajectory of AI in music? If AI-generated songs continue to go viral and we continue to stream them, market forces will drive more of the same. 

On the other hand, if there’s a collective pushback, such as a demand for certified human-made music the way some seek out “handmade” goods, that could create a countervailing market for authenticity. The future likely won’t be so binary; more plausibly, AI will be embraced in some corners of music (say, background instrumental scores, karaoke tracks or experimental genres) while human artistry doubles down in others (intimate singer-songwriter folk, live jam bands, etc.). 

Rather than an on/off switch, the human role in music may become a sliding scale, with some songs being almost 100% AI-generated, others purely human and many creations landing somewhere in between.

Redefining Artistry in an AI Era

As The Velvet Sundown’s strange odyssey shows, we are entering an era when a machine-made work can evoke real emotions before we even realize a machine made it. This forces a reckoning with our assumptions about art. Is the value of a song inherent in the arrangement of notes and words, or does it also depend on the authenticity of the creator’s experience? 

If a ballad gives you goosebumps and brings tears to your eyes, does it truly matter whether the singer has a heartbeat or not? We have long accepted that an actor can make us cry by reciting lines they didn’t write, or that a digital animation can move us even though the characters aren’t real. In those cases, we mentally credit the unseen human creators. But when an AI stands in as the creator, our trust wavers. We start to ask: Where is the “soul” in this art? And if we felt something, were we communing with the intentions of an artist, or merely being played by an algorithm? 

These questions have no easy answer. In fact, they lead to other questions about what we expect from art and why we seek human connection through creative media in the first place. 

For now, the rise of AI-generated music is prompting both wonder and wariness. On one hand, it democratizes creation and offers exciting new hybrid genres; on the other, it presses on some of our most cherished beliefs about art being a fundamentally human endeavor. 

The next time you find yourself falling in love with a new song on the radio or a curated playlist, you might pause to wonder: who (or what) made this? And if the answer turns out to be a piece of software, will you hear the music any differently? Our collective answer to that question will shape the future of music. As we stand at this unusual intersection of creativity and technology, we are, in a sense, all part of the experiment. The band may be artificial, but the feelings it stirs are real. How we reconcile that truth will echo across our culture for years to come.

Gabriella Bock

Editor-in-Chief at HYVEMIND
