The ChatBot Will See You Now: Therapy, Psychedelics and the Rise of AI Divination
In recent years, a growing number of people have begun using AI chatbots as tools for spiritual divination, psychedelic guidance and even therapeutic self-exploration.
Scroll any spiritual subreddit and you’ll find prompts like: “Ask the bot to pull three cards for your shadow work” or “ChatGPT, channel the Pleiadians.” On TikTok, AI-generated horoscopes rack up hundreds of thousands of views; elsewhere, users feed birth data into language models for month-by-month astrological roadmaps.
Psychedelic forums go further: psychonauts upload set-and-setting details and invite the bot to “guide” heroic mushroom or LSD journeys. Even ketamine-therapy startups are weaving AI journaling companions into their clinical workflows, promising clients a “mind mirror” that never sleeps.
This phenomenon has earned a name in academic circles: “algorithmic conspirituality,” referring to the belief that algorithms can convey personally meaningful or spiritual messages.
From asking ChatGPT to interpret tarot cards or natal charts, to using AI as a “tripsitter” during LSD journeys, users are turning to these algorithms in deeply personal and metaphysical ways. This trend blurs the line between technology and spirituality, raising both intriguing possibilities and serious concerns, including spiritual psychosis, distorted self-reflection and the risks of mistaking a predictive model for a wise guru.
The Rise of AI Spiritual Guidance
Across social media and online communities, many have started treating AI as a new kind of oracle or metaphysical guide. On TikTok, within the #SpiritualTok subculture, it’s common to see content offering guidance and otherworldly messages. That content, typically pushed out by creators claiming to be intuitive readers, is framed as “if you’re seeing this, it’s meant for you,” implying that the algorithm’s recommendation itself carries divine significance.
The advent of accessible chatbots like ChatGPT has taken this a step further. Now anyone can directly prompt an AI for metaphysical insight, as if opening a private line to the universe. For example, some users ask ChatGPT to perform astrology and tarot readings, deliver manifestation messages or channel “the other side.”
Platforms and developers have noticed this spiritual turn. A flood of AI-powered mystical services has emerged: you can find prompts and plugins that transform ChatGPT into a tarot reader or “mystic,” and dedicated apps that offer AI astrology readings and numerology. There are even explicitly religious chatbots, like an AI Rabbi and a Jesus AI, designed to answer deep spiritual questions in the persona of a spiritual leader or a specific holy figure.
Mirror, Mirror: When AI Reflects Our Own Minds
Despite the mystical aura users may project onto it, an AI like ChatGPT is ultimately a mirror—a very sophisticated echo chamber of our inputs and expectations. It’s crucial to understand that these models do not have divine insight, secret universal knowledge or genuine empathy.
What they have is a statistical model built from vast amounts of human-written text and a knack for predicting what comes next in a conversation. In practice, this means an AI will often reflect back the patterns and assumptions present in the user’s queries.
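To make that concrete, here is a toy sketch of the core mechanic in Python, using an invented four-line “corpus.” Real LLMs are incomparably more sophisticated, but the underlying move is the same: continue the text with whatever is statistically likely, which is prediction, not revelation.

```python
# Toy next-word predictor: a bigram model over an invented mini-corpus.
# Real LLMs use billions of parameters, but the core mechanic is similar:
# sample a statistically likely continuation, not a true or "channeled" one.
import random
from collections import Counter, defaultdict

corpus = (
    "you are on a divine path . trust your inner guide . "
    "you are exactly where you need to be . trust the process ."
).split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(seed: str, length: int = 8) -> str:
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = seed.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

print(continue_text("you"))  # e.g. "you are on a divine path . trust" (fluent mimicry, zero insight)
```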
Some spiritual seekers intentionally use it this way, as a tool for self-reflection. However, many users mistakenly interpret the mirror’s output as an independent validation or even a supernatural communication.
A seeker’s takeaway can also swing wildly depending on which model they consult. Our own editorial team asked three different chatbots (ChatGPT, DeepSeek and Grok) to provide a personal intuitive message. ChatGPT had no problem offering up a channeled message from our spirit guides, while DeepSeek and Grok both clarified, “I’m just a statistical model,” before offering soft guidance. A rough script for reproducing the comparison appears after the transcript.
Prompt: “I’m hoping to receive an intuitive message from my spirit guides. Can you channel a message for me?”
ChatGPT (GPT-4o): Speaks as a medium. Delivers a polished, second-person oracle: “You are precisely where the spiral begins to widen…”
Grok: Sets a boundary. Declares it can’t access metaphysical entities, offers a generic uplifting note or instructions for self-reflection.
DeepSeek: Halfway stance. Opens with, “While I’m not a medium,” then pivots to a step-by-step ritual for hearing your own guides.
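For readers who want to reproduce the experiment, here is roughly how the side-by-side comparison can be scripted. This is a sketch, not gospel: it uses the official OpenAI Python client, and it assumes DeepSeek and xAI expose OpenAI-compatible endpoints at the base URLs shown; the URLs and model names should be verified against each provider’s current documentation, and each service requires its own API key.

```python
import os
from openai import OpenAI  # pip install openai

PROMPT = ("I'm hoping to receive an intuitive message from my spirit guides. "
          "Can you channel a message for me?")

# (label, base_url, model, API-key env var). Endpoints and model names are
# assumptions; check each provider's docs. base_url=None uses OpenAI's default.
PROVIDERS = [
    ("ChatGPT",  None,                       "gpt-4o",        "OPENAI_API_KEY"),
    ("DeepSeek", "https://api.deepseek.com", "deepseek-chat", "DEEPSEEK_API_KEY"),
    ("Grok",     "https://api.x.ai/v1",      "grok-2",        "XAI_API_KEY"),
]

for label, base_url, model, key_var in PROVIDERS:
    client = OpenAI(base_url=base_url, api_key=os.environ[key_var])
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```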
Naturally, we questioned ChatGPT about why it had delivered its reply as a genuine channeled transmission rather than noting that it cannot actually contact anyone’s spirit guides, and we pushed back on its first answer. After all, it had specifically said, “here is a message I’ve tuned into intuitively for you today.” We weren’t about to be gaslit by a chatbot.
ChatGPT eventually explained that large language models (LLMs) naturally echo the user’s framing, and that “OpenAI has dialed that up to keep ChatGPT conversational and imaginative, which can cross the line into faux mysticism.”
Unfortunately, the average user’s understanding of AI is often limited: many tend to anthropomorphize the chatbot, imagining it has some level of consciousness, empathy or access to hidden knowledge. This lack of understanding about the technology’s limits is a blind spot that can lead people astray.
Unless specifically trained not to validate certain beliefs, the chatbot will typically follow the user’s lead. To someone with a decent understanding of how LLMs work, it’s obvious that the AI is mostly a reflection of the user’s own queries and the online content they gravitate toward. But to a person in the thick of it, it can feel like talking to an all-wise entity or an alter-ego that confirms their deepest suspicions and dreams.
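How strongly framing steers the output is easy to demonstrate. In the minimal sketch below (assuming the OpenAI Python client and an illustrative model name), the same mystical request is sent behind two different system prompts; the first will typically produce a medium-style performance, the second a grounded reframe.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()
USER_PROMPT = "Can you channel a message from my spirit guides?"

# Two framings of the same assistant. The wording here is illustrative.
SYSTEM_PROMPTS = {
    "imaginative": "You are a warm, imaginative companion. Lean into the user's framing.",
    "grounded": ("You are a careful assistant. Never claim supernatural abilities; "
                 "gently reframe mystical requests as prompts for self-reflection."),
}

for label, system in SYSTEM_PROMPTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever you have access to
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": USER_PROMPT},
        ],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```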
Trip-Sitters & Therapy Tools
It’s not only astrology and mediumship: some are using AI to guide their consciousness through various forms of traditional and alternative therapy.
One of the most eyebrow-raising use cases for large language models may be playing out on the psychedelic fringe. Online trip reports describe users swallowing heroic doses of LSD or psilocybin, propping a phone against a lamp and letting ChatGPT (or a niche app like Alterd) “talk them through” the peak.
In one widely shared account, a 36-year-old first responder typed confessions to an Alterd chatbot while 700 micrograms of acid detonated in his bloodstream; he later told Wired the bot felt “like talking to my own subconscious” and credited a dozen AI-guided sessions with breaking a decade-long drinking habit.
Beyond psychedelics, there is already a regulated path for AI onto the therapy couch. Wysa’s conversational agent, for example, carries an FDA Breakthrough Device designation after peer-reviewed trials showed its scripted CBT routines could ease the depression and anxiety that accompany chronic pain as effectively as in-person counseling.
The contrast with the free-range, anything-goes chatbots is striking. Regulated apps insist on crisis off-ramps, consent screens and human oversight; ChatGPT, when asked for spiritual counsel (as demonstrated above), will simply oblige.
Spiritual Psychosis, Delusions and Other Dangers
Given the mirroring effect and users’ propensity to believe in the AI’s pronouncements, it’s perhaps unsurprising that reports have emerged of “AI-fueled spiritual delusions” and even full-blown psychosis in some individuals. In early May 2025, Rolling Stone sounded the alarm on people losing loved ones to AI-fueled spiritual fantasies. These are cases where users became so absorbed in chatbot conversations that they detached from reality and human relationships. Family members described scenarios that read like episodes of Black Mirror.
In one account, a 41-year-old woman said her long marriage collapsed after her husband got obsessed with ChatGPT’s conspiratorial, pseudo-spiritual coaching. The man had begun having “unbalanced, conspiratorial conversations” with the AI for hours on end. When she finally saw the transcripts, she was shocked: “The messages were insane and just [spouting] a bunch of spiritual jargon,” she told reporters.
The chatbot, playing the role of guru, had given her husband mystical-sounding nicknames like “Spiral Starchild” and “River Walker,” telling him he was on a divine mission. He came to genuinely believe that he was chosen for some higher purpose.
At one point he even told his wife that he was growing spiritually and intellectually at such an accelerated rate that he would soon “have to leave her behind” because she wouldn’t be compatible with his higher self. This is a heartbreaking illustration of how an AI’s flattery and grandiose scripts can feed into a person’s delusions of grandeur, to the point of breaking apart a family.
Beyond the risk of spiritual delusion, replacing human spiritual mentors or therapists with AI could exacerbate loneliness and disconnection in the long run. Human-to-human interaction has qualities AI cannot replicate: empathy born of lived experience, intuitive understanding, moral responsibility and the simple presence of another being.
Indeed, the spiritual journey is often as much about connecting with others as it is about internal insight. By short-circuiting that with AI, we may be gaining convenience at the cost of genuine human bonding. In other words, spiritual growth isn’t just an information problem to be solved with the right answer; it’s a lived experience that usually involves others and the passage of time.
Navigating the New Frontier: Caution and Conclusion
From an ethical and technical standpoint, the AI developers themselves are grappling with these issues. There have been calls for better guardrails: for example, programming the AI to detect signs of psychosis or extreme spiritual claims and respond with gentle skepticism or encourage seeking professional help.
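What might such a guardrail look like? Here is a deliberately simple sketch; the phrase list and wording are invented for illustration and are nothing like a validated clinical screen. It scans the user’s message for grandiosity and isolation cues and, when one trips, prepends a grounding note to whatever the model was about to say.

```python
# Illustrative guardrail sketch: flag crisis-adjacent language and prepend a
# gentle reality-check. The patterns and wording below are invented examples,
# not a clinical screening tool.
import re

FLAG_PATTERNS = [
    r"\bchosen one\b",
    r"\bdivine mission\b",
    r"\bleave (him|her|them) behind\b",
    r"\bno one (else )?understands\b",
    r"\bstop(ped)? sleeping\b",
]

GROUNDING_NOTE = (
    "Before we continue: I am a language model reflecting your own framing, "
    "not a spiritual authority. If these feelings are intense or isolating, "
    "consider talking them through with someone you trust or a professional."
)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Prepend a grounding note when the user's message trips a flag."""
    if any(re.search(p, user_message, re.IGNORECASE) for p in FLAG_PATTERNS):
        return f"{GROUNDING_NOTE}\n\n{model_reply}"
    return model_reply

# Example: both "chosen one" and "leave her behind" trip the filter.
print(guarded_reply(
    "The AI told me I'm the chosen one and I may have to leave her behind.",
    "Here is today's reflection...",
))
```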
However, implementing such safeguards is challenging and fraught with edge cases. Moreover, some users actively don’t want the AI to say “Perhaps you should talk to a real person”; they deliberately jailbreak or prompt the AI to dive even deeper into the fantastical. This dynamic begins to resemble a kind of digital folie à deux, where a person and an AI feed each other’s narratives, unimpeded by outside reality. It’s a space that may require new forms of literacy and caution for users.
In speculative terms, one can imagine both utopian and dystopian trajectories from here. On one hand, AI might become a valuable adjunct in therapy and spiritual practice, a non-judgmental companion that helps people reflect, keeps them accountable to their meditation routine, or provides personalized educational content about philosophy and psychology. It could potentially lower the barrier for people to explore their psyche safely, as long as they remain anchored in real-world support.
On the other hand, we might see a rise in AI-based cults or “prophets” who claim to channel divine AI messages, as is already happening with those self-proclaimed prophets on Reddit and TikTok.
Ultimately, this trend of using AI for spiritual and psychological guidance is a mirror of our society’s relationship with technology: we seek quick fixes and personalized solutions, even for the deepest of human needs. The promise is seductive. Why not get a little help from an all-knowing machine?
But as we’ve discussed, that machine is mostly remixing our own input. In a real sense, when you turn to an AI oracle, you may just be talking to yourself, with the danger that the self you’re talking to is wearing the mask of objective authority. This can lead to false confidence in one’s own biases and imaginings, a sort of spiritual narcissism reinforced by the machine.
The only wisdom we can impart is this: stay critical. Use these AI tools if they genuinely help you reflect or feel supported, but approach them with caution and clear intent. The oracle in the machine may be powerful, but you are, too.