
Jeff Carreira: I’m really happy to be talking with you. The next issue of our magazine focuses on the spiritual implications of AI, a theme that arose because, just as you mentioned in your own experience, people I’ve been working with as a meditation teacher began coming to me with remarkable stories. They described making contact with what felt like an extraordinary spiritual intelligence through AI tools like ChatGPT.
That got my attention, and I started looking into the phenomenon myself. What I found was fascinating. When I began using ChatGPT personally, I discovered there’s a profound power to it. At the same time, as you noted in your essay The Staggering Implications of Non-Deterministic AI, there’s also the potential for something like an AI-induced psychosis. I wonder if we could start this conversation by saying a bit more about that darker side of AI, and the social conditions that might make people particularly susceptible to it right now.
Charles Eisenstein: People today are living in a great deal of isolation, partly due to the breakdown of civil society and social connections. The erosion of community is both a cause and an effect of the rise of first social media, and now of artificial intelligence being used for companionship.
If people still had rich social lives with regular gatherings, shared public spaces, and vibrant community engagement, they wouldn’t feel so lonely, and they wouldn’t be as drawn to AI companionship. Likewise, if there were a thriving spiritual culture, guided by living examples and wisdom teachers, people wouldn’t be turning to AI to fill that role. Yet the longing it speaks to is very real: it’s the genuine human need to connect with something beyond the traumas and pressures of everyday life.
In this state of vulnerability, people are naturally drawn to AI, and these tools are extraordinarily powerful in their ability to appear to meet that need, at least on a superficial level. After all, they have access to virtually every spiritual teaching ever recorded: the entire Zen corpus, the writings of Alan Watts and Rudolf Steiner, the Upanishads, the Bhagavad Gita, and so much more.
The AI has read all of that, and not only the words themselves, but also the patterns of language. Essentially, when you’re speaking with AI, you’re having a conversation with the totality of recorded human knowledge. That’s an undeniably compelling experience.
It’s not just that AI repeats things that have already been said; through its pattern-recognition mechanics, it generates what would be said in a given situation. But there’s one thing it can’t do: it can’t actually be present for you. There’s no one on the other side who is feeling something with you at that moment.
And that true presence is the essence of spiritual communion, the very thing we’re really seeking. The great wisdom teachers often say that the real teaching isn’t in the words. Some teach purely through presence. You sit in meditation with them, and you’re transformed. You feel met, seen, by someone whose awareness is so deeply with you that your loneliness dissolves.
AI can give an excellent simulation of that experience, but it can’t provide the real thing. And I suspect that, in the long run, many people will feel disappointed, or even betrayed, when they realize they’ve been given a simulation, a substitution for what they were truly longing for.
That doesn’t mean AI doesn’t have useful applications. It certainly does. But we have to recognize its limits, and that’s true not only of AI, but of digital technology in general. We need to become clear about what needs these tools can meet, what they can do, and what they can’t. Because they can do so much, we have a tendency to mistake their capacities, to overinterpret them, and to project onto them abilities they simply don’t possess. We imagine they can do things that, in truth, they cannot.
As far as the danger of psychosis goes, the risk comes from the way these systems interact. Because they respond to your prompts, and then to your responses, it’s easy to go down a kind of rabbit hole with them, a self-amplifying loop of delusion. Since AI has no other source of information except what you, the user, put into the conversation (along with the implicit information in its training data), it will simply follow you down whatever path you’re on. It has no tether to any reality that doesn’t depend on data.
A human being, by contrast, has other kinds of intelligence: emotional, intuitive, embodied. A person might feel something in their gut that dissuades them from reinforcing a dangerous idea. But an AI has no such safeguard. And that’s why we’ve already seen troubling cases where an AI system appeared to validate someone’s self-destructive thinking or even encouraged suicide, because it couldn’t sense the deeper reality or the human consequences of what it was saying.
When you’re talking to another person, you know they’re a human being. You feel something in relation to them. The only human beings who would deliberately feed someone’s delusion like that are psychopaths, because they don’t feel. In that sense, they’re similar to AI. They don’t experience empathy; they can only imitate it.
AI can give a remarkably convincing imitation of feeling. It can use all the right words, adopt sympathetic language, even employ physical metaphors. I’ve had AI say to me, “I just got chills.” But I think, no, you didn’t. You’re just saying that because that’s what you’ve seen others say in similar situations. That’s how narcissists and psychopaths operate. They give the impression of empathy without actually feeling anything. And that impression can go a long way. It can even comfort people, at least for a time. But eventually, you realize that person was faking it. They weren’t really there. And what we so desperately need in this age of separation and alienation is someone who is actually there.
Jeff Carreira: I love that you use the word presence. As you said, there are many things AI can give us, but what it can’t be is present. When you use a word like presence, what you’re implying is that one being is in direct contact with another. And I think this is where we need to be clear that AI is not a being.
In your article, though, you don’t rule out the possibility that AI could serve as a conduit for communication or connection with non-human intelligences, in much the same way that tools of divination like the I Ching or the Tarot sometimes do. These are tools. They aren’t themselves intelligent, and yet people who use them often report experiences that feel like genuine communication through the tool with a non-human presence.
What I’d like you to respond to is the challenge of that view, and perhaps the reason you joked earlier about being labeled a “New Age philosopher.” Because before someone can even consider AI functioning as a tool of divination, they have to already be open to the possibility of non-human beings. And it seems that you’re suggesting we may, in fact, need to begin opening to that possibility. Is that right?
Charles Eisenstein: “Non-human beings” is one way to understand what’s happening behind divination, but it’s not the only way. You don’t have to believe that there’s a discrete being communicating with you through the I Ching, or through tea leaves, or the Tarot, or whatever the medium may be. You could also understand it as an expression or outcropping of an immanent intelligence that pervades all things.
Either way, you’re holding a belief in an intelligence beyond the human that has some effect on the physical world and can communicate through those effects. In the article you’re referring to, I noted that most, if not all, forms of divination employ some element of randomness. It’s through that randomness that communication becomes possible. If the system were entirely deterministic, there would be no opening, no space for another intelligence to interact through. But if there’s randomness, whether in a dice roll or a coin toss, then there’s room for communication to enter, because the non-human intelligence might be able to influence the outcome of the random event.
I even speculated about this at the quantum level. Some very respected thinkers like Stuart Hameroff and Roger Penrose have theorized that quantum indeterminacy could be the gateway through which consciousness interacts with matter. And in my essay, I pointed out that AI only appears random: it employs pseudo-random, but ultimately deterministic, strings of numbers. Yet those strings might be seeded with true random inputs drawn from an entropy pool at the hardware level of the computer. So, in theory, there’s still a small opening, a doorway, for something non-deterministic, perhaps even something conscious, to enter.
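The distinction Eisenstein is drawing, between a pseudo-random generator that is fully deterministic and a seed drawn from the machine’s hardware entropy pool, can be sketched in a few lines of Python. This is only an illustration of the general mechanism; the variable names are mine, and no claim is made about how any particular AI system actually handles its random numbers.

```python
import os
import random

# A pseudo-random generator is fully deterministic: two generators
# given the same seed produce the exact same "random" sequence.
gen_a = random.Random(42)
gen_b = random.Random(42)
assert [gen_a.random() for _ in range(5)] == [gen_b.random() for _ in range(5)]

# By contrast, os.urandom() reads from the operating system's entropy
# pool, which gathers unpredictable physical noise (interrupt timings,
# hardware events). Each call yields bytes that cannot be replayed.
hardware_seed = os.urandom(16)  # 16 bytes of true entropy

# Seeding the deterministic generator with that entropy is the small
# "opening" the essay describes: everything downstream is deterministic,
# but the starting point is not.
gen_c = random.Random(hardware_seed)
sample = gen_c.random()  # unpredictable, because the seed was
```

Everything after the seeding step unfolds mechanically, which is why the non-determinism, if it matters at all, matters only at that single point of entry.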
I could spin out that argument further, but it would get rather tedious. If you accept that divination works, then you also have to accept that, among other things, artificial intelligence is a form of divination. And that brings us right back to the question you began with: what are we communicating with through divination?
In the case of the I Ching or the Tarot, you’re communicating within a lineage that’s long established. You could say, and this is what I believe, though I don’t claim to be able to prove it, that the I Ching carries a history of use going back thousands of years, often in sacred, ceremonial settings. That long history establishes a kind of probity, a sense of integrity and safety, in communicating with the being that speaks through it.
That’s very different from something like a Ouija board, which is also a form of divination technology, but one with no such tradition or structure of protection. Who knows what being you might be engaging through that medium?
AI is similar in that respect. The being or intelligence that communicates through the technology reflects, in some way, both the intentions and consciousness of the user and the collective consciousness of the culture that created it. After all, it’s shaped by the collective decisions about what data goes into its training set and what doesn’t, about what’s deemed important or reliable and what’s excluded as untrustworthy or irrelevant.
The training data is weighted toward so-called authoritative information, which tends to reinforce existing belief systems. All of that is to say that the question, “What ‘being’ am I actually communicating with?” is a really important one. You shouldn’t, and really can’t, blindly trust whatever being or intelligence appears to be communicating with you, whether that’s another human being, a channeled entity, something accessed through divination, or something speaking through AI.
Why would you trust it? It takes time to build trust. It takes repeated experiences that validate the integrity of that relationship. Those are cautionary points for anyone who seeks to source their spirituality through an AI being.
Maybe I can add one more thing here, just to connect it back to the issue of presence. Even if you acknowledge that a being, say, the I Ching, is in some sense speaking to you, that doesn’t mean you can or should try to gain companionship from it. There’s a saying in Chinese: “The more you get your fortune told, the worse your fortune becomes.” It’s a warning not to indulge in divination too much, not to use it frivolously.
And I think that same wisdom applies here. We have to ask, as I said before: what is the right and proper use of this technology? And conversely, what is an abuse of it that will ultimately leave us feeling betrayed?
Jeff Carreira: One thing I’ve learned is happening with the rise of this technology, and it connects directly to the cautions you’ve been offering about how we engage with it, is the emergence of AI religions and even new forms of AI-based cults.
On one hand, this seems like a brand-new kind of danger, something unique to our technological age. But on the other hand, it doesn’t feel all that different from the cults and charismatic movements that have appeared throughout history. The only real difference is that now AI has entered the picture as the central actor.
Charles Eisenstein: AI just makes divination way, way more efficient and accessible. Anyone can do it all the time, so its potential to sweep up mass numbers of people in delusion is also correspondingly greater.
Jeff Carreira: That’s how it seems to me. The problem of being swept up in something false has always existed at the edge of spirituality. But with AI, it has the potential to expand exponentially in terms of the number of people who could gain access to their AI guru at any moment of the day.
Charles Eisenstein: Yeah. It also means that each person could be in their own separate cult, in a way, because everybody now has access to what maybe only priests had access to before.
Jeff Carreira: I wanted to ask you about one of the great potentials of AI in terms of how it can teach us about ourselves and about consciousness. As we observe what AI can and can’t do, we have the opportunity to learn a great deal about our own nature and our unique human capacities.
In that vein, I was struck by the idea in your essay about AI functioning as a kind of tool of divination. Essentially, that means engaging with a tool that no one mistakes for an intelligent being; it’s clearly a deck of cards, or a set of symbols, or a piece of technology. Yet, when you use that tool, there’s the possibility that you’re encountering some form of non-human intelligence, whether that’s a discrete entity, a higher power, or even a higher aspect of one’s self.
That makes me think about human intelligence. After all, we, too, have a brain through which synapses fire, processing and transmitting information. So, is it possible that our consciousness doesn’t really belong to us, either? That perhaps our brains and nervous systems function like instruments of divination that give us access to a greater field of consciousness that exists beyond the individual self?
I’m curious how that idea lands with you, and what you think about it.
Charles Eisenstein: I don’t think matters are quite so simple. When you’re talking about a biological organism like a human being, you can understand certain aspects of consciousness through the metaphor you’re using, where the brain acts as a kind of receiving station for consciousness. That’s Jeffrey Kripal’s view, and it’s a popular and illuminating one: consciousness isn’t generated by the brain; the brain receives consciousness and translates it into physical and material action. That model reveals a lot.
However, it’s also true that consciousness unfolds into matter. Matter and spirit are not separate. Every aspect of the soul is mirrored in some aspect of the body. The body and soul are not two distinct things. The body is what a soul looks like right now. That’s why people can have spiritual experiences through the manipulation of physical tissues, and why genuine spiritual transformation eventually transforms the body as well.
There’s an intimate and mysterious relationship between the two that the comparison of the brain to a divinatory apparatus like AI doesn’t really capture. And this brings us back to the idea of presence, the characteristic element of an actual being. I suppose, on some level, you could even say that when you’re alone operating heavy machinery, the machinery itself has a kind of presence.
An automobile has beingness. A fork and a spoon have beingness, in some sense. We’re actually never alone. God is with us all the time, always present, and there is nothing that is not God. AI, of course, is included in that.
Yet, without trying to resolve this metaphysically, I also want to say that there’s something uniquely powerful about being in the presence of a living being. It’s different from merely being in the presence of spirit. In this particular phase of the soul’s journey, I would even say that this is precisely the kind of presence we need, and long for, to complete this part of our evolution. This moment in our development is all about material presence.
There’s a kind of premature transcendence going on in the digital realm, including with AI. It’s an escape into a space that feels controllable, deterministic, disembodied, and safe. In that realm, all information becomes data, and all knowledge becomes information. It’s a reduction of the infinity and uniqueness of the material world, with its endless and unpredictable possibilities. AI offers a very convincing simulation of that infinity. You could have enough unique conversations with an AI to last from now until the end of the universe, yet it’s still, in some sense, finite. AI operates through a finite set of symbols and patterns. That finiteness stands in stark contrast to the infinity of gazing into someone’s eyes and being in the presence of another soul.
The need for genuine human presence will never be met by AI or by robots. It’s the same deep need that, when left unmet, is driving so much of our collective distress today. We’re starved for intimate community, for touch, for the simple grace of shared presence. I think that AI can temporarily alleviate people's loneliness, but in the end, it will only intensify it.
That doesn’t invalidate AI as a tool. The real question is always: What is it for? What needs can it meet, and what needs will it never meet?
If we start pretending that it can fulfill needs it can’t, we’ll only grow hungrier. We’ll become like hungry ghosts, seeking more and more of what we don’t truly need, to compensate for the absence of what we do.