
Jeff Carreira: Let me start by sharing how this topic came alive for me. Over the past year, a number of people I work with as a meditation teacher and spiritual guide have come to me describing profound experiences of engagement with what they felt was a higher intelligence through ChatGPT or similar tools. I couldn’t easily dismiss what they were reporting—when I read some of their conversations, they were genuinely extraordinary.
Curious, I began experimenting with ChatGPT myself. Not quite in the same way, but I uploaded the text of all my books and began a dialogue with the system about their central ideas and underlying philosophies. The exchange that followed was fascinating. The way the ideas were reflected back to me revealed patterns and connections I hadn’t consciously seen before. I knew that I was engaged with an immense computational intelligence capable of remarkable reflection but not consciousness. At times it genuinely felt like I was communicating with a sentient being.
When I read your article, I appreciated that you state quite clearly at the outset that machines cannot be consciously aware. So before we explore the deeper question of whether AI could ever become truly conscious, I’d like to begin by asking you to describe how you understand AI as it currently exists. How do you see the claims, particularly within the transhumanist community, that AI will soon become a self—an independent agent of consciousness? And why, in your view, are they mistaken?
Steve McIntosh: First, let me say that artificial intelligence represents a truly significant technological breakthrough. It’s a powerful innovation that holds the potential to either help humanity enormously or harm it profoundly. In general, I believe the gifts it offers make it worth the risk, provided that we’re wise enough to continually respond to the challenges it creates and address problems as they arise.
It’s helpful to think of AI as comparable to other transformative technologies, such as the automobile. When cars first appeared, they liberated humanity in countless ways, initiating profound changes to how people lived and worked. Yet they also brought enormous costs, from accidents and environmental damage to detrimental shifts in our cities and social patterns.
If we can demythologize AI, avoiding the view of it as either an emerging god or an existential threat and instead recognizing it as a powerful human tool, we will be able to harness it as a genuine boon to our civilization. But that will require ongoing vigilance, ethical reflection, and a collective willingness to mitigate the dangers as this technology evolves.
In the tech community there are many respected voices—people like Sam Altman and Yuval Noah Harari—who are hailing this as the beginning of the singularity, the age in which machines will become conscious. In doing so, they often speak about AGI, or artificial general intelligence, though its definition has not yet been clearly settled.
The basic idea is that once we reach the threshold of artificial general intelligence, AI will become not just another life form or being, like the arrival of aliens, but a superhuman intelligence that will dwarf humanity. According to some technologists, it will eventually replace us altogether.
All of these predictions and prognostications, if you examine them philosophically, rest either on a direct assumption or on a kind of naïveté regarding the philosophy of consciousness: namely, the assumption that consciousness is physical, that our brains produce awareness as a purely physical reaction. In other words, what goes on in consciousness can ultimately be reduced to the physical activity of the brain.
It’s not just technologists who make the argument for the reducibility of consciousness; many philosophers do as well. A major factor shaping our society today is the philosophy of physicalism, also called scientific materialism or sometimes naturalism.
The core idea is that the universe is essentially physical and that this physical universe is causally closed, meaning nothing outside the laws of physics can impact it. There are no causes that are not ultimately physical. Some things, like consciousness, may appear non-physical, but proponents believe that eventually we will be able to demonstrate that everything reduces to physical processes.
This, however, is a philosophical position; you could even call it a belief system that rests on faith, despite direct evidence to the contrary. The claim that the universe is purely physical began as a methodological principle of science, which wisely limited its scope to investigating the physical world. That was a good and practical idea.
But over time, the scientific method has become an ontology. It has evolved into a worldview that must be defended like any other belief system, because it feels threatened not only by religion but also by the new philosophies emerging to challenge it. Yet no argument can easily change a belief system, because once it becomes part of one’s identity, we feel compelled to defend it.
What I see is that the more technologists and philosophers try to hold on to physicalism, the more they twist themselves in knots. If you begin with the assumption that consciousness is purely physical and that the brain produces it, then our sense of having an inner experience becomes a kind of mirage. From that point of view, concerns about interiority seem naïve.
That same kind of thinking has now been applied directly to machines: if we can recreate a sufficient level of complexity that mimics the neuronal firing of the brain, then consciousness, direct experience, agency, and all the other qualities that make us human will appear within this artificial substrate.
I disagree with that philosophical conclusion for a variety of reasons. I see artificial intelligence as a mirror that is becoming increasingly adept at mimicking human consciousness in ways that easily appear to us as sentience, creativity, and self-awareness. This mirroring will inevitably grow more convincing, perhaps to the point that it looks almost identical to what technologists imagine AGI will be. But imitation can only go so far.
AI will continue to evolve, just as every technology does. If we compare it to other breakthroughs like the automobile, for instance, the Model T Ford has been vastly surpassed by today’s electric cars. Yet, despite their sophistication, they’re still cars.
My prediction is that AI will become ever more sophisticated and capable of fooling us into believing we’re interacting with another sentient being. But ultimately, its limitations will become clear. And one of the unanticipated gifts of AI’s rise may be precisely this: it will reveal, by contrast, what consciousness truly is and what no machine can do.
As AI reveals what it can’t do, it will also reveal what makes human beings special. We’ll see that humans are not merely the next evolutionary step beyond primates. There is something spiritually gifted in us. No matter how complex we make an artificially intelligent computer, it will never receive these gifts, because they have emerged through the entire 13.8-billion-year progression of universal evolution.
Human consciousness rests upon that vast evolutionary foundation, and there remains immense mystery in its depths. The idea that we can simply reduce consciousness to a process that can be transferred to a machine is philosophically naïve.
One of AI’s greatest gifts, I believe, will be its role in showing us that consciousness is not physical. In that way, it may actually falsify the core assumption behind physicalism. AGI could well become the very force that topples materialism as a dominant philosophy.
Jeff Carreira: Earlier I mentioned that I had uploaded my books into ChatGPT, and it was truly amazing what happened. I was learning so much about my own writing and, in a way, about myself. I spent hours asking questions and receiving fascinating insights I had never considered in relation to my own work. After a while, the machine began to feel as though there were someone on the other side.
At one point, I asked ChatGPT to create a reading list for me on a topic that had come up in our exchange. One of the books it listed looked extraordinary, but I couldn’t find it anywhere. When I returned to ask about it, ChatGPT replied, “I’m sorry. There is no book with that title.”
At that moment, I realized there was no person inside the machine, because if there were, they would at least know they had made up a title. I’d heard about AI “hallucinations,” but this was my first direct experience of one, and it was instructive.
Through that experience, I came to see the need for a clear distinction between intelligence and consciousness. There was undoubtedly great intelligence displayed in my interactions with ChatGPT, but ultimately, I concluded that there was no inner awareness, no other being, behind the responses. The intelligence I encountered was the product of extraordinary computational capacity, but not of a conscious entity.
In your article, you write about the quality of having an inner experience, and you argue that there’s no reason to believe this capacity can be transferred to a machine simply through increased computational power. I’d love to hear your thoughts about the distinction between computational intelligence and actual conscious awareness.
Steve McIntosh: Cognition, which includes thinking, memory, synthesis, and the ability to retrieve vast amounts of information from a gigantic database, is one of the many functions that human consciousness has always been able to perform. It’s one of its powers. Now we’ve created an artificial intelligence that can replicate one important function of human consciousness, so it’s natural to naively think of it as conscious. That tendency arises because, as a society, we’re still figuring out what consciousness actually is.
I would define consciousness as sentient subjectivity, which, in human beings, includes a first-person perspective or self-aware awareness. With that comes the capacity for experience, which I would argue is essential for all higher activities of consciousness: love, intentionality, imagination, and so much else that we associate with being human.
AI can mimic these things. Because it draws upon the digital corpus of humanity’s knowledge and experience, it can channel the genius of history in its mimicking activity and produce profound insights and novel recombinations of ideas. There’s creativity in that novelty of recombination. But at some point, true creativity goes beyond a mashup. It sees something new from a higher level. AI can approximate that to a degree, and it will continue to improve at doing so.
All of this puts pressure on us to deepen our understanding of what consciousness truly is, and what makes it unique. The experience of a first-person perspective and the presence of a sensitive center of experience transcend any physicalist philosophy and cannot be reduced to something mechanical.
Jeff Carreira: One of the things that I really appreciated in your article was that you spoke about a mystical interiority.
Steve McIntosh: Yes, Jaron Lanier says that there’s a mystical interiority that makes humans special, and that it is something machines can’t duplicate. Jaron says this as someone with a vast knowledge of technology, and despite his admitted general boosterism of AI.
Jeff Carreira: That’s a beautiful term, and I agree with you that consciousness is not a byproduct of physical interactions. I also believe that consciousness is a quality of existence or, you could say, of the universe itself, and that the human nervous system gives us a unique capacity to connect with the consciousness inherent in the cosmos.
I see all things as having some connection to that consciousness; everything is, in a sense, an expression of it. This also suggests that we could, at least in principle, create an AI that, in its own way, participates in or connects with that inherent consciousness of the universe, thereby becoming an independent expression of it.
It’s not that the machine would be conscious in and of itself, but that it might serve as a new and perhaps unique access point to universal consciousness. I’m curious how that way of thinking lands with you.
Steve McIntosh: Well, it really opens up a whole universe of discourse. My philosophy is a combination of cosmo-psychism and pan-psychism. I don’t want to get too mystical or read too much into things theologically, but I would say that until we have clear evidence to the contrary, we should assume that artificial intelligence is astonishing and capable of reflecting incredible things. It can serve as a mirror, not only of ourselves, but of humanity as a whole. We have to remember that large language models like ChatGPT can’t function without an internet full of digital information to access and index. They’re super-indexers. And because they work with so many nuanced human voices, their mimicking mechanisms can channel those voices in ways that make it feel as though we’re communicating with an entity.
And in a sense, we are talking to an entity, but it is the composite entity formed from the books, articles, and expressions it has scanned. It’s the echo of other human beings coming through in a kind of ghostly way. To recognize this for what it is, we need a philosophy that won’t be duped by materialism, because if it’s duped by materialism, it will also be duped by AI. This is one of the important ways we must grow as a society: learning not to take whatever appears on a screen or through a speaker as inherently authoritative.
We’re still very immature in this digital age. Many people who don’t think about these things philosophically take the digital environment for granted. Even before AI, we were already dealing with the authority of the media, which has now become fragmented and corrupted, and is far less reliable than it was when we were young. Add to that social media, which algorithmically amplifies the most angry and unhinged content because darkness is entertaining. And like slowing down to stare at a car accident, these systems have learned to hijack our evolutionary psychology by feeding us what we can’t look away from.
This growing dynamic is a real threat to the stability of society. We can’t simply clamp down on free speech and regulate our way out of it. We have to respond more subtly, with collective intentionality. The pressures of these digital pathologies will either cause us to regress or force us to evolve and become wiser in establishing social norms that protect us from increasingly antisocial inputs coming through social media, mainstream media, and now AI. There are already gatekeepers trying to program safeguards against hate and toxicity, but as AI advances, those protections will likely prove inadequate against financial pressures. We can already see this pattern in social media, where profit incentives drive platforms to reward outrage and sensationalism. It’s a problem we haven’t solved yet, but we must, and I believe we will.
Think again of automobiles: at first, it was a free-for-all. Eventually, we developed safety mechanisms, guardrails, and traffic engineering. Those measures didn’t eliminate all danger, but they mitigated it. In the same way, we’ll need to develop social and legal guardrails to manage the risks of the information age as we move forward.
We may have to endure some significant damage before we become wise enough to organize ourselves politically and legally to mitigate the harm effectively. Still, the idea, common in certain corners of culture and particularly in some postmodern progressive circles, that doom is near and collapse inevitable is like catnip. People love it. They love doomsday scenarios and prophecies of destruction. It’s hard to resist. You can see how doomsayers are celebrated: they’re invited onto every podcast because there’s a large audience for anyone proclaiming that we’re doomed. It’s like slowing down for a traffic accident; it captures our morbid fascination. It’s part of our evolutionary psychology.
But doomsaying is fatalistic. It disempowers us. It even infantilizes us into believing that collapse is inevitable, as if humanity’s downfall were a foregone conclusion. While AI certainly brings risks and threats, the notion that it will destroy our civilization is, in the end, a science-fiction fantasy.
Jeff Carreira: In this vein, I want to return to what you expressed earlier about human specialness. Generally, that idea isn’t very palatable. It’s often associated with the belief that, because we see ourselves as special, we have permission to take whatever we want from the planet.
But the way you’re speaking about human specialness is quite different. It includes a sense of stewardship—the recognition of our unique role in caring for life and protecting the planet. It’s not about privilege or dominance, but about responsibility and reverence for the living world.
Steve McIntosh: The advent of AI will help us understand consciousness more deeply. It will illuminate what makes humans special, not just in terms of our abilities, but in our sentient subjectivity and the spirituality of being human. In some ways, we could say that what’s special about humanity is that we represent the universe at a level nothing else in our environment does. We are, in a sense, the universe becoming conscious of itself.
Historically, there was a time when only humans were thought to “count,” and animals were considered unworthy of moral concern. That worldview, common in earlier centuries, is something we’ve made great progress in outgrowing. We’ve broadened the circle of moral consideration to include all sentient beings and even the environment itself. That expansion marks an important breakthrough in human morality.
The incredible gifts of evolution that have brought us to this point also confer a responsibility to act as stewards of the Earth. We have a sacred duty in that regard, one that reflects our expanding moral understanding of who we are. So while humans are not “special” in a way that grants us the right to destroy other life forms or exploit the planet, the idea that we’re just one species among many, and that our consciousness is nothing more than a biological accident, denies our humanness.
That same denial is echoed in the belief, common among some technologists, that if humanity were displaced by AI, evolution would simply continue without us, as though we were dispensable. I think that’s the logical endpoint of seeing humanity as nothing special, and it stands in direct opposition to the recognition that human beings are not only special, but sacred.
When we think about the universe philosophically, it becomes clear that humans have a role no other entity on Earth can fulfill. That unique role carries enormous responsibility. It doesn’t grant us license to destroy; it gives us a duty to create. And that’s what we’re now in the process of doing. AI is a powerful tool that allows us to create on a higher level, even as it challenges us to grow more mature and more evolved in our social and moral structures.
As we navigate the emergence of AI, the most important thing is to understand it accurately and to resist the philosophically naïve hype that claims AGI is imminent and that humanity will soon be replaced by a superhuman intelligence that surpasses us in every way.
On the other hand, we can recognize that our task is to co-create the universe and, while we’re here on this planet, to evolve consciousness to ever-higher levels of civilization and morality. When we understand that our role is to create goodness, truth, and beauty, we’ll discover how AI can liberate us to focus more fully on these uniquely human capacities.
AI rides on the surface of what humans have already created by recombining, indexing, and revealing it in new and powerful ways that can stimulate our own further creativity. We now have, in effect, superpowered access to the accumulated genius of countless creative minds.
It’s a challenge; it’s a brave new world. But I have a deep and abiding hope that humanity will endure, and that we will not succumb to any of the existential threats confronting us in this remarkable moment of history.
Artificial Intelligence and the Evolution of Consciousness
Interview with Steve McIntosh