The Artist of
POSSIBILITY
Issue 24
The Spiritual Implications of AI
Issue: 24 | December 15, 2025
Letter from the Editor, Robin Beck

I’ll be honest: when our team asked me to put together this issue’s Letter From the Editors, I felt a bit of dread. Because I work in tech for my day job, I bring a lot of context and a unique lens to this topic, but I also have strong opinions about AI, so I’m not a neutral observer. I’m an educator, a user, and a critic. But I know the concerns and emotions I feel about AI aren’t unique to me. Many of you, dear readers, have expressed your feelings about AI to us. These shared concerns are a big reason why we, the editors of this magazine, wanted to approach the topic of AI with curiosity, care, and a sense of responsibility: like it or not, we are all users of this technology now, whether or not we intend to be. And because AI touches more of our daily lives than we often realize, talking about it can feel slippery and scary, and understanding its basics is becoming necessary for us, and for the people we care about, to feel safe.

This issue is about AI, and our spiritual and philosophical relationship with it. We decided that, as editors, we aren’t here to praise or condemn the technology, but to recognize it as a tool. And new tools can disrupt our way of life, our livelihoods, and our privacy, while also offering tremendous potential for learning more about ourselves, individually and collectively.

Lots of you have shared your experiences of awe and wonder at the capabilities of AI. Many of you have told us that you’ve had incredible conversations with these tools about the nature of mind, spirit, consciousness, and God. We want to honor the depth of those interactions, and share some of our own experiences that have been equally impactful. So first, thank you for your inspiration; it’s why this issue came to life.

So what is AI?

Let’s start with the basics, because context is important. The term was originally coined in 1956 and defined as “the science and engineering of making intelligent machines.” As computing evolved, the term artificial general intelligence (AGI) was coined to distinguish human-level intelligence from narrower systems, and to discuss the possibility that machines might one day “surpass human capabilities across virtually all cognitive tasks.”

Until recently, this was the definition of AI that most of us were familiar with. In popular culture, movies like The Terminator and The Matrix capitalized on our cultural fear of machines. There’s this uncanny sense that we don’t really understand how technology, computers, and smartphones work anymore, and that these machines may one day seek to replace us, the inefficient, squishy, sometimes dumb creatures that we are.

Then, in 2022, something happened. OpenAI released to the public a technology it had been working on for a while, which came to be known as ChatGPT. This technology is an example of a large language model (LLM). It processes large amounts of text, called “training data,” which it then draws on to answer questions. Based on how it answers, engineers give the model feedback so it can adjust itself. It “learns” to correct its own mistakes, and (in theory) improves its answers each time it’s corrected.

Essentially, this technology is a form of pattern recognition. It finds patterns in text, and then when you ask it a question, it guesses the answer based on its training data and the way it has learned from being corrected in the past. Today there are a lot of models that aren’t trained on text, but on images or video content. You can now have a generative model spit out an entire fake video of a politician making remarks they never made, or predict when a weapon should be fired to inflict the maximum amount of pain on its victims, with minimal human oversight.

Yes, this is an oversimplification of how AI works. These models have grown so much that the engineers themselves often struggle to understand how they arrive at the answers they generate. But in principle, these systems are designed on a simple premise: guess the next word, or image or video frame, in the pattern.
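For the technically curious, the “guess the next word in the pattern” idea can be sketched in a few lines of Python. This is a toy bigram counter over an invented ten-word corpus, nothing like the scale, architecture, or training process of a real LLM, but it shows the core move: count which words tend to follow which, then guess the most common continuation.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Simple pattern recognition: count which word follows which.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def guess_next(word):
    """Guess the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model does something far more elaborate, with billions of learned parameters instead of a lookup table, but the premise is the same: predict the likeliest continuation of the pattern.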

And it’s really, really good at this. It has access to essentially all recorded human history; every piece of public (and a lot of non-public) data has been fed into these models. So these models know how to interact with us as we type or speak a question to a chatbot: guess the next word in the pattern that the person asking wants to hear. If the wind is blowing in the right direction, the answer will probably be factually accurate, but there are no guarantees. Regardless, the AI will match the style, cadence, and tone it infers that the question asker wants to hear.

Understanding this is key to framing this discussion about AI. The tech industry has purposely conflated its technology with AGI (artificial general intelligence) in order to game public sentiment and create market value for its companies. It has capitalized on our confusion about what this technology is, and whether we should fear it. Yes, it’s important to know the extent of what this technology is capable of, and to speculate on whether it, or the technologies we create in the future, could ever become conscious. But to me, the intriguing part of this discussion is why we’re so interested in “human” intelligence anyway, especially when the more-than-human world contains vast examples of intelligence we have yet to explain, emulate, or comprehend.

I often look to James Bridle’s work to help tease apart the meaning of intelligence, which broadly seems to be defined as “what humans do”. He has shown that we seem to keep moving the goalposts for intelligence over the decades: first from tool use (which gibbons are capable of), to feeling pain or emotion (which whales and dolphins do), to sentience, which hinges on whether the subject has a subjective, conscious experience. We’re very concerned with whether AI is intelligent and conscious, but we have yet to grant many creatures that seem to fit this definition (including other humans) the right to life that many of us enjoy under the law.

In this issue, you’ll hear from some fantastic thinkers on the subject of AI, and how we could form responsible spiritual relationships with it. Steve McIntosh shares his thoughts on AI and the nature of consciousness, and whether or not machines could ever be sentient. Charles Eisenstein reflects on the reason people are drawn to AI, and its attempts to simulate the presence we all crave. E. J. Gold and Claude Needham have a powerful conversation about AI as a musical partner, and as a creative and spiritual instrument. Mekdez Asefa shares an intimate perspective on the potential and the challenges of her work bringing AI to Ethiopia. And Amy Edelstein celebrates ten years of her work bringing mindfulness education to students in Philadelphia public schools, and her inspiration for a new kind of AI chatbot she’s building.

In addition to our featured interviews, we have contributions from members of The Mystery School.

This moment in history feels powerful, and potent. Never before have we had such vast, unrestricted access to information, synthesized for us into bite-sized soundbites at the push of a button. In addition to exploring our fascination with how AI works and what it can do, this moment offers us the chance to reassess our relationship with technology, and how embedded it should be in our lives and in the lives of those who come after us.

I’m grateful that you’re joining us as we explore the topic of AI and how it’s impacting our relationship with spiritual exploration, insight, and education. Really, we’re encouraging ourselves, and you, dear readers, to explore this exciting, terrifying new technology with a healthy sense of skepticism, curiosity, and wonder. Wonder for the ability to explore it at all, and reverence for the intelligence that is driving this - all of this - this incomprehensible, vast, beautiful, and heartbreaking experience we call life, into yet another brave new world.

Our featured artist for this issue is E.J. Gold.

We hope you enjoy the issue.

Editor
Jeff Carreira

Editor
Chandra Luzi Edwards

Editor
Robin Beck