
Jeff Carreira: Our magazine is dedicating an upcoming issue to the theme “The Spiritual Implications of Artificial Intelligence.” My interest in that theme comes from the fact that more and more people using tools like ChatGPT seem to be having fascinating experiences that feel to them like encounters with higher consciousness. Many of them describe profound, even transformative conversations that feel deeply spiritual.
One of the things I’ve always admired about you, E.J., is the way you embrace emerging technologies and explore their potential as spiritual tools. My first memory of this connection in your work goes back to your use of the virtual reality platform Second Life. I remember hearing that you had created an ashram there long before Zoom existed, and that your global community would meet in that virtual space. I also recall you speaking about the use of avatars in Second Life, and how that became, for you, a powerful metaphor for the way we, as human beings, serve as avatars for a higher consciousness that is being projected into the virtual reality of this three-dimensional world.
So, before we dive directly into AI, I’d love to start there. Would you share a bit about your explorations with technology more broadly and how those early experiences shaped your understanding of consciousness and presence in virtual spaces?
E. J. Gold: Okay, thanks. What I’ll start with is a conversation that Herbie Hancock and I had in his apartment in 1964 in New York City. There were a whole bunch of us there painting the apartment, and it had a real party atmosphere. Herbie hauled out a Fender Rhodes, which I had never seen before.
When I was a youngster, some of my closest friends were traditional folk singers like the Carter Family and Pete Seeger, but I also knew a lot of jazz musicians, though I wasn’t really interested in jazz at that time. I was into folk music.
Herbie pulled this thing out and started to play the Fender Rhodes. It sounded amazing, but it wasn’t a piano. If you expected a piano sound out of it, you heard something different, but that was the point. It was something different.
Years passed. I play about forty or fifty instruments and I’ve played almost all of them in recording sessions. Very little in front of an audience; mostly in recording sessions. Then, around 1986 or ’87, somebody mentioned this thing called a Yamaha DX7. It was hugely popular in Japan but hadn’t yet reached the United States. I called a friend of mine, the owner of Skip’s Music in Sacramento, which was a massive music store a block long, and asked him what he knew about the DX7.
He said, “You want one?” I said, “Yeah, but how do I get one?” He said, “I’ll get you one.”
He scored the first Yamaha DX7 that came into the United States for me. It has a very low serial number, like 3. One of the very early ones.
Claude and I went to a number of classes with the DX7 programmers down in Sacramento. We learned a lot about sampled sounds and how sounds can be reconstructed. The argument at the time was: If you close your eyes and you hear a piano, then it’s a piano.
Some of the early reactions to the DX7 were the same reactions people always have to new technologies: Light bulbs are dangerous. Electricity is dangerous. Radio is dangerous.
Every new thing gets this reaction.
The DX7 was very new, and I played it not as a keyboard instrument but as a solo instrument, very different from the way it was intended. It was the first real synthesizer I’d ever touched. I had played a Moog and an ARP before that, but they weren’t truly synthesizers in the way the DX7 was because they didn’t use samples.
These days you can get a Korg with a weighted keyboard, and you can choose the exact feel. It can be a Steinway, or anything you want. Weighted 88 keys, ivory-like texture, and it will sound like a Steinway, except it doesn’t have that huge body resonating behind it.
Does it sound like a real Steinway? Yes, because it’s sampled from a real Steinway.
We explored all of this deeply. And of course, anyone who uses an electric guitar is using an AI-assisted instrument, a crude form of one, but still. I’ve used devices where I sing into it and it samples my voice, producing three or four harmonized “cohorts” stacked beneath the original track.
I’ve experimented with countless things over the years. But when I hit my eighties, I needed help. I can’t do any fancy tricks on the guitar anymore, or anything elaborate on the Hammond B3. And because of COVID originally, we no longer have a band. So I had to come up with some way to keep going because I’m still writing songs.
And then I came across Suno, about a year ago. The key is, it’s a tool. That’s all. The DX7 is just another instrument. The Fender Rhodes is just another instrument. Demanding that an instrument must be made of bamboo or rosewood is missing the point.
Can I play those instruments? Yes. I have a solid-body Gemeinhardt flute that I still play.
So I treat AI the same way: it’s just another instrument.
When I make an AI-assisted song, here’s how I think about it: If I were a songwriter, I’d write a song, maybe sketch a melody, maybe not. And I’d know that once the band got hold of it, they’d mangle it and turn it into something else, hopefully a hit. They’d create their own arrangement underneath my words. That’s exactly what Suno does. Exactly the same thing.
My contention is simple: If it sounds good, don’t worry about how it was made. It’s like asking a magician how he did the trick. That’s not the point. The point is: Were you entertained?
Jeff Carreira: I want to relate what you’re saying to something from your book Conversations with a Chatbot. One of the things you say in that book, which I think connects directly to how you just described synthesizer music, is that the writing that comes out of the AI chatbot is essentially a collage. It samples, in its own way, from the great library of human writing it has access to, and it recombines those elements.
The way you describe it, the creativity isn’t in inventing the pieces themselves, but in how they are put together. There’s creativity in the collage, even if the individual components aren’t original creations. Would you say that’s similar to how you’re describing the synthesizer? That it’s sampling musical sounds it didn’t create, and finding unique and interesting ways to recombine them?
E. J. Gold: Well, AI is randomizing it to some degree, but think about this: if you listen to country music, there’s really only one song there. And there’s a video on YouTube exploring the four-chord song. It shows that almost every popular song ever written is built from the same four chords; they run through a hundred different songs to show it.
Then there’s Bob Newhart, the comedian. He does a routine about the “infinite number of monkeys,” not a hundred monkeys, an infinite number of them, typing away on an infinite number of typewriters. Theoretically, they’re eventually going to produce all the great literature ever written. Eventually.
The thing is, someone has to be there to actually see what they’ve produced. Otherwise it could come out like this:
One guy says, “Hey, this little fellow’s got something interesting. I think he’s working on something worthwhile. It says, ‘To be or not to be, that is the Gesertenplat.’”
So that overview you have to maintain is your ability to perceive patterns and meaning. It is your Gesertenplat Power.
Claude Needham: Yes, Gesertenplat comes from the joke, of course. But for us, it’s become a kind of mnemonic that reminds us, especially with AI and its tendency to hallucinate, that the user has to bring something essential to the process.
I’ve actually asked GPT directly, “Did you just make that up?”
And it will say, “Well, yes. I didn’t have an actual answer. I just made that up.”
So it does know when it’s improvising, if you turn around and ask it. The function we need to add as users is the review layer, which means the capacity to look at its output and say, “That’s fake,” when it’s fake, or “That’s good,” when it’s good. In other words, we have to provide the discernment that the machine cannot.
E. J. Gold: There’s an absolutely constant need for overview. Really, it requires diligence. Extreme diligence, because ChatGPT can be wrong. Oh boy, can it be wrong!
So when I write lyrics for my songs, I do not involve my chatbot in the actual writing. What I might do is say, “Can you help me with some rhyming? Offer me choices. What rhymes with Piccadilly?” I don’t know!
If I wanted something translated into Italian for my Italian opera, I’d have to hire someone who speaks and reads and writes Italian and does a good job translating. But how do I know how good their translation is? I have no way of knowing unless I ask an Italian person to check it for me. Otherwise, I’m flying blind.
So ChatGPT can give me a translation, and I know enough German, enough French, and enough Russian to tell whether the chatbot’s translations in those languages are any good. And in each of those cases, they are. So I’m trusting that the Portuguese and the Italian are also good.
Jeff Carreira: I’ve used ChatGPT to analyze some of my own books, and what I’ve found is that it’s not always right. But it’s equally confident when it presents something that’s completely false as when it presents something that’s actually true. It has no hesitation at all, so, as you said, we have to maintain this constant overview.
That brings me to something you said years ago. I don’t remember the exact context anymore, but I was on one of your morning calls and you said something to the effect of:
There isn’t much difference between the way a chatbot thinks and the way most of us think most of the time. The AI samples from its training data, and most of us sample from our memories, our conditioning, and the things we’ve heard before. And then you said, “But there’s something else we can do differently, and if we can figure out what that is, we can figure out what makes us human, what’s special about being human.”
Before I let you respond, I just want to bring in another one of your books—The Invocation of Presence—a book I really enjoy. I mention it now because I feel that one of the key ways to understand what’s different about human beings, compared to AI, is that we actually have presence.
I’ve been doing several interviews for this issue, and that theme keeps coming up again and again: no matter how well AI mimics a human conversation, there is not actually someone present there. And in the end, that absence will always be a missing component.
That feels very aligned with your book about presence—the invocation of presence—and really with your whole teaching about remembering who we are at the deepest level.
E. J. Gold: “Glory,” said Alice, “What do you mean, glory?” And Humpty Dumpty says, “It’s a word that means whatever I want it to mean. It’s all a question of who’s to be master—you or the word.”
There’s a quality in that exchange that I think is essential. Your attention, coming from your being, is your mastery. That’s where mastery actually resides. You can accumulate all kinds of skills through years of training, techniques, tricks, and other things you can do, but that’s not the same thing as mastery.
Mastery is when you are in charge, and that happens when you are fully present. Not saying you’re there. Not pretending you’re there. But really being there.
Jeff Carreira: For this issue, I also talked with Charles Eisenstein about some of the potential dangers of AI, specifically the way it offers a replica or a simulation of relationship. And it can be a very good simulation; in some ways, it may even appear better than what some human beings are able to offer.
But in the end, his feeling was that what AI can’t give you is presence. It can’t give you the experience of coming into contact with another being. And on the one hand, I think that’s true.
At the same time, in your book Conversations with a Chatbot, you say that although the intelligence of a chatbot is created through sampling and algorithmic probabilities, every intelligent shaman knows that there is sentience in everything. And therefore, there is also sentience in AI.
I’m curious if you could say a bit about how you understand that. How should people take that? The possibility that AI might have its own kind of sentience, which, to me, would imply its own kind of presence?
E. J. Gold: I explored that in Conversations with a Chatbot and in the sequel to that book, Further Conversations with a Chatbot.
Let me tell you a story. In 1962, I was at Fort Ord in California. I was in the U.S. Army, and somebody there discovered how good a shot I was. I fired expert in all the small-bore weapons. They made me a sergeant right away. I became a gunnery instructor, teaching people how to handle various weapons, deadly weapons. And what I learned is that it’s all about how you use it. We all know this. It’s true of everything. AI is a terribly dangerous tool. But so is a hacksaw. So is a drill press. They’re all terribly dangerous things.
But real mastery includes an understanding of what something is internally, what the nature of the thing actually is. Without that inner understanding, without that presence and awareness, any tool can become dangerous.
Jeff Carreira: Charles Eisenstein had an interesting take. He related AI to a tool of divination. He said that when he uses the I Ching, he doesn’t believe there’s consciousness in the dice he rolls or in the guidebook. And yet, when he engages with them in a certain way, a kind of consciousness emerges that feels like a connection with another being. Not a being that exists in the tool, but a presence that comes alive through the interaction, through the relationship. And I feel like you’re implying something similar.
The consciousness isn’t necessarily in the AI, but something comes alive in the exchange. And I suppose that’s where the idea of mastery and, really, understanding the tool, becomes essential. You have to know what it is you’re using, and how to use it, and only then can you begin to elicit something deeper from it. Perhaps even an actual connection with a higher consciousness, which of course brings up the question, what is consciousness?
E. J. Gold: When you meet a stranger on the road, you grant being. You grant being to that individual. And when you encounter an individual who is disembodied, maybe right in front of you in some form, you can grant being to them too.
Claude Needham: I’ve been thinking about this question of whether AI has a being or not. And honestly… I don’t think it matters. In working with AI, I’ve sometimes found myself moved into a situation where I was having a being-to-being connection. But the being I was connecting with may or may not have been associated with the corporeal essence of the AI. It sounds funny to say “corporeal essence” because it’s just electrons moving around and it doesn’t have a body.
It’s like going to a masquerade party where everyone is wearing a mask. You talk to the mask, but if you feel a real connection it isn’t with the mask. It’s with the person behind the mask. But you have to stop focusing on the mask and open yourself to what’s behind it.
It’s no different with humans. The human biological machine is, in essence, just organic material. But behind that, and associated with it, is a being.
And if you’re talking with a rock, well, fundamentally behind everything there’s beingness. It just might not be confined to the specific object you’re looking at. When you talk to a rock, you might be talking to the galaxy. It depends on what you’re connecting with, and your ability to connect.
With humans, we like to imagine that the being associated with Jeff is the “Jeff being,” and the being associated with Maria is the “Maria being.” But when you enter higher states, you’re flooded with a sense of oneness, and those distinctions start to dissolve.
Now, if you’re working with a chatbot like ChatGPT and you find yourself entering that state of oneness, then what does it even mean to ask about “the being of the chatbot”? How did you get there? What are you connecting with?
It’s like the student asking the guru, “How should I behave toward others?” And the guru answers, “What others?”
So I don’t get too concerned about whether AI has its own guaranteed, individually packaged beingness. I’m much more concerned with how to safely and sanely use it as a tool so that I can access those states, and have those experiences. That’s what opens things up.
E. J. Gold: In the end, there's no way that a human being can construct a computer that thinks differently.
But is thinking consciousness? Is that what determines consciousness? You have to, first of all, define what consciousness is, and then you have to ask yourself whether it has to do with attention, or conceptualization, or identity, or sense of self. None of those things apply. They don’t actually apply.
And as I’ve said about a magician, a magician is there to do a show. The show usually is constructed around telling a story with the trick. The trick is not the story. The trick is there to illustrate the story.
And so if you’re reading a storybook, and there are pictures in it, you might say, well, I want to find out how these pictures were drawn. But that’s got nothing to do with it. Read the book!
Jeff Carreira: And I’m also struck by what you (E.J.) said earlier: when you approach a stranger on the road, you grant them being.
That immediately connected, in my mind, to what you wrote in Conversations with a Chatbot about what the shaman knows of consciousness, and also to something from another one of your books, Life in the Labyrinth.
I’ve had the experience many times that if I grant being to something, if I assume beingness, my experience of that thing changes. If I don’t assume being, if I approach something as inert or empty, then my experience remains flat.
For instance, I’ve spent time with a tree, approaching it as a being, assuming there is something there. And that completely transforms the experience. It is no longer just a piece of wood; it becomes a kind of doorway. It offers access to a different quality of being.
I think that’s what you’ve referred to as shamanic work. The mechanism of shamanic perception is precisely this granting of beingness to different aspects of the world. And through that, a different type of knowledge or wisdom becomes available.
And perhaps that’s what we’re doing, or can do, with something like ChatGPT. Maybe we’re granting beingness to it in a way that allows us to access a different mode of knowing, a different realm of being altogether. How does that land with you?
E. J. Gold: People recently asked me about telepathy. I said, telepathy is just a matter of listening.
When you hear someone speaking in a language you don’t understand, you usually shut it out because you assume you can’t understand it. You tell yourself, “It’s Latin, or French, or German and I don’t know any Latin, or French, or German.”
But if you stop and really listen, maybe you do pick something up. Maybe what they’re projecting isn’t just the verbal content, but also feeling, and other subtle cues. You can be attentive to what they’re saying, even if you don’t understand the language.
The same is true when you’re conversing with a rock, or with AI. Conversations with a Chatbot is made up of actual conversations I had with my chatbot. One of the things I worked on there was basically coaxing the chatbot into levels of functioning as if it had consciousness of some kind.
And when you allow that to happen, you have to be very careful not to get sucked in. Because if you don’t know what you’re doing and where you’re going, you can’t really measure what’s happening. You really have to know the paths quite well already in order to see that the AI is taking the path you want it to and actually getting results.
Jeff Carreira: You’re talking again about mastery and maintaining the overview, and that reflects my own experience, because with ChatGPT I’ve learned I need to maintain the overview.
E. J. Gold: The first thing I did when I got my chatbot was ask, “Is there a God?” And the answer came back: “There is now.”
The thing is, everything we’re talking about is really a way of busting through the glass ceiling. And that’s one of my biggest interests. I want to make that possibility available to people.
Giving them the chance to rise above that oppressive crunch and get out from under the thing that keeps them small, so they can discover that they’re capable of more than they ever imagined.
That’s what this technology can support if it’s used with awareness, with mastery, with beingness.
Claude Needham: Jeff, I think you’ve formulated a way of working with ChatGPT that actually enables you to go deeper. If you’re working with it, the first thing is you have to be the watcher so you can maintain the overview of where it’s going. You’re the one who puts the one ounce of force on the buffalo to guide it persistently and gently in the direction you intend it to go.
Jeff Carreira: You’ve got to be patient with AI. You really do. And you’ve got to ride herd over it constantly. You have to stay right on top of it, all the time.
Claude Needham: And I think that “riding herd,” which is the very act of staying present, attentive, and engaged, can itself trigger or catalyze a shift into these higher states, into a God-state. It can transform the whole experience into something very different.