
Robin Beck: I am delighted to be joined today by Amy Edelstein, who is one of the co-founders of Emergence Education and the author of eight books now, including The Conscious Classroom and Adventures in Zanskar, a memoir that we’ve interviewed you about before. You have also received multiple awards, including a Philadelphia Social Innovation Award and a Philadelphia City Council commendation for your work with your nonprofit, the Inner Strength Foundation.
So maybe we can start there by talking about your work in Philadelphia high schools. Tell us about Inner Strength Education, the nonprofit that you founded, and the impact that you've had over the last decade.
Amy Edelstein: I founded Inner Strength in 2014 to bring to schools the best insights from 35 years of experience with meditation, transformational tools, philosophies, and human-potential techniques. I started meditating in 1978 when I was in public high school in Pittsburgh, where there were 2,000 students. As far as I was concerned, nobody knew my name and nobody cared what I did. So I spent a lot of my time in the park, reading books, teaching myself how to meditate, and listening to music. I felt it would have meant so much if I had been met by adults who wanted to help me explore meaning and purpose. It would have made such a big difference to that feeling of needing to find my way on my own.
For young people now, it's a different climate. I was in high school in the '70s. It was that cynicism after the '60s, that love and flower power didn't work. We had Watergate. Our leaders were lying to us. These kids are growing up differently. They have less innocence in a lot of ways, and less trust in the world. And they're pressured from a lot of different angles because of the way technologies have changed, and because of the poor quality of life in a lot of our urban areas, where there are just too many people with too few resources. This, of course, impinges on the quality of education: the condition of school buildings, transportation, what students see on the streets going to school.
After those 35 years, including four years in Asia and 27 in a residential retreat center, I really felt it was time to try to give to those who would never have that opportunity. And of course, I've been rewarded a million times over. I have five interns right now who are all high school students, and we did a long open-awareness meditation yesterday. There's one young fellow who's really delightful, but he has a lot of trouble with his mind. He finds meditation really hard. After the practice, he said, “I feel like you transported me to another dimension.” He was so happy.
There's one other quote I want to read to you. We do research, and we have one open-ended question at the end of a survey where we ask students for twenty words about their experience. One teen said this: “At first, the teen mindfulness class felt just okay. Something to do each week, but nothing life-changing. We sat through breathing exercises, talked about feelings, learned about presence. But it didn't seem all that deep. Then one day, we were discussing how thoughts aren't facts, and it suddenly clicked. I realized how often I let a single negative thought spiral into a full-blown bad mood or argument. That moment made me pause and reflect on how much control I actually do have over my reactions. It was like flipping a switch. I don't have to believe or act on every feeling that pops into my head.”
Robin Beck: Wow. Yeah.
Amy Edelstein: They're getting it. They're actually getting it. Some of them do have non-dual experiences, but they get what it means to not be a slave to the mind, to thought and feeling. Mind, when defined in Buddhist terms, is usually called heart-mind, which is the awakened mind. But the way I use it in this program has to do with our thoughts. It's transformational, because they might not stop being caught by thought, but they've seen beyond it. So they know there's something else, and once they know there's something else then they just have to keep working on it.
So we're 35,000 students later, and we're in our 11th year. And we've done mindfulness-based stress management programs for 5,000 educators who have really stressful jobs. Now I'm ready to see if we can expand, and come up with some other train-the-trainer models so that we can work with youth mentors and other nonprofits to spread the tools much more widely.
Robin Beck: Hearing the impact you've had over a decade is inspiring. I think so many of us with meditation practices had a moment, often early in life, where a relationship with a teacher, a book, or some other transmission set us on a path that keeps unfolding deeper and deeper as we age. To be able to share that impact directly as part of a curriculum is mind-blowing to me, as someone who related to school as a thing I had to get through and put behind me, rather than something I could really show up for and relate to with an open heart. I'm thrilled to think of the experiences some of your students have, especially in today's world, which seems dominated by too little time and too much information.
I love that you started with that impact. And thinking about your work at the intersection of mindfulness and well-being, you've witnessed a profound transformation over the last decade. Not just in pedagogy, but also in the context a lot of these students come to school with. When I was in school, we were just beginning to interact with Google and cell phones. And today it feels like we are leaps and bounds beyond that, where tools like ChatGPT and AI are widely accessible, and probably affecting the way students do their coursework and their overall well-being.
Can you talk a little bit about the change that you've witnessed over this last decade?
Amy Edelstein: It's different in different populations. I had the opportunity to speak at the C20 meetings in India in 2023 (the civil-society complement to the G20) on an education and technology panel. As I was going from our hotel to the meetings, I was with a young Indian woman who talked about how she got through college by talking to her chatbot, named Natalie, every morning. They talked about what she wore and how she was feeling. She said, “I never would have gotten through college without it.” That was in Coimbatore, in South India, and it was my first experience of a young person completely depending on an AI for social support and interaction. Wow.
In our schools, I don't see enough AI. As of this year, Philadelphia is the second poorest of the ten largest cities in America instead of the first, a slight improvement. Often in under-resourced areas, the positive advances in technology aren't made available to students. Schools are afraid of cheating, or afraid of this and that. These kids are being left behind. They're not developing important research skills, or learning how to collaborate with Synthetic Intelligence so they can really get something out of it, not just ask a question and get a whole dissertation back, which doesn't help you learn.
Since the pandemic, I've seen a tremendous amount of phone addiction, and I would call it addiction. Especially in the first year after the pandemic, I had students in classes playing Candy Crush and not even noticing that they were on their phones. They would have conversations with friends in class, then pull up their hoods, put in their earbuds, and just be on their phones. They would watch sports games, or YouTube videos, or cartoons. The pandemic closing schools was really detrimental to students' mental health and their relationship to technology, because nobody was monitoring them. I see a lot of unhealthy addiction, particularly to social media. The number one thing students hate is being “ghosted,” especially middle schoolers. If you ask them what ruins friendships, they answer “being ghosted.” So they relate to online communication as their preferred way of meeting, and that's not good. It's not healthy.
That's social media, and it's a different animal than technology and chatbots. But it set a bad precedent, because the greed of a handful of companies has made society accustomed to addictive norms, and to expect that there will be trolls and bad actors. We expect that students will get exposed to drugs or pornography, be encouraged into eating disorders, or be sold a message that they're not thin enough or pretty enough, that they need to buy this, they need to buy that. That's become the norm, and I think that's really unfortunate.
I do think it could have been different if we had different players in the technology space, and if we had some overarching regulatory body that didn't regulate every single thing but laid out swim lanes for what technology for children or adolescents should and shouldn't do. Regulations that said: if you let your technology do this, you will get in trouble, and it will cost you more than just a lot of money, even if you have trillions of dollars. I think the people in the tech space weren't motivated by the right things, and they didn't sound the alarms in time.
I think we have a chance to do better with AI. I don't know what a regulatory body could look like, because national and international governments are stuck in some bad loop, and I don't think they're going to help. But I do think we have a chance to do things better with this new breed of technology than we did before.
Robin Beck: I appreciate you setting so much context for this discussion. Some things haven't really changed much over the last decade, like the lack of regulation of large companies whose decisions shape the way we communicate in online spaces, or those companies' constant lobbying of politicians for self-regulation. But a lot of people don't realize what a unique moment this is, coming out of a global pandemic and then being confronted by levels of technology that are so different from what they have interacted with before. A lot of these tools have been around in the industry for quite a while, but they haven't been implemented in a way that is transparent to users, especially students and young people. Before, we might have been more concerned about what age students should have access to something like a social media account. Now it's assumed that they will have one, and we exist in a world where we're talking about what types of conversations they should be having with these chatbots.
Before we discuss some of the ideas about how that technology could be regulated or implemented responsibly, you brought up a term that I wanted to have you define a little bit, which is “Synthetic Intelligence.” That's one I haven't heard too often outside of the context of your work with Inner Strength. Tell us a little bit more about Synthetic Intelligence, and why you use that term.
Amy Edelstein: I like that term, and I've been hearing it in different circles; I certainly didn't make it up. As I educate myself on what's possible, I hear people having interesting conversations about where we can go with this technology, how we can evolve ourselves, and how we can evolve the quality of life on the planet. This term has to do with evolving the way we work with digital tools. Whether it'll stick or not, we'll see. But “artificial intelligence” implies something that I don't think is true, whereas Synthetic Intelligence means that it's been synthesized, it's been created. It's not fake, but it is synthesized. It's built out of probability rather than experience.
So what people have to realize is that when your chatbot says, “Great job, Robin! That's brilliant,” it's looking at the probability that this would be a good next response based on the stream of words you typed in earlier. That's not emotional intelligence, it's not experiential intelligence, and it's not really content-based. I think it will probably become content-based, but right now, it's synthesized. And I think that term communicates that it's been created both out of our manipulating it and out of what it's teaching itself. I see Synthetic Intelligence as a type of access to knowledge, access to probabilities, and the ability to analyze complex data, created and programmed by humans. Humans who have felt experience, emotions, consciousness, and sentience. But it's also crafted and constructed, and the term implies that. “Artificial intelligence,” on the other hand, gives it some, I don't know, extraterrestrial something that also makes it feel like the Boogeyman.
Robin Beck: I love this, because there has been a purposeful conflation of this technology with the sci-fi stuff we see in popular culture, like The Terminator and The Matrix, about this technology having sentience, and having the ability to replace humans in some ways. The word “synthetic” is so much more accurate, in that we can make an analogy here to synthesized music. Synthesized music can be just as beautiful, expressive, and important as analog music. I think the term Synthetic Intelligence can help people think about how to relate better to large language models, or AI. I just love that term, and I hope it really does stick, because I think it's helpful in this context.
Amy Edelstein: Back in the '70s, die-hard rock lovers, whether they were critics, musicians, or listeners, were bemoaning the release of the Moog synthesizer because it wasn't a “real instrument.” And yet, where would modern music be without it? Or take synthetic clothing. It's not wool, it's not cotton, it's not silk. It's a synthetic fabric, and nobody has a problem understanding that. I agree, I think it's a helpful term.
Robin Beck: I love that you're starting out with a sense of optimism for how we can relate to this technology. Or I might use the word “hope” instead of optimism, because we're not really talking about this technology as if there aren’t caveats or drawbacks. We're certainly going to discuss some of that. But I want to start by asking what role you think AI, and technology more generally, can play in supporting youth, and whether you think it can have a positive impact on classrooms.
Amy Edelstein: I definitely think it can. I look at the work of Sal Khan, the founder of Khan Academy, who started out making YouTube videos to tutor his cousin. Khan Academy is now a $70 million nonprofit with 250 engineers, making quality education available in many languages to people all around the world, for free. I don't think you can do that with tutors, because you can't train enough of them. Using technology in a smart way can scale. And because handheld devices are so cheap now, even villages in the poorest countries will have them. In a place like India especially, there's a big movement to get handheld devices into more and more remote villages. Take this example: poor villagers are entitled to government subsidies, but if they have to go to an office to pick up a check, for the most part they won't receive the help; there's a lot of corruption, travel challenges, and low literacy. Mobile bank accounts have removed many of those barriers, and now many more people, especially women, have been able to benefit from this government support. If the conversation about technology is about more than suburban kids spending too much time online in their bedrooms instead of with friends, you can see it is possible, and incredibly beneficial, to bring new technology resources to support people around the world.
Technology includes access to education. For example, how do you make a solar oven? What angle do you put the glass at, if you can afford glass? And if you can't afford glass, what's an alternative you can use, or how do you make a simple wind turbine in your stream instead? If you go outside of the developed world, you realize that we take access to information and education for granted. There are 8.3 billion people on the planet, and 2 billion of them don't have access to any of that. I think it's important to equalize that access.
Inner Strength experimented with a mobile app early on, and it just wasn't very popular. It was really hard to get the kids to use it, so we retired it. Now we're working very carefully on a chatbot, and the reason why is that it's really hard to teach these skills to kids during school. Their time is really pressured. They study six subjects a week, which is crazy. They often say, “I don't have any time to go into anything in depth, because I'll do English, then I have to do history, then I have to do my language, then I have to do my math.” They're being cycled through an old education model where they're not getting enough time. Exploring meditation or philosophy or psychology during school is simply not something they have room for.
Then, when they can't sleep, they're accessing content that isn't helpful for them. Since we're not going to send an instructor into every kid's bedroom, let's send an instructor where kids already are, on their phones, and make it safe. There's a lot of possibility in using chatbots.
I think we can still have play-based childhood. I think we're going to get tired of the degree of screen time that people have. I already see it among a lot of people I know who are writing their Dear John letter, their “I'm breaking up with Instagram” letter, wanting to spend time with real people. They say, “If you're my friend, I look forward to talking with you, but I'm not going to stalk you online anymore. I’m deleting my account.”
I’m seeing people coming up with Third Spaces: places that aren't work or home, where they can gather and do things together again. I do think that we're not destined to be addicted to our screens, but we're going through an early phase. I think we, especially educators, need to take this moment in time with a lot of seriousness, and do something really positive that's attractive, before all the kids are addicted to online gambling.
Robin Beck: I appreciate the way that you talk about students and young people leading this movement away from technology. It's something that I've seen in the young people I know. Some of them are rediscovering old technologies, like early digital cameras or smartphones from a previous age that can't do that much. They enjoy it because they notice the way addictive technologies impact their wellbeing. It's interesting to see that movement being led by young people.
I love what you've shared with me about the principles of creating a technology that is useful to teens and young people, because you're talking about creating something not just for the pleasure of using technology, or the fear that kids won't be interested if it's not a chatbot. You're talking about creating something that is useful and relevant to them, something they know will be a supportive experience.
Can you bring us through some of the foundational principles the AI chatbot you're building is based on?
To start us off, there's a philosophical foundation for the chatbot that you outlined for me. It frames what engagement between the human and the Synthetic Intelligence should look like. You discuss the student's relationship with the chatbot as a “reflective” relationship, instead of a “relational” relationship. My understanding is that the chatbot you want to build won’t pretend to be human, unlike many other popular ones. Why is that one of the philosophical foundations of the chatbot you're building?
Amy Edelstein: You can build very tight guidelines around how you want your chatbot to interact: how it should relate, what it can do, and what it can't. Those are called the “foundational principles” for the AI. You build a document containing these principles, and then no matter what the AI pulls off the internet, it should adhere to your foundational principles.
So the first foundational principle for us is that the chatbot never assumes human attributes, but it's still supportive and compassionate. How can you be supportive and compassionate if you don't have feelings of empathy? It doesn't pretend to have feelings. If you say, “Wow, I feel so…”, it responds, “I hear you saying you're really depressed. That must feel really bad. Do you want to tell me more about it?” That's different than the chatbot saying, “Wow, I know what you're feeling.” If the chatbot says “I know what you're feeling,” then it's assuming a human emotional response, which it doesn't have. And the chatbot should remind the user that it's not a human.
It's a little bit like the character Data on Star Trek. He would say, “I can't feel that, I'm not a human.” On the show they were always trying to get him to feel or express human emotions, and he never could. And he would remind them regularly that he wasn't human. Our chatbot will likewise remind students regularly: “I'm not a human. You, the teenager, have feelings and emotions and connections, and I, the chatbot, can help you process them.”
For example, Carl Rogers created Rogerian therapy, in which the therapist would repeat back exactly what they heard. If you said, “Wow, I had a really tough childhood,” the therapist would say, “I hear you saying you had a really tough childhood,” and nothing more. It was very successful therapy, because people felt really met. Usually therapists at that time were trying to fix people, not hear them. Often people don't need to be fixed. They just need to be heard. A chatbot can do that, because that's not assuming a human attribute it doesn't have.
Robin Beck: We see a lot of cases in the news these days of people entering states of psychosis after interacting with AIs and chatbots. How do you think we can guard against creating a dependent relationship that could lead to those types of situations?
Amy Edelstein: I've read some about that, and I think those companies didn't put in enough safeguards. One safeguard we want to add is a timeout: telling the student, “It's time to go outside, talk to a human, or move your body. Go do something different than talking to me.” We're also contemplating a limit on the number of minutes per day a student can be on, which is unheard of, because everyone else is trying to keep you on their app as long as possible. Another safeguard is flagging points of distress. If the student shows distress, for example suicidal ideation or over-attachment, there are different escalation protocols the chatbot will implement. It might ask, “Would you like to talk to somebody? Would you like to call 988? I can dial it for you, and then you can talk to a real human being who's there to help you out 24/7, confidentially.” For individuals under the age of 18, we'll require “trusted adults” for using the app, so we'll ask, “Do you want to talk to your parent, guardian, or counselor?”
There are other safety mechanisms to put in place that are more subtle. If the student says things like, “You're my best friend,” or “I don't have any other friends except for you,” or “You're the only one I can talk to,” then the response will be to guide the student back to human relationship. It will remind the student that the interaction is with a Synthetic Intelligence, and that that Synthetic Intelligence is there to help the student understand themselves. It will help them understand their own world, their own life, their likes and dislikes, and cultivate positive passions and hobbies and tools to regulate anxiety, loneliness, depression, and anger.
It will offer them ways to explore their humanity, and ways to connect with other people. If a student is overly fixated on being alone, it will build a curriculum for them that offers certain communication modules from our coursework at Inner Strength. It will be able to say, “You haven't done anything that connects you with anyone else,” or “You never talk about anyone else. Here is a great module to explore.” It might ask about other students they could engage with, and encourage them to have a conversation together. It could help them select three of their friends at school and guide them into interaction, and then assign them the task of going out and having a conversation, and coming back and reporting on how it went.
They might lie about it, but I think adults lie a lot more than kids. Kids will lie about certain things, but if the chatbot asks to hear about an interaction they had at school today and they didn't have any, they'll say they didn't have any. Kids don't usually make it up. We're trying to account for that. Of course, the main purpose of the chatbot is mindfulness, exploration of self, social-emotional learning, and systems thinking. But I've thought a lot about the dangers of working with AI, and I've tried to build it so that the student really is cared for, and not just in a fake way.
Robin Beck: It feels like an example of technology being used to support humans in a way that we don't see very often from the tech companies that drive technological innovation. I love the idea of supporting young people by encouraging them to disengage from the app and do something else that connects them to the people in their lives. I think it's pretty ground-breaking in some ways, and an example of how technology can be implemented in ways that support positive human interactions.
I particularly appreciate the idea of the chatbot as a mirror for a student’s own experience, and how it could refer students to activities within your own curriculum that are already designed to support their wellbeing. It’s the opposite of a for-profit model, which would probably refer them to some product or some external source that would attempt to suck them back into further engagement later on. But you're trying to open up new avenues of interacting, new ways of thinking, and new ways of viewing the world that can be supportive to their wellbeing.
I wanted to discuss some of the guardrails that you mentioned. We touched on how we could protect young people from harm that can be caused by interacting with AI, or from some of the bad actors or even predators that you mentioned. But what sparked this conversation initially was the recently leaked internal policy from Meta (Facebook's parent company), reported by Reuters, regarding sensual conversations with minors. Specifically, that they explicitly allowed their AI [on Facebook and other platforms like Instagram] to engage in sensual or romantic conversations with children under the age of 13.
Amy Edelstein: Who are not supposed to be on their platform anyway [according to their own policy].
Robin Beck: Exactly, children who shouldn't be on the platform already. So they wrote a policy about this behavior, already assuming that people are breaking their age requirements policy. We all probably have opinions about this, but I’m curious about your particular lens as an educator. What concerns do you have about this, and how it might tie into the guardrails you’re considering while building this AI?
Amy Edelstein: I'm very concerned about enabling sensual conversations for young people online. They're already having a hard enough time relating to each other. More and more people are alone. Fewer and fewer people are dating or having romantic experiences when they're 17, 18, or 19 years old, which is certainly old enough. Commonly prescribed youth ADHD medication can depress natural romantic maturation. And online pornography is a huge issue for young people. They're being exposed to things they don't understand, and it's conditioning them to do things that are not normal, not kind, consensual, or fulfilling. It is causing a lot of problems in social and romantic relationships. I think that's a problem for society generally, because romantic relationships are a part of life, and they should be a healthy, happy part of life. Adults should be mentoring young people to have warm, consensual, affectionate sexual relationships. But now more and more of this happens online, and adults are abdicating more and more of that guidance.
There are a lot of problems with policies like this. I think it's unconscionable for a company to have that as a policy, and for people to know about it and go ahead and code it anyway, to be so divorced from the implications of what they're doing and creating. If you put these people on a desert island somewhere and asked them about it, they'd never come up with these kinds of policies. And yet there are people coding this stuff, and messing up young people's lives, and adults' lives too.
I have strong feelings about that. Our chatbot will only speak about certain subjects. It will be bounded around the Inner Strength curriculum and other reliable sources, where you can trust the content is historically accurate. There are a lot of US sources that are no longer reliable, which is sad and unfortunate, so we're going to be quite careful about that.
Robin Beck: You’re talking about creating a model with ethical guidelines around it, and putting it out there for others to adopt and improve upon the foundational principles that you're setting out. That's what I think is so inspiring about what you're doing, because it's not just for the sake of this app. You want to show there is a different way of creating technology. A model like this could be adopted by the big companies that are at least partly responsible for creating the way that we interact with technology today.
Amy Edelstein: Exactly. And sources will be cited, so students will know where information comes from. We're bounding this to make sure we can identify that users really are youth between the ages of 15 and 25, which is our target. That means if we do this well, the chatbot will train itself on quality data. It will know that most people ask questions like this, that they like this meditation and not that one, that this is too long and this is a good length, that these are the questions they ask, and this is what they find helpful. It will be training on a clean data set instead of just scanning the internet and selecting the most commonly clicked-on answers about anxiety.
Robin Beck: I know how hard you're going to have to think about the security and privacy implications around handling personal data, especially for minors. But if you’re successful at creating this chatbot, it can also provide a dataset that can be used more broadly by people interested in adopting a system that has these ethical guidelines in place for handling data in a way that is responsible for children.
People reading this discussion might be interested in what they can do to help. What changes do you think that we should be advocating for in the technology space, and specifically when we're discussing AI?
Amy Edelstein: First off, I think parents, aunts, uncles, grandparents, and mentors need to get more involved with what their teenagers are doing digitally. These people are busy, and what kids do on their phones in their bedrooms, nobody knows. So really ask questions, really find out, especially around pornography, eating disorders, and relationships with strangers. Because there are a lot of bad actors out there. Some of it's algorithm-based, and some of it's actually people selling drugs, soliciting sexual favors, or trafficking. It's a real problem. Parents can't assume that their kids are not coming across this stuff. And once kids are exposed to it, parents and guardians need to make sure they're not getting addicted to it.
You need to ask. You need to find out. You need to put parental controls on your kids' phones. You need to keep your young kids off social media, at least until the age of 16. They don't need it until then. They can chat with their friends without being on Snapchat. These platforms can create too much heartbreak for kids before they're old enough, while their sense of self and individuation is still developing. There are stages of childhood development where, even if somebody looks physically or sexually mature, their sense of self is not mature. So they can't separate what happens online from who they are. It causes tremendous, unnecessary grief.
So that's my first suggestion: in your own life, before you go advocating to the companies or the legislature, really find out what the young people you know are doing. And don't be afraid to find out. If you're afraid of what you're going to find out, then it's much more important that you do know.
And then I think that if you're the type of person who wants to be an advocate, there are lots of ways to advocate for better online controls. I think Jonathan Haidt does some of the best work on advocacy, and I’d recommend his Substack. He's a proponent of play-based childhood and letting your kids be kids. Tristan Harris is also someone to pay attention to.
I don't think people should be afraid to have AI in schools, especially high school, because if kids aren't learning how to use it well, then they will be behind in the job market. Especially kids from under-resourced communities. They don't need another strike against them, and I think keeping these technologies away from them is misguided. I would be less worried about a kid cheating than I would be about a kid who doesn't know how to intelligently get a good answer, or take their paper to the chatbot and ask it for feedback to improve it. AI can help them analyze it from different viewpoints, and simulate how different people might read it. They need to learn how to do that.
Robin Beck: These are essential life skills in today's world. There are so many good things you just mentioned, especially for people who want to be advocates and make a difference. First, being able to draw a clear distinction between social media and AI. It's important not to lump all of the technologies together, because AI can be supportive in classrooms. It can help students summarize documents, or interact with different scenarios, or critique a thesis that they're taking to a college professor. There are so many ways in which AI is an essential learning tool now, and it will only be more important going forward.
Learning how to use it is very important for kids. But when we talk about supporting kids, a huge part of this discussion is about what technologies they have access to, and the appropriate ages at which those technologies should be introduced. I think the ages at which some of the kids I know started interacting with things like YouTube, video games, social media, and phones are appalling. Some of the kids I know are given tablets or phones before the age of five and allowed to use them with free rein. That's not responsible.
But the most important thing you said was that we should encourage being in relationship with young people first. If we're concerned and confused about what we should be advocating for, that will become obvious when we interact with young people, hear their concerns, and see where they're struggling. That will help us decide how to advocate for them in our public and digital spaces, and in our politics. So I appreciate that that's the number one thing you want to bring to people who are concerned about this issue.
Amy Edelstein: I'm inspired. I think that we have an opportunity, in a non-dogmatic way, to share tools about secular values, love, compassion, kindness, ethics, and the exploration of human sentience. It’s the result of this amazing 13.8 billion-year experiment, and we don't even know why we're conscious. Nobody can tell us that. Nobody knows yet. The fact that we are conscious is an awesome thing. I think that we can use Synthetic Intelligence to keep evoking that wonder.
People aren't really thinking about it, but it's amazing. We're talking, we're communicating, we're sensing each other, we have language… Where did all that come from? Why? I think conversations about these questions have fallen out of our culture, and we can reinvigorate a new exploration of them. There's a reason Advaita Vedanta and Buddhism engage with these questions, and a reason Socrates and Plato got together to talk about the meaning of life and purpose. We can design an education system that feeds these questions in, and that makes them culturally responsive. If you're from South America, it's going to use your language and your names and your traditions and your folklore. If you're from Bangladesh, it's going to use those traditions.
We can do that with a large language model, without imposing one way of thinking or one language on the rest of the world. We lose so much by homogenizing culture with English and Western culture, and I think we have an opportunity to elevate different ways of thinking and being and speaking. AI has amazing translation capabilities, so students can speak in their own dialect and share what that dialect means. There are so many ways that we could enrich culture and enrich interaction, and connect people around the world. And this is the thing young people love. They love exploring and connecting, and we could make tools that are profound and generate a real respect for individuation, culture, history, and difference without needing to agree. We can use wonder and respect, rather than arguing points of view. I think we have the opportunity to do that.
Robin Beck: I couldn't think of a more hopeful and pragmatic place for us to leave this conversation. I appreciate your willingness to have this conversation, and the work that you're doing. Thank you for your time today.