The artificial intelligence (AI) company Anthropic convened around 15 Christian leaders from Catholic and Protestant traditions, academia and business at its headquarters in San Francisco at the end of March for a two-day private summit.
Staff at the firm, which developed the chatbot Claude, sought guidance on how to shape the model’s moral and spiritual behaviour as it confronts increasingly complex and unpredictable ethical questions from users.
Participants who spoke to The Washington Post, the first outlet to report the meeting earlier this month, said they left convinced that Anthropic’s interest was genuine rather than performative.
Discussions reportedly covered how Claude should respond to people grieving the loss of loved ones, interact with those at risk of self-harm and address existential matters such as the AI’s possible “shutdown” or sense of its own mortality.
Participants also examined whether an AI system could meaningfully be described as a “child of God”.
The gathering, which included a private dinner with senior Anthropic researchers, forms part of the company’s broader efforts to embed ethical reasoning into its models.
Anthropic has long promoted “constitutional AI”, a framework designed to align systems with explicit principles of helpfulness and harmlessness.
One attendee, Father Brendan McGuire, a Catholic priest based in Silicon Valley who had previously contributed to the drafting of Claude’s guiding constitution, told reporters the company was “growing something that they don’t fully know what it’s going to turn out as”.
He added: “We’ve got to build ethical thinking into the machine so it’s able to adapt dynamically.”
Meghan Sullivan, a philosophy professor at the University of Notre Dame, said she was struck by the sincerity of the exercise. “A year ago, I would not have told you that Anthropic is a company that cares about religious ethics,” she remarked.
Another participant, Brian Patrick Green, director of technology ethics at Santa Clara University and a practising Catholic, described the central question as: “What does it mean to give someone a moral formation? How do we make sure that Claude behaves itself?”
Anthropic has not issued a public statement on the summit.
The company, founded by former OpenAI executives including Dario Amodei and valued at about $380 billion (€325 billion), has positioned itself as a leader in AI safety.
It has previously resisted certain military applications of its technology, citing ethical constraints, which led to clashes with the US government.
Anthropic refused to remove safeguards that prevent its AI from being used for fully autonomous weapons and domestic mass surveillance.
The Pentagon responded by designating Anthropic a “supply-chain risk”, effectively blacklisting the company from government contracts.
The move to consult religious leaders marks an unusual step for a Silicon Valley firm, where ethical debates have traditionally been framed in secular, technical terms and where liberal and so-called “woke” values have appeared to be dominant in recent years.
As AI technology becomes more widely adopted, it has come under increasing scrutiny.
Notably, companies such as Google and OpenAI have faced criticism over allegations that their AI systems may have encouraged or enabled self-harm or suicidal behaviour in some users, allegations that have led to several lawsuits.