Can AI Really Support Missions? Some Nonprofits Are Betting It Can
From suicide prevention to frontline casework, AI is becoming a powerful partner in expanding nonprofit capacity.
By Stephanie Beasley
Senior Writer, The Chronicle of Philanthropy
Researchers know that some outward behaviors can signal a risk of suicide, but the private warning signs have been much harder to identify. The nonprofit Stop Soldier Suicide is using AI to surface those private signals. Its Black Box Project uses AI and other forensic tools to analyze data from devices, such as mobile phones and laptops, loaned by families of people who died by suicide, revealing patterns such as financial instability or poor sleep.
“What was not available anywhere was that private journey of folks from the time they took their own life and then backwards,” says the group’s CEO, Keith Hotle. “It tells a story we can’t really find anywhere else, and that will also provide us with clarity about how we develop strategies to try to intervene earlier.”
The Black Box Project is now expanding its research to other high‑risk groups, including construction workers, first responders, and young people. Future research will incorporate data from wearable devices, credit files, and clinical mental-health records, says Austin Grimes, the group’s chief product officer. The long‑term aim is to make the project’s insights available to other organizations working in suicide prevention, Grimes says.
“We started designing a system that would understand when certain topics were too sensitive for an AI to respond to an individual.” – André Heller Pérache, Director of Signpost, an International Rescue Committee project
Most nonprofits are taking small steps by using AI to draft emails, grant applications, and reports, according to a recent report by the Center for Effective Philanthropy based on a survey of foundation and nonprofit leaders.
But others are using it to transform how they meet their missions. Their experiments reflect a broader shift from curiosity to practical use, even as leaders grapple with equity concerns, data-governance challenges, and the need to ensure humans remain central in the most sensitive work, experts say.
According to a Mission Partners–Chronicle of Philanthropy survey of nonprofit leaders, 57 percent of respondents strongly or somewhat agree that they are behind the curve on using AI. But that impression is not always correct, says Rachel Dzombak, who teaches about AI innovation and responsible adoption at Carnegie Mellon University. Many of these organizations are piloting promising ideas but lag in implementing them because they lack financing, she says. Nonprofits will need support to move from running pilots to rigorously measuring and evaluating AI’s potential value to their organizations, Dzombak says.
“Some organizations have a lot more resources to play with than a nonprofit might, and that is one of the biggest areas of disconnect and why nonprofits have to be very strategic on what it is that they’re doing and how they’re approaching AI,” she says.
How to Pay for AI
As nonprofits race to incorporate AI into their work, a wave of new funding has come from foundations, the government, and corporations. In 2023, Stop Soldier Suicide won a $3 million federal prize. In 2022, Amazon Web Services gave the group $100,000 in unrestricted funding and $25,000 in computing credits, which helped offset the cost of AWS products.
The Patrick J. McGovern Foundation, which advocates for the use of AI and data science in social-impact work, is a major funder of AI development in the nonprofit world. Last year, the foundation provided $75.8 million in grants to groups building AI tools for public benefit.
Philanthropy plays a critical role in helping nonprofits fully benefit from AI’s potential and gain access to the high‑quality, interoperable data that AI systems require, says Nick Cain, vice president of strategy and innovation at the McGovern Foundation. Many groups lack the resources to, for example, combine housing information with 911-response data — to help them determine the frequency of emergency calls in certain areas and develop improved public-safety plans, he says. Funders also need to support the governance processes needed for data sharing, like determining who owns the data and how organizations should collaborate, Cain says.
“When I think about the role of philanthropy, it is to ensure that as many nonprofit institutions are well positioned to take advantage of AI’s incredible capabilities as possible. No matter how good the AI models get, they will always be better when they are paired with the data of local and lived experience.”
Nick Cain, Vice President of Strategy and Innovation, The Patrick J. McGovern Foundation
The Benefits of Trial and Error
Successfully integrating AI into an organization’s work requires a tolerance for risk that many nonprofits lack. In many cases, the best course of action is to start with relatively low-risk efforts that can free up resources, Dzombak says, and to avoid areas that might cause reputational damage if things go wrong.
The International Rescue Committee has long looked to technology to help advance its mission. It has an in-house team of researchers and tech experts who design and test new technologies, and it is now setting its sights on AI. For example, it is developing a smartphone app to help health workers diagnose mpox in areas with limited medical resources. A $400,000 award from the McGovern Foundation is supporting its development of a crisis-response system that uses AI to provide resource lists and other information to displaced populations.
The IRC began exploring AI chatbots after the release of OpenAI’s ChatGPT in 2022, says André Heller Pérache, director of Signpost, an IRC project that launched in 2015 to provide multilingual information to communities affected by crises. As generative AI became more accessible, the research and tech team began testing an AI virtual assistant that helped Signpost staff handle 60 to 70 percent more information requests. Initially, AI responses were accurate about 70 percent of the time, prompting IRC teams to build a system with stronger safety checks. The new version is designed to recognize when questions are too sensitive for automation and require human support, according to Pérache.

“We started designing a system that would understand the limits of its own knowledge and that would understand when certain topics were too sensitive for an AI to respond to an individual,” he says. “The AI manages the everyday work, and the humans manage the work that requires their humanity and their competence and a human touch.”
Solutions for All
In Detroit, Samaritas, a human-services nonprofit that works throughout Michigan, is connecting tech start-ups with social-service staff to create AI tools that support caseworkers, says CEO Dave Morin.
When Morin was a board member before becoming CEO, he would often see missed opportunities to reduce the administrative burdens on staff members dealing with issues like foster care and affordable housing. Staff stuck in paperwork-heavy jobs can suffer from burnout, which increases turnover, he says. With a degree in computer information systems and experience at tech start-ups, Morin naturally looked for tech fixes.
Samaritas has started to experiment with ways to use phones and tablets to lighten the load by performing tasks like transcribing difficult conversations with families and providing real-time prompts that help staff respond during court hearings and other high-pressure settings, Morin says. Involving nonprofit workers in developing new tools helps them see AI as supportive rather than a threat, he says.
“I think the greatest thing we can do is pilot a few simple opportunities to show people what is possible,” he says. “This is very scary for some workers who can tumble to a conclusion that this is all about eliminating their position when it’s just the opposite.”
Fostering that understanding is critical to the success of AI integration, added Kelli Dobner, Samaritas’s chief growth officer.
“We want to infuse AI into their every day to alleviate that workload so they’re spending more time with their families and community,” she says.