What I'd Ask Before Your Nonprofit Says Yes to AI

AI adoption is a change management problem, rooted in culture, not just technology.

Header image: a finger pointing up, signaling 'excuse me!', alongside the essay title.

Your board chair is asking about AI. Your staff is already using it without telling you. Every vendor in your inbox has a "nonprofit AI solution" they'd love to demo. And someone in the organization has to decide what to green-light, what to block, and what to put off until there's more information.

That decision looks like a technology question. It's not. It's a change management problem, one that touches your mission, your staff culture, your donor relationships, and the trust your community has in you. People inside your organization will fall along a wide range of adoption profiles, from excited to hostile, and most of them have legitimate reasons for where they land.

I'm a marketing strategist with over a decade in regulated industries — healthcare, government, and nonprofit. One of my current contracts has me running the entire marketing operation for a pediatric pain education nonprofit, which means I've had to figure out where AI actually helps and where it creates new problems, on a budget and timeline that don't leave room for expensive mistakes. These are the questions I worked through myself, and the ones I'd want any nonprofit to sit with before making that call. Because the organizations that think this through will make better decisions than the ones that either rush in or opt out — and the communities they serve are the ones who feel the difference.

Table of Contents
  1. What can AI actually do?
  2. What should humans keep doing?
  3. What problems does AI create?
  4. Are you using AI to avoid hiring?
  5. Is your nonprofit ready for AI?
  6. Do you need an AI policy first?
  7. What does AI actually cost?
  8. Can AI write your grants?
  9. How do you evaluate AI vendors?
  10. Does prompting actually matter?
  11. What happens to your data?
  12. How do your donors feel about AI?
  13. Should you disclose AI use?
  14. Is AI aligned with your mission?
  15. What if you don't adopt AI?
  16. What if you decide not to use AI?
  17. How do you actually make this decision?

What can AI actually do?

↑ Table of Contents
The shift from 'AI drafts, you decide' to 'AI acts, you supervise' is happening now, not in some theoretical future.

Most people encounter AI through generative tools like ChatGPT, and that's a reasonable starting point. Generative AI is good at a specific kind of work: tasks that involve volume, repetition, and pattern. First drafts, research synthesis, summarization, data formatting, personalization at scale, and translating content between formats or audiences.

But AI is broader than text generation. Crisis Text Line uses machine learning to triage incoming texts by suicide risk — identifying 86% of people at severe imminent risk in their first message. The World Food Programme uses predictive analytics to forecast hunger trends up to 60 days in advance across 90+ countries. RebootRx uses AI to sort through hundreds of thousands of research papers to identify generic drugs for cancer treatment. The Stanford Social Innovation Review's landscape analysis of nearly 100 AI-powered nonprofits found four major categories: structuring data, advising, translating, and building platforms.

For most small to midsize nonprofits, the practical starting point is generative AI applied to communications and operations. Here's what that looks like in practice:

  • Drafting and iteration. First drafts of grant proposals, donor appeals, newsletters, reports, social media posts — anywhere you need a starting point faster than a blank page. Instrumentl's 2025 survey found that 71% of nonprofits using AI can write and submit a grant proposal in under a week, and 46% are using AI primarily for drafting. The output still needs substantial editing, but the blank-page problem disappears.
  • Personalization at scale. Customizing donor acknowledgments, email campaigns, or outreach messages across hundreds of recipients without writing each one individually. Stanford Social Innovation Review highlights how machine learning can analyze donor databases and deliver targeted appeals — helping organizations like East Cheshire Hospice send 58% fewer letters while raising 20% more revenue. This is one of the clearest wins in nonprofit AI right now.
  • Research and synthesis. Summarizing long documents, compiling prospect research, extracting key findings from reports, and pulling together background material from multiple sources. This is where the time savings are most substantial and reliable; it's not the flashiest use case, but the one that actually changes your workflow the most.
  • Data structuring and cleanup. Formatting messy spreadsheets, standardizing inconsistent records, categorizing large datasets, and preparing data for analysis. For nonprofits whose donor records live in multiple spreadsheets and an executive director's inbox, AI tools can process and standardize records that would take staff days to clean manually.
  • Pattern detection. Identifying trends in donor behavior, flagging at-risk clients, forecasting demand for services, and surfacing insights from data too large to review manually. This is also where many program-side AI initiatives live, and where the stakes, complexity, and expertise requirements escalate significantly.

Everything above describes AI that generates output for you to review. The next wave is AI that takes actions: browsing the web, executing code, sending emails, updating databases, and interacting with other software on your behalf. These are called AI agents. The shift from "AI drafts, you decide" to "AI acts, you supervise" is happening now, not in some theoretical future. For nonprofits, this means the question isn't just "what should AI write for us" but "what should AI be allowed to do for us — and who's responsible for the outcomes?" The capabilities are real and growing. So are the risks.

You don't have to adopt agentic AI for it to show up in your workflow. The vendors behind your CRM, your email platform, and your fundraising tools are already building agents into their products. One morning, you'll click 'update', and it will just be there. You are going to be affected by the agentic era of AI, and by this rapid acceleration in the technology, whether you go looking for it or not.

For sixty years, the tech industry ran on a pattern called Moore's Law: computing power roughly doubled every two years. That single pattern gave us smartphones, the internet, and every piece of technology your organization runs on today. AI is now on a similar curve, but moving more than three times as fast. Research from METR (Model Evaluation and Threat Research) found that the length of tasks AI agents can complete reliably has been doubling roughly every seven months, and that pace is speeding up. In 2019, agents could handle tasks that took a human a few seconds. By early 2025, they were managing tasks that take about an hour. If the trend holds, we're looking at agents that can run multi-day projects with minimal oversight within the next 18 months. There are a lot of distribution and adoption factors involved, but that's not a theoretical far-off future; that's the next budget cycle.
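To make that compounding concrete, here is a quick back-of-the-envelope projection in Python. The seven-month doubling time and the roughly one-hour task horizon in early 2025 come from the METR finding above; the faster four-month scenario standing in for "that pace is speeding up," and the extrapolation itself, are illustrative assumptions rather than forecasts.

```python
# Back-of-the-envelope projection of the METR trend described above.
# Assumptions (illustrative, not a forecast): agents handle ~1 hour of
# human-equivalent work reliably in early 2025, and the task horizon doubles
# on a fixed schedule. The 7-month figure is METR's reported doubling time;
# the 4-month scenario is a stand-in for "the pace is speeding up."

def horizon_hours(months_elapsed: float, doubling_months: float, start_hours: float = 1.0) -> float:
    """Projected task horizon in hours, if the doubling trend holds."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

for doubling in (7, 4):
    print(f"doubling every {doubling} months:")
    for months in (0, 6, 12, 18, 24):
        h = horizon_hours(months, doubling)
        print(f"  +{months:>2} months -> ~{h:6.1f} hours (~{h / 8:.1f} workdays)")
```

The point isn't the exact numbers; it's that a fixed doubling time turns a one-hour task horizon into a multi-day one within a couple of budget cycles.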

Chart: Measuring AI Ability to Complete Long Tasks (METR)

What should humans keep doing?

↑ Table of Contents
AI can inform a conversation, but it can't sit across from a donor and mean it.

A better starting point: what can humans still do, and what will even be an option once this agentic technology takes off? Stanford economist Erik Brynjolfsson argues that as AI gets better at execution, the economic value shifts to the people who know what to ask for and whether the result is any good. That's true, but I think it doesn't go far enough, especially for mission-driven organizations.

I think the picture is starker than that. As we enter the agentic era, I believe human value in work narrows to five things: money, authority, physical labor, motivation, and trust. Money is budget ownership and the ability to fund tools, people, and programs. Authority is the power to set policy, approve use cases, and assign accountability. Physical labor is anything embodied, like program delivery, events, and health-adjacent work. Motivation is the desire that initiates the goal — the moral judgment and community accountability that define why the work is important in the first place. And trust is the fact that humans are wired to engage with other humans — to hold someone responsible, to look someone in the eye, to believe that the person across the table has skin in the game. AI can inform a conversation, but it can't sit across from a donor and mean it.

Most jobs touch more than one of these categories, and AI is compressing all of them — not overnight, but faster than I think most people are expecting. The difference is which ones you can actively build toward. Money, authority, and physical labor have ceilings that are harder to push. Trust and motivation are the lanes with the lowest barriers to entry.

AI is not going to independently champion pediatric pain education, or food access in rural counties, or refugee legal aid any time soon. It doesn't want anything it isn't programmed by someone to want. The thing that motivates you to care about your cause is the last thing technology will be designed to replicate. And trust — from your community, your funders, your board, your boss, your team — is something AI can't earn or hold. When an AI tool produces a biased recommendation or a hallucinated statistic that damages a relationship, someone has to answer for it. That someone is always a person. Liability doesn't land on software. It lands on you.

The barriers between caring about something and actually doing something about it are getting lower by the month. Trust and motivation are two things you can actively pursue and build a career on — and for mission-driven work, they're the lanes that matter most.

But even as AI lowers those barriers, don't mistake new capability for more free time. Anyone promising you'll "get back half your work week" is selling something. My own experience, while anecdotal, is that AI didn't give me my time back. I'm working more than ever, not less, because I can pull off things that weren't possible for a team of one before. A first draft that would have taken a day takes an hour — so I don't log off early. I do four things instead of one. And if I can move faster, so can everyone else, which means nobody gets to slow down. Maybe one day we will have worked out the economics of universal basic income that allows people to work less, but the market isn't there yet.

Anyone promising you'll 'get back half your work week' is selling something.

Alongside knowing where humans might be able to thrive in this upended job market, you also need to know where the current lines are drawn. Some rules of thumb for today's landscape:

  • Don't use AI when the relationship is the work — major donor cultivation, board engagement, community listening sessions, family outreach. AI can help you prepare, but it should not generate the conversations itself. The Stanford Social Innovation Review's analysis of AI in donor engagement draws this line explicitly: AI for preparation and data, humans for the conversation.
  • Don't use AI when accuracy is high-stakes and unverifiable — legal compliance language, medical or health claims, specific statistics in public-facing documents. If the cost of an error is a lawsuit, a health risk, or a destroyed relationship, the content needs human verification at every level.
  • Don't use AI when voice and authenticity carry the message — your executive director's personal appeal letter, a statement responding to a community crisis, a tribute to a beloved volunteer. An AI version might be technically competent. It will also be obviously hollow. The Donor Perceptions of AI 2025 study found that 34% of donors ranked "AI bots portrayed as humans" as their top concern — perception of inauthenticity damages trust even when the content itself is fine.
  • Don't use AI when you can't review the output. If you don't have time or expertise to fact-check, voice-check, and approve what AI produces, don't publish it. "AI wrote it, and I didn't have time to review it" is not a defensible position.

A useful test across all of these: does this use of AI improve human connection or replace it? Will stakeholders feel more valued or less valued? If the answer is “replace” or "less," you have your answer.

AI Changed Work Forever in 2025
Erik Brynjolfsson, director of the Stanford Digital Economy Lab, breaks down how AI is disrupting work.

What problems does AI create?

↑ Table of Contents
Banning AI drives it underground. A policy, approved tools, and permission to experiment within boundaries will do more than any prohibition.

Every tool creates new problems. AI's are worth naming early because they scale fast and aren't always obvious until they've already done damage.

  • Bias amplification. This goes first because it's the risk most likely to undermine your mission, and the hardest to see happening. A 2024 UNESCO study found women associated with domestic roles at four times the rate of men across major language models. A University of Washington study found resume-screening AI favored white-associated names 85% of the time and never once preferred Black male names. For communications work, the danger is usually subtler than something overtly offensive. It's what's missing: perspectives that are left out. Communities are getting described in language that doesn't reflect how they describe themselves. Grant language frames populations as problems to be solved rather than partners in the work. The output looks clean and professional, and when the person reviewing the AI's work isn't from those communities, it's even harder to catch.
  • Hallucination. AI will confidently fabricate citations, invent statistics, and produce content that sounds authoritative and is completely wrong. It doesn't know when it's lying. A 2025 Deakin University study found that GPT-4o fabricated roughly 20% of academic citations outright, with 56% of all generated citations containing errors. A peer-reviewed study in Nature Scientific Reports found GPT-3.5 fabricated 55% of citations and GPT-4 fabricated 18%. Even though models get more and more sophisticated every month, it's still necessary to verify AI outputs against an original source. No exceptions.
  • Shadow AI. Your staff is already using AI without telling you. They're drafting emails in ChatGPT, running donor lists through free-tier tools, and pasting client notes into Claude. Vital City's survey of 1,200+ nonprofit staff found that 54% had already used AI at work, but only 11% had received any guidelines about what's permitted. That's a leadership vacuum. And blocking it is effectively impossible — even the most sophisticated corporate security programs can't track an employee using Meta smart glasses, snapping a phone photo of a screen to send to Claude, or dictating into an AI wearable. If enterprise IT departments can't contain it, a five-person nonprofit isn't going to either. This is an honor system, whether you design it that way or not. Banning AI drives it underground. A policy, approved tools, and permission to experiment within boundaries will do more than any prohibition.
  • Erosion of institutional skill. The tasks AI automates are often the same tasks that train junior staff. First drafts teach writing. Research synthesis builds subject knowledge. Skip those reps, and you get faster output now, but a less capable team later. This isn't hypothetical — a 2025 study published in The Lancet Gastroenterology & Hepatology found that endoscopists who routinely used AI assistance performed measurably worse when that access was removed.
  • Dependency without understanding. If your team uses AI to produce outputs they couldn't evaluate on their own — grant narratives in unfamiliar program areas, data interpretations they lack context to verify — you've created a black box you depend on but can't audit. The Communications of the ACM calls this the "deskilling paradox": AI makes senior experts faster while cutting off the learning pipeline for the people coming up behind them.
  • Data exposure. Content entered into consumer AI tools may become training data for future models. Donor information, client records, internal strategy. If it goes into a free-tier tool, you don't fully control where it goes. OpenAI's enterprise privacy page states that business and API data is not used for training by default — but consumer-tier ChatGPT has different terms, and Anthropic announced in 2025 that consumer chats will be used for training unless users opt out. The distinction between consumer and enterprise tiers matters enormously, and most nonprofit staff are using the consumer version.
  • Synthetic content and deepfake threats. Check Point Research found that nonprofits and associations experienced 2,550 cyberattacks per week in November 2025 — a 57% year-over-year surge, making them the third-most-targeted sector globally. Nearly 83% of phishing emails now contain AI-generated elements, and 60% of nonprofits have reported experiencing a cyberattack in the last two years while 68% lack a documented policy for responding. And these threats are evolving — the OWASP Top 10 for Agentic Applications identifies emerging risks like goal hijacking and unauthorized system access in AI agents. If your CRM vendor or email platform is incorporating AI agents, you inherit those risks whether you choose them or not.
OWASP GenAI Security Project Releases Top 10 Risks and Mitigations for Agentic AI Security
The OWASP GenAI Security Project's guidance, drawing on input from over 100 industry leaders, on the top risks and mitigations for securing agentic AI.

Are you using AI to avoid hiring?

↑ Table of Contents
There's a real difference between using AI to expand what a capable team can do and using AI to cover for the fact that you need more people.

Here's a pattern: an organization adopts AI to compensate for what's actually a capacity, clarity, or management problem.

Ask: if we had one more skilled staff member, would this problem disappear? If we documented our existing workflows, would things run more smoothly? If leadership were aligned on priorities, would the bottleneck clear? If yes, AI isn't your first move. Fix the foundation, then decide if AI accelerates what's already working.

There's a real difference between using AI to expand what a capable team can do and using AI to cover for the fact that you need more people. The first one works. The second one burns out the people you have. The gap is often managerial, not technological.

Is your nonprofit ready for AI?

↑ Table of Contents

Readiness isn't only about technical sophistication. It's about five things:

  • You have someone whose job this is. AI is one of the few disruptive technologies that's simultaneously a consumer product and an enterprise tool. Because anyone can open ChatGPT and get impressive results in ten minutes, it's easy to mistake personal fluency for organizational expertise. The most enthusiastic person on your team is a real asset — but enthusiasm is a starting point, not a qualification. You need someone who's gone deep enough to know what the tools can't do, not just what they demo well. That might be a staff member with a genuine mandate to go beyond personal use into organizational strategy, a dedicated hire if the resources are there, an agency, or a consultant.
  • You know where your people stand. The CEO is convinced AI will transform everything by Q3. Frontline staff are quietly dreading the next all-hands about it. There's grumbling and foot-shuffling every time a manager brings it up. Some people are ethically opposed to using any AI at all. All of those are data points, not obstacles. This is a change management problem, and you can't manage change you haven't diagnosed. If leadership is pushing faster than people can absorb, you get resistance. If staff is ready but leadership doesn't care, you get people experimenting in the dark. Figure out where everyone actually stands before you decide where to push.
  • You have clean, accessible data. If your donor records live in three spreadsheets and an executive director's email inbox, AI tools will underperform dramatically.
  • Leadership is on board with realistic expectations. Not "AI will transform our fundraising." More like "we'll invest 3-6 months in learning, accept some failed experiments, and measure honestly."
  • You have a tolerance for imperfect output. AI-generated content requires human review. Every time. Working with AI means iterating until you get it right, which means tolerating failure along the way.

Do you need an AI policy first?

↑ Table of Contents

Yes. 76% of nonprofits currently have no AI policy. Meanwhile, 82% are already using AI tools ad hoc. That means four out of five organizations have staff using ChatGPT right now, with no shared understanding of what's appropriate.

Your policy needs to answer four questions: What tools are approved, and by whom? What data should never enter AI tools? What's our review process before AI-generated content goes out? And how do we handle it when something goes wrong?

Write it first. It doesn't have to be perfect. It just has to exist. NTEN's AI Governance Framework offers templates, and Fast Forward's Nonprofit AI Policy Builder is a free interactive tool that walks you through creating one.

2026 AI Marketing & Fundraising Statistics for Nonprofits
These AI marketing and fundraising stats can help inspire your nonprofit to adopt generative AI for marketing and predictive AI for fundraising.

What does AI actually cost?

↑ Table of Contents

The subscription isn't the expensive part. It rarely is. Here's what practitioners consistently report (a rough back-of-the-envelope cost sketch follows at the end of this section):

  • Implementation time: Structured programs like FCNY's AI for Nonprofits Sprint ask staff to commit 1-2 hours per week just for learning, over 4-6 months.
  • Quality review: Everything AI produces needs human review. Budget 15-30 minutes of review for every hour of AI-generated output.
  • Rework: AI doesn't always produce usable output on the first try. Sometimes it takes 3-4 iterations. Sometimes you start over. The productivity gains are real but not uniform.
  • Integration: If you want AI to work with your CRM, email platform, or project tools, integration isn't always free or automatic. Some require higher-tier subscriptions and humans to manage the workflows.
  • Opportunity cost: Time spent learning AI is time not spent on other things. For a small team, that tradeoff is real — every hour someone spends experimenting is an hour they're not spending on the work that's already in the queue.
  • The costs you can't see yet. The market is pushing adoption faster than regulation, faster than best practices, and faster than anyone can predict the consequences. We're in a window where the technology is available, but the rules are still being written, and the organizations that don't think about the liability that creates for them will pay the most when the dust settles. Document your decisions and know why you made them.
Welcome to the Napster Era of AI | Future Literate
Every tech cycle has a window before the rules catch up. AI’s window is open now. What that means for mission-driven organizations trying to navigate it responsibly.
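Here's the back-of-the-envelope sketch promised above, in Python. Every input is a placeholder to swap for your own numbers: the team size, the loaded hourly cost, and the seat price are invented for illustration, while the learning hours and the review ratio echo the figures cited in the list above.

```python
# Rough annual cost sketch for a small team adopting generative AI.
# All figures are placeholders to replace with your own; the review ratio
# reflects the 15-30 minutes of review per hour of AI output discussed above.

STAFF = 4                     # people using AI regularly
LOADED_HOURLY_COST = 45.0     # salary + benefits + overhead, per hour
SUBSCRIPTION_PER_SEAT = 25.0  # per person, per month (business tier)

learning_hours = STAFF * 1.5 * 4 * 5          # ~1.5 hrs/week over ~5 months of ramp-up
ai_output_hours = STAFF * 3 * 4 * 12          # ~3 hrs of AI-assisted output per person per week
review_hours = ai_output_hours * (22.5 / 60)  # midpoint of 15-30 min review per output hour

subscriptions = STAFF * SUBSCRIPTION_PER_SEAT * 12
people_time = (learning_hours + review_hours) * LOADED_HOURLY_COST

print(f"Subscriptions:        ${subscriptions:>9,.0f} / year")
print(f"Learning + review:    ${people_time:>9,.0f} / year")
print(f"Hidden-cost multiple: {people_time / subscriptions:.1f}x the subscription line")
```

Even with modest assumptions, the people-time line dwarfs the subscription line, which is the point.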

Can AI write your grants?

↑ Table of Contents

This is the use case everyone asks about first, so let's be direct: AI can draft and iterate on grant proposals. It can't write one that's ready to ship.

A first draft that would take a day might take an hour or two with AI. That's real. But drafting was never the hard part of grant writing; strategic alignment, funder-specific framing, accurate budgets, and authentic storytelling are. AI doesn't do any of those reliably. Once you factor in editing, fact-checking, and revision, total time savings land closer to 25-40%, not the 80% the vendor demos suggest. And the funder landscape is still shifting: according to Candid's 2024 Foundation Giving Forecast Survey, only 10% of funders explicitly accept AI-generated applications, 23% reject them outright, and 67% haven't decided yet. Know your funder before you automate.

Where AI genuinely shines in the grant process isn't the writing, it's the research. Summarizing long documents, extracting key data points from 990s, compiling prospect research, and comparing funder priorities across multiple opportunities. Practitioners consistently report the most substantial and reliable time savings here, and the quality risk is lower because you're synthesizing information, not generating a narrative that has to sound like your organization.

Where do foundations stand on AI-generated grant proposals?
Candid insights | With the rise of generative AI, see where foundations stand on accepting AI-generated grant applications and proposals based on new data-driven insights from Candid.

How do you evaluate AI vendors?

↑ Table of Contents
Nobody alive has navigated this before, which means anyone who sounds overly confident about where this is heading is selling you something, not telling you something.

The AI market right now is a gold rush, and you're the territory. We are on an acceleration curve of technology that has no historical precedent — not the internet, not mobile, not social media. Nobody alive has navigated this before, which means anyone who sounds overly confident about where this is heading is selling you something, not telling you something. That's worth remembering every time a vendor demos their "nonprofit AI solution".

Here's what's happening on the ground: the current moment is producing a short-term surge of software, products, services, and paid content built on where the technology is right now. Many of them won't be relevant in 18 months. Some are genuinely useful tools solving real problems. Many more are what the industry calls wrappers — a branded interface built on top of capabilities that already exist in general-purpose tools like ChatGPT or Claude. Wrappers aren't inherently bad. The good ones add real value: they simplify complex tools for non-technical users, build in compliance guardrails, or integrate AI into workflows your team already uses. The bad ones just repackage what you could do yourself and charge you for the convenience of not knowing that. The difference between the two is the entire reason you need someone evaluating these tools who understands what the underlying technology already does for free. This isn't unique to AI, it happens in every technology boom, but the speed of this one means the shelf life of these products is shorter than ever.

That said, "what can you do that ChatGPT can't?" is not the killer vendor-interrogation question it sounds like. If the answer isn't obvious to you before you ask it, that tells the vendor more about your knowledge level than their answer tells you about their product. And that makes you an easier sale. The better move is knowing enough about what general-purpose tools can do so that you never have to show your hand during negotiations, and instead, you can focus on the questions that actually protect you. This is exactly why surface-level AI familiarity isn't enough when real money is on the table, and why the person evaluating vendors on your behalf needs a deeper understanding, not just enthusiasm.

If a tool does earn a closer look, here's what to actually ask:

  • How does this connect to what you already use? Integration is everything. Ask for specifics. "We integrate with Salesforce" could mean native integration or a $200/month workaround. If the tool doesn't talk to your existing systems, you're adding manual data transfer to someone's plate, not saving time.
  • What happens to your data? If you cancel, can you export everything? Can the vendor use your content to train their model for other clients? Read the terms of service; the distinctions between consumer, business, and enterprise tiers matter enormously for data handling.
  • What is your security posture? Where is your data stored? Who has access? Is it encrypted in transit and at rest? Do they hold SOC 2 or equivalent compliance? How do they handle a breach? If the answers are vague, that tells you everything. The sensitivity of your data should dictate the rigor of this conversation; a donor database and a public newsletter archive are not the same risk profile.
  • Who is maintaining this product? We're in an era of "vibe coding", where AI makes it possible to build a functional-looking product in a weekend. That means the market is flooded with tools that were easy to launch and will be hard to maintain. Ask how large the engineering team is. Ask about their update cycle. Ask what happens to your data if they shut down. A polished demo is not evidence of a sustainable product.
  • Will this tool be relevant in two years? Given the pace of change, any product built on today's AI capabilities could be obsolete by the time you've finished onboarding. Ask the vendor what their roadmap looks like and how they're adapting as the underlying models improve. If their entire value proposition is something a general-purpose tool will likely do natively in 18 months, you're renting a solution with an expiration date.
  • Who else like you is using this? Ask for references from similar organizations. Then call them. It's fair to work with companies that are new to this space; the technology is young enough that everyone's track record is short. But there's a difference between a vendor that's honest about where they are and one that's papering over inexperience with a polished demo. Know which one you're talking to.
  • Could you build this yourself? Depending on your organization's size, data sensitivity, and risk tolerance, the answer might be yes. General-purpose AI tools are powerful enough now that many products are things a knowledgeable person could replicate with ChatGPT, Claude, or open-source tools, without handing your data to a third party. That's not always the right call, but it should be considered before you sign a contract.

Does prompting actually matter?

↑ Table of Contents

Yes, but not the way most people have been learning it. The internet is full of prompting advice and products that include magic phrases, special formats, and tricks to unlock better AI output. Most of it is already outdated, and the rest oversells what phrasing alone can do. The difference between a terrible prompt and a decent one is enormous. The difference between a decent prompt and a carefully worded one is marginal. The straightforward prompt advice gets you most of the way there: be specific, provide context, give examples of what you want.

What actually moves the needle is something the industry now calls context engineering, and it matters far more than how you phrase your request. Context engineering is about what information you bring into the conversation, what you don't, and how you manage it. AI doesn't know your organization, your brand voice, your funder's expectations, or your program model unless you give it that information. The quality of what goes in determines the quality of what comes out, and "write me a grant proposal" with no context will always produce generic output, no matter how cleverly you word the ask.
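Here's what that looks like in the most literal sense: a minimal sketch, in Python, of assembling a request from your own documentation before it ever reaches a chat box or an API. The file names and the helper function are hypothetical; the point is that the work lives in gathering and selecting context, not in the phrasing of the final line.

```python
from pathlib import Path

# Minimal sketch of context engineering: the request stays short, and most of
# the prompt is your own documentation. File names are hypothetical examples.
CONTEXT_FILES = [
    "docs/brand_voice.md",        # how your organization actually sounds
    "docs/program_overview.md",   # what the program does, for whom, with what outcomes
    "docs/funder_guidelines.md",  # this funder's priorities and formatting rules
]

def build_prompt(task: str, context_paths: list[str]) -> str:
    """Prepend only the relevant documentation, clearly labeled, ahead of the task."""
    sections = []
    for path in context_paths:
        sections.append(f"--- {path} ---\n{Path(path).read_text()}")
    sections.append(f"--- TASK ---\n{task}")
    return "\n\n".join(sections)

prompt = build_prompt(
    task=(
        "Draft a one-page letter of inquiry for the program described above, "
        "in our brand voice, following the funder's guidelines."
    ),
    context_paths=CONTEXT_FILES,
)
# `prompt` is what you would paste into a chat or send through whichever tool you use.
```

You can do the same thing by hand, pasting labeled excerpts into a chat; the structure matters more than the tooling.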

The real investment isn't learning prompt formulas. It's building the habits and infrastructure that make every conversation better.

If you ask me, the chat box is the least interesting thing AI can do. Most people's mental model of AI is a text conversation: you type, it responds. That's the starting point, not the ceiling. Current tools can search the web in real time, read and analyze documents you upload, write and run working code, build interactive applications, create formatted reports and slide decks, and connect to other software your organization already uses. The practical skill isn't learning to talk to AI better. It's learning which capability to reach for. These decisions shape your output more than any prompt formula.

With that reframe in mind, here are the practical things that actually move the needle:

  • Build documentation that the AI can use. Brand voice guides, program descriptions, grant templates, donor personas, and style guides. Material that either gets pulled in when the AI needs it or that you bring in deliberately, so it focuses on what matters. IBM's 2025 CDO Study found that organizations with better-structured, more accessible data deliver measurably higher ROI on AI initiatives, and in my experience, this is the single biggest differentiator between organizations getting generic output and organizations getting something they can actually use.
  • AI memory requires active management. Most major platforms now offer memory features, but they don't work the way people expect. MIT's GenAI Divide report found that employees routinely use AI for simple tasks but abandon it for mission-critical work, and the core reason is what the researchers call the "learning gap": AI systems that don't retain context, adapt to workflows, or improve over time. Memory isn't automatic; it's only as good as the time you invest in setting it up and reinforcing it. You need to tell the AI what to remember, correct it when it gets things wrong, and periodically review what it's stored. Left unmanaged, memory fills up with low-value noise and misses what actually matters. Think of it less like human memory and more like a filing system; if you don't organize it, it's just a drawer full of loose papers. The organizations that will get the most out of AI treat memory setup as real work, not an afterthought.
  • AI conversations degrade. Every AI tool has a context window — a limit on how much information it can hold in a single conversation. Research confirms this is a fundamental architectural limitation, not a bug: a landmark study in Transactions of the ACL found that AI performance degrades significantly as conversations grow longer, with models struggling most to use information stuck in the middle of long contexts. As the chat gets longer, the AI loses track of earlier instructions, contradicts itself, and produces lower-quality output. This happens much sooner than most people expect, and the AI will still sound confident while getting worse.
  • So: abandon chats early. The instinct is to keep going — you've built up all this context, why start over? But past a certain point, you're fighting diminishing returns. Plan for 2-5 conversations to complete a single project, not one. When you switch between types of work, like research to drafting, drafting to editing, editing to polishing, those are each usually a new chat. A grant proposal might be three conversations: one to research and outline, one to draft, and one to refine. Shorter, focused conversations consistently outperform long, sprawling ones.
  • Harvest context before starting fresh. Save the AI's best outputs, your refined instructions, and key decisions to a document outside the chat. Then start a new conversation with that harvested context loaded in. Clean window, all the important information, none of the accumulated noise.
  • Reuse chats for repetitive work. If you need to do the same task twenty times with slight variations, you don't have to rebuild context every time. Most AI tools let you edit a previous message, which creates a conversation branch. The AI then regenerates its response as if your original message never existed, but everything before that point stays intact. Set up your context, instructions, and documentation in the first few messages, then edit your last request with each new variation. The setup persists, the AI gets a clean prompt each time, and you avoid the quality loss that comes from a conversation bloated with twenty rounds of back-and-forth. One setup, twenty outputs.
  • Don't overfill the window. More context isn't always better. Practitioners call this problem context rot: loading so much information that the AI's attention gets diluted, critical details get buried, and output quality drops even though there's technically room left. Drew Breunig's research on how long contexts fail documents cases where models performed worse with more tools and documents loaded, even well within the context limit, because the model has to pay attention to everything you give it, whether it's relevant or not. Projects, memory features, and uploaded documents all help distribute context across conversations, but for any single task, load only what's critical into your chat (a rough way to estimate how much you're loading is sketched after this list).
  • Use extended thinking mode when it's available. Some AI tools let you see the model's reasoning process, the chain of thought it follows before it gives you an answer. Anthropic calls this extended thinking, and a joint safety paper from OpenAI, Anthropic, and Google DeepMind argues that this kind of visible reasoning is one of the most practical transparency tools available in AI right now. It is enormously useful. You can catch the AI making assumptions, spot where it's fabricating information instead of searching for it, and see whether it's actually engaging with your question or confidently making things up. It also gives you hints about how to improve your next prompt. If the AI misunderstood your intent in its thinking, you know exactly what to clarify. Not every tool offers this, but when it's available, turn it on.
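If you want the rough estimate mentioned above, a tokenizer library can tell you approximately how much you're about to load. This sketch uses OpenAI's open-source tiktoken tokenizer; the file names and the 50,000-token working budget are illustrative placeholders, since real context windows vary by model and the practical ceiling sits well below the advertised one.

```python
from pathlib import Path
import tiktoken  # pip install tiktoken -- OpenAI's open-source tokenizer

# Rough check on how much context you're about to load into a single chat.
# The budget below is an illustrative placeholder, not any model's real limit.
BUDGET_TOKENS = 50_000
enc = tiktoken.get_encoding("cl100k_base")

# Hypothetical example files; swap in the documents you actually plan to load.
files = ["docs/brand_voice.md", "docs/program_overview.md", "research/990_summaries.md"]
total = 0
for path in files:
    tokens = len(enc.encode(Path(path).read_text()))
    total += tokens
    print(f"{path}: ~{tokens:,} tokens")

print(f"Total: ~{total:,} of a ~{BUDGET_TOKENS:,}-token working budget ({total / BUDGET_TOKENS:.0%})")
```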
The less you know about a topic, the more verification you need.

And here's what almost no prompting guide tells you: your expertise relative to the AI's should change how you interact with it. AI models are strongest on well-documented subjects like marketing, grants, and systems. They're weakest when it comes to niche, local, or specialized knowledge. When you're working in your area of expertise, you'll catch mistakes because you know what right looks like. When you're working outside of your area of expertise, you won't catch them. That dynamic, when the AI is more of an expert than you are, is when AI hallucinations are most dangerous. The less you know about a topic, the more verification you need, and the more documentation you should be bringing into the conversation to compensate.

Effective context engineering for AI agents
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

What happens to your data?

↑ Table of Contents

When you type into ChatGPT or Claude, you're transmitting data to a third party. Most consumer AI tools use your inputs to improve their models by default — and most users never change that default.

The tier you're on determines the rules, and "paid" does not mean "protected." Individual subscriptions are consumer plans with consumer data handling. The protections most people assume come with paying only kick in at business and enterprise tiers. If your staff is using the free or personal-subscription version of any AI tool, the data handling rules are less protective than they think. Here is a starting point for AI data hygiene:

  • Donor information should never go into consumer AI tools. Names, addresses, giving history, and personal notes. That is sensitive data your donors entrusted to you.
  • Client data requires even more care. If you serve vulnerable populations, the information you hold is especially sensitive. Consumer AI tools have no business holding that information.
  • Practical protections: Use business-tier tools, not just paid tiers, for sensitive work. Create guidelines for what can and cannot be entered. Consider local AI models (LM Studio, Ollama) that run on your computer and don't transmit data.
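To make the local-model option concrete, here's a minimal sketch assuming Ollama is installed and a model has already been pulled to your machine. The model name and the prompt are placeholders; the request goes to Ollama's local HTTP API on your own computer, so nothing in it is transmitted to a third party.

```python
import requests  # pip install requests

# Minimal sketch of querying a local model through Ollama's local HTTP API
# (assumes Ollama is running and a model has been pulled). Nothing in this
# request leaves your machine.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # placeholder: whichever model you've pulled locally
        "prompt": "Summarize the key points of this board memo:\n\n<paste memo text here>",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```

Local models are less capable than the frontier ones, but for sensitive material that tradeoff can be the right one.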

How do your donors feel about AI?

↑ Table of Contents

The Donor Perceptions of AI 2025 study found that 43% of donors say AI use would have a positive or neutral effect on their giving, but 31% said they'd be less likely to donate. 92% want transparency about how organizations use AI. And 34% ranked "AI bots portrayed as humans" as their top concern.

Donors are more likely to accept AI for operational tasks than for relationship-facing communications. The closer the communication is to a personal relationship, the higher the expectation of human involvement. Notably, more generous donors tend to be more supportive of AI use, but the expectations around transparency are universal.

The authenticity test: if this specific donor knew how this message was created, would they feel valued or processed? For a first-time $25 donor receiving a template thank-you, AI drafting is probably fine. For a major donor who's given for 15 years, probably not. Match the investment in communication to the relationship.

Donor Perceptions of AI 2025 - Fundraising.AI
Insights from 1,031 current US donors on the future of AI in philanthropy. In 2025, donor perceptions of AI have matured from uncertainty and fear of dehumanization into a more thoughtful balancing act between curiosity and caution.

Should you disclose AI use?

↑ Table of Contents

The legal landscape is moving faster than most nonprofits realize. There's no single federal AI disclosure law in the U.S. yet, but state-level requirements are multiplying. Multiple states have already enacted laws requiring businesses to disclose when consumers are interacting with AI rather than a human. Maine's took effect in 2025, and Colorado's AI Act takes effect in June 2026 with requirements covering AI systems that influence consequential decisions in areas like healthcare, education, and employment, domains where nonprofits frequently operate.

Disclosure strategy isn't one-size-fits-all. The right approach depends on your audience, your use cases, your state's legal requirements, and how central trust is to your relationship with the people you serve. What matters is that you've thought it through deliberately.

When AI affects client services like triage, risk scoring, or service recommendations, disclosure shouldn't be optional. It should be an ethical baseline. The EU AI Act's transparency rules take effect in August 2026, and state-level regulation in the U.S. continues to expand. Even if your state hasn't passed an AI law yet, the direction is clear.

Is AI aligned with your mission?

↑ Table of Contents

This is where the change management work gets real. Every issue below is something your staff, your board, and your community already have opinions about, whether they've voiced them or not. Three rise to the top for nonprofits:

  • Bias and representation. AI models inherit biases from training data. For nonprofits serving diverse communities, this is a direct mission concern. Bias in a fundraising email means your language doesn't land. Bias in a predictive model that determines service eligibility means some communities get less help. The fix is not just human review, it's review by people who come from or genuinely represent the communities you serve. You cannot review your way into giving AI a perspective you don't have. For broader audit and oversight frameworks, the NIST AI Risk Management Framework and Data & Society's work on AI in social services are strong starting points.
  • Labor and extraction. AI models are trained on content created by humans, often without compensation or consent. The infrastructure consumes significant resources. Data centers consumed over 400 TWh of electricity in 2024, roughly 1.5% of global demand, and a December 2025 study in Patterns estimated AI systems alone may have a water footprint comparable to all bottled water sold globally. For mission-driven organizations, there's a question of alignment: does using these tools support or undermine the values you put in your mission statement?
The skills don't disappear all at once. And by the time you notice, you've become dependent on the technology that was supposed to advance your organization.
  • Contributing to institutional skill loss. AI makes your team faster. It can also make them worse. As staff offload cognitive work to AI, they stop exercising the skills that made them good at that work in the first place. The output looks fine today. But the capacity behind it is eroding. Jacob Sherson, director of the Center for Hybrid Intelligence at Aarhus University, warns that this kind of erosion "will be visible only in hindsight." For nonprofits, the long-term consequences are specific and serious. When a grant writer can no longer write without AI, you don't have a grant writer anymore; you have a context engineer who might become less skilled than the model and won't catch it when the model misrepresents your work. The skills don't disappear all at once. And by the time you notice, you've become dependent on the technology that was supposed to advance your organization.

What if you don't adopt AI?

↑ Table of Contents
Adopting AI badly is worse than not adopting it at all.

The risks of adopting AI get most of the attention. But there are also real costs to standing still, and they compound faster than most nonprofit leaders may expect.

  • Efficiency gap. A nationally representative survey found that generative AI usage is highest in finance, insurance, and real estate (51%) and far lower in education and health services, sectors where nonprofits concentrate. Within the nonprofit sector itself, larger organizations with budgets over $1 million are adopting AI at nearly twice the rate of smaller ones (66% vs. 34%). In previous technology waves, nonprofits could afford to move slowly. The technology advanced at a pace where catching up a few years late was uncomfortable but survivable. AI is not moving at that pace. Every month of delay isn't one month behind. It's one month behind on a curve that's getting steeper. The window where slow adoption was a minor disadvantage is closing. For under-resourced organizations, that widening gap means slower operations and losing ground on funding, talent, and visibility to organizations that figured it out first.
  • Talent expectations. Incoming staff increasingly expect AI tools at work. Pew Research Center found that 21% of U.S. workers now use AI in their jobs, up from 16% a year earlier, with that number rising fastest among college-educated workers. Organizations that ignore AI may find it harder to recruit and retain skilled communicators and fundraisers.
  • Funder expectations. The direction is clear: funders expect organizations to be thoughtful about technology. As the funder data above shows, the landscape is still unsettled, but "we don't use AI" will start to sound less like a principled stance and more like "we don't have a website" did in 2015. AI is already part of the operating environment, not a future trend, and the cultural shift is already underway. Organizations that haven't engaged will increasingly look like relics of a previous era, not cautious, just behind.
  • Mission impact. Here's the part that doesn't get said enough: you don't get to opt out of AI. You can opt out of actively adopting it. You can decide not to buy tools, not to change your workflows, not to engage. But the communities you serve are already being affected by AI in the healthcare systems they navigate, the benefits applications they file, the content they consume, the scams they encounter, and in the hiring algorithms that screen them. As the Black Women's Health Imperative put it in their 2025–2026 National Health Policy Agenda: AI is not just a technology issue — it is a health equity issue, one that is already shaping healthcare decisions, insurance systems, and public health outcomes for the communities least likely to have a seat at the table where these tools are designed. Your donors are receiving AI-assisted communications from other organizations. Your funders are developing AI positions. Your vendors are embedding it into the tools you already pay for. The question isn't whether AI touches your mission because it already does. The question is whether you understand how, and whether you're equipped to help your community navigate it. If AI can help you reach more people, raise more money, or tell your story more effectively, and you choose not to explore it, that may have consequences for the communities you serve.

But adopting AI badly is worse than not adopting it at all. A hasty implementation that damages donor trust, produces inaccurate content, hurts the people you serve, or burns out staff creates problems you didn't have before.

Black Women’s Health Imperative Releases National Health Policy Agenda: Centering Black Women in a Time of Crisis and Change - Black Women’s Health Imperative
Since 1983, our national organization has been dedicated to the health of our nation's 21 million Black women and girls – physically, emotionally and financially.

What if you decide not to use AI?

↑ Table of Contents
Choosing not to buy AI tools is a defensible position. Choosing not to understand what's changing around you is not.

Then you've done the work, and that's a legitimate outcome.

Some organizations will run the numbers on implementation time, look at their current capacity, weigh the cultural cost of introducing tools their staff does not trust, and decide the math doesn't work right now. That's an organization that knows itself well enough to make a call instead of getting swept up in hype.

But here's what that decision can't mean: opting out of the conversation. As the previous section makes clear, AI is already affecting your community, your vendors, your funders, and your incoming talent, whether you're actively using it or not. Choosing not to buy AI tools is a defensible position. Choosing not to understand what's changing around you is not. Organizations that say "not now" still need to know what they're saying no to and what would make them revisit. And you should be revisiting: given the rate at which AI is advancing, things may look entirely different from month to month. The AI question isn't going away; it's a market factor that is permanently on the field.

And if the reason is that the ethical concerns outlined earlier genuinely outweigh the operational benefits, that the labor practices, environmental costs, or bias risks don't square with your mission, then say that out loud. Your stakeholders deserve to hear the reasoning. The organizations that can articulate why they're waiting will earn more trust than the ones that don't.

How do you actually make this decision?

↑ Table of Contents
Wherever you land, land there because you chose it — not because you got swept up, pressured, or left behind.

There's no right answer that applies to every nonprofit. But there's a right process: educate yourself and your team beyond surface-level familiarity, figure out what you actually need, be realistic about what you can sustain, get expert guidance where the stakes are high, and decide what you're willing to risk.

The organizations that have the best shot at navigating this won't be the ones that adopt fastest or resist longest. They'll be the ones who treated it as what it is, a change management problem, and invested in genuine understanding before they committed.

Start learning now, and you will have a head start that compounds. If you wait, you may find it's gotten harder to catch up later. Wherever you land, land there because you chose it, not because you got swept up, pressured, or left behind.

I’m figuring this out in public — one essay at a time, for the people who want to get it right and not just get it fast. If that’s you, feel free to subscribe, and we’ll navigate it together.