"If an organisation used AI to help write this application, is that a problem?"

At present, most funders are not rejecting applications simply because AI was used. Some have already clarified that AI assisted drafting is acceptable, provided the content remains accurate and authentic. Others are still forming guidance.

So this is not, at this stage, a story about widespread penalisation.

It is something more significant.

It is a shaping moment — because AI may be one of the most important levellers we have seen in decades.

The Deeper Question

Alongside this conversation sits a deeper one we do not always name out loud. Why does funding so often struggle to reach the communities that need it most?

Charitable trusts frequently acknowledge that they do not receive enough applications from grassroots groups. They say they want to fund beyond the usual suspects. They want to reach the estate based youth club, the informal parents group, the community led mental health project meeting in a borrowed church hall. And yet, year after year, the application pile leans toward organisations that know how to navigate the system.

It is uncomfortable to admit, but our grant funding processes are not culturally neutral. They favour people who are already fluent in a particular kind of language. Strategic framing. Outcome measurement terminology. Theory of change diagrams. Structured narratives that move seamlessly from problem statement to measurable impact.

None of that is wrong. Clarity matters. Accountability matters. But the ability to speak that language is unevenly distributed.

If you have worked in the charity sector for years, if you have sat on boards, attended conferences, been mentored by experienced leaders, or completed higher education, you will have absorbed this vocabulary almost by osmosis. You begin to understand how funders think, how they want information presented, what phrases signal credibility.

If you are a mum running a brilliant after school club on a housing estate, you may not.

Your work may be life changing. Your paperwork may not be polished.

And too often, the system quietly confuses the two.

Sometimes it genuinely feels like you need a degree just to understand the funding landscape. Not because funders intend to exclude, but because over time the ecosystem has developed its own dialect, its own codes of belonging. Those who speak it fluently move more easily through the gates.

This Is Where AI Changes Something Fundamental

When someone uses AI to support a funding application, they are not outsourcing their vision. They are not fabricating impact. They are translating lived experience into formal language. They are accessing knowledge that has historically sat with consultants, professional bid writers, or well resourced organisations. They are learning how to structure their ideas in a way that will be understood.

AI does not create the community club. It does not run the session. It does not build trust with families. It simply helps the leader explain what they are already doing in a way that aligns with funding expectations.

For someone without a university education, without a background in policy, without the confidence that comes from years of navigating institutions, AI can function as a bridge. It can explain what a logic model is. It can suggest how to articulate outcomes. It can help clarify safeguarding processes in clear language. It can transform "we help kids stay out of trouble" into a structured explanation of preventative impact.

That is not cheating. That is access.

There is sometimes a fear that AI will make applications sound the same — that authenticity will be lost, that we will end up funding whoever has the most technically polished submission. But we must be honest with ourselves. We have already been funding polish. We have already been rewarding those who can afford bid writers or who have the social confidence to present themselves in a particular way.

AI does not introduce inequality into grant writing. It exposes the inequality that has always been there — and begins to soften it.

If we are serious about equity, if we genuinely want money to reach working class communities, racially marginalised neighbourhoods, informal grassroots groups and people building something from nothing, then we cannot be suspicious of the very tools that reduce the confidence and knowledge gap.

In fact, we should be welcoming them.

Imagine if funders said clearly that AI supported applications are acceptable. That what matters is the integrity of the work, not whether the first draft was written alone. That if technology helps an organisation articulate its impact more clearly, that is something to celebrate rather than question.

Such a stance would not lower standards. It would widen access.

Because the real assessment question is not "Who drafted this sentence?" The real question is "Is this work needed, is it credible, and will it change lives?"

The mum on the estate should not lose out because she does not instinctively use the phrase "theory of change." If AI helps her explain the transformative work she is already doing in language that middle class funding panels recognise, that is not manipulation. It is translation.

And translation has always been necessary in systems where power and resources sit unevenly.

If we believe in community led change, then we must also believe in equipping community leaders with the tools to navigate the systems that control resources. AI, used responsibly, is one of those tools.

Encouraging its responsible use could help shift who gets heard — and ultimately, who gets funded.

That feels like progress worth embracing.

Coming next

In part two, we look at how funders can actively design AI into their own processes — building equity in from the start, not as an afterthought.