Hello {{first_name}}, you're in great company. You're joining 3,181 charity and social impact leaders reading this each week, all of us working on the same craft - telling stories that build trust, unlock funding, and make positive change visible.
LAST WEEK’S POLL RESULTS
"When you re-read your last social post, line by line, how many sentences do you think were actually written for the same reader?"
Last week, after the edition on audience drift, I asked you to look at your own posts. The results came in as a three-way tie:
All of them. The post stayed on one reader the whole way through: 0%
Most of them. One or two drifted, but the post mostly held: 33%
About half. The opener and the close are for different readers: 0%
Less than half. I can see now that I was writing for everyone: 33%
I have not re-read it that way before: 33%
Something else: 0%
What this tells us: Three-way tie, and the shape is the finding. A third of you saw your last post mostly hold. A third looked again and realised you had been writing for everyone. A third had never read your posts that way before. None of it is a craft gap. It is a practice gap. The drift hides in the queue. Reading it back is what closes it.
WEEKLY POLL
Has someone on your team received a hateful comment in the last twelve months that they have not yet brought to the team?
Poll results will be shared in next week's edition.
THIS WEEK’S BIG IDEA
When the hate is personal
In my recent conversations with UK charity leaders, a different question keeps surfacing after the call has ended.
It usually arrives as a follow-up email, sent quietly, after the rest of the work has been agreed. The shape of it is always the same. Members of the team have started receiving hateful messages on LinkedIn, on Instagram, in the DMs of the charity's account or their own personal feeds. Sometimes it is a single individual targeting more than one of them. The team have handled the messages differently from one another, and that itself has become a problem. People are anxious. Nobody is sure what they were supposed to do.
Underneath that anxiety is a deeper one. When you talk publicly about the struggles of a community, hostile commenters can twist the same story into a negative reframe. That single move, repeated across the sector, is having a chilling effect on what charity and impact teams now feel safe to share at all.
This week's edition is about what to do before, during, and after that moment. I am going to tell you a story I have not yet told in this newsletter, because the protocol I now follow comes directly from getting it wrong the first time.
The Monday after
In 2019, at the height of the anti-LGBT+ protests outside primary schools in Birmingham and around the country, I spent a Friday in the north of England delivering the second of two days of school talks for my charity, Naz and Matt Foundation. I came home tired and proud. The talks had landed. The students had asked the questions you always hope they will.
On the Monday morning that followed, five parents from one of the schools went into the headteacher's office. They demanded to know why I had been allowed in to talk about LGBT issues. By that afternoon, a comment had appeared under one of our charity's Instagram posts. The comment was personal. Not the usual abuse aimed at the work, or at the position. Aimed at me. Specific enough that I read it several times to make sure I had read it right.
My heart felt heavy as I processed what I was reading.
By that point I had received hundreds of hateful messages over the years. Disgusting homophobic ones. Many of them attacking me directly for speaking about the events that led to the suicide of my late fiancé Naz. People hated me for having a voice. They hated me for talking about honour-based abuse. I had become accustomed to the volume in a way I am not proud of.
But this comment was different. It was so personal, and so detailed, that I did not know what to do with it. So I did the worst possible thing. I ignored it.
It kept bothering me. Day after day. I felt embarrassed to have received it, in the way you feel embarrassed when something private about you ends up in someone else's hands. I was a charity leader. I was supposed to be strong. I was supposed to know better.
Eventually I shared it on my personal Facebook with close friends. One of them, a serving police officer, told me very plainly that I needed to report it. The level of detail in the comment was not the abuse of a stranger. It was the abuse of someone who had been paying attention. She was worried about what else might be coming.
I reported it to Galop, the LGBT+ anti-abuse charity. They referred me to two police forces. The investigation that followed resulted in Instagram removing the comment at the request of the police. By that point, months had passed. The damage to me, and to the way I would work from then on, was already done.
That is the hidden cost most people will not see. It changed how Naz and Matt Foundation now delivers school talks. It changed the protections we now insist on for the speaker, the charity, the school, and the other pupils in the room. It changed how I write online, what I post, where I draw the line. It also, for a long time, changed how much of myself I let through to people I had not yet met.
Three things every charity leader needs to hold about hostile comments
First: most of the cost is invisible to you.
When a hateful comment arrives on a colleague's post and they do not tell you, the cost has already started.
You will not see it in your engagement dashboard. You will see it in a comms lead who suddenly posts less. A clinician who quietly stops giving permission for their cases to be shared. A frontline worker who declines to be in the next photo. The team responds to the comment by withdrawing from the work, and the work shrinks before you have noticed why. By the time the silence has a shape, you have lost months.
Second: hate does not always arrive looking like hate.
There are at least three forms of it, and the team needs to recognise all three.
The first is direct. Visible language, obvious intent, the kind that nobody on the team will mistake for anything else.
The second is passive-aggressive. Polite on the surface, designed to look reasonable to the algorithm and to bystanders, but engineered to wound the person it is aimed at.
The third is the slip-in. Someone offers support in your DMs, asks a thoughtful question, builds a small bit of trust, and then, once they are inside, turns.
The team that only knows what direct hate looks like will miss two thirds of what arrives.
Third: the heaviest cost lands on the people you most need to keep posting.
It is not random which team members get targeted. It is the ones whose lived experience, identity, or visibility makes them most useful to the cause and most vulnerable to it. The young trans volunteer. The lesbian CEO. The Muslim community organiser. The non-binary staff member who shares video updates. If the protocol your charity uses for hostile comments is "deal with it however feels right," then the people doing the most public-facing work are also the people absorbing the most cost, with no infrastructure behind them. That is unsustainable. It is also, quietly, unjust.
The reframe
The question your team should be asking is not "how do we get rid of the trolls?"
You will not get rid of them. They are part of the operating environment now. UK donations fell in 2025 for the first time since 2021. Six in ten charities are operating at a loss. Hostile rhetoric is rising in volume across the political conversation. The trolls are infrastructure too, in their own grim way.
The question is "what is our protocol, before, during, and after, so nobody on this team is handling this alone?"
That reframe changes the work. It moves the conversation out of the comms inbox and into the senior team. It treats hostile comments not as a content problem but as a safeguarding problem. It does not promise that nobody will ever be hurt by what arrives. It does promise that nobody will be hurt twice, once by the comment itself and once by the silence that followed.
The protocol below is the one I now use. It is the one I wish I had been handed in 2019.

Framework: The Hostile Comment Protocol
Three stages: pre-publish, at impact, aftercare.
Stage one. Pre-publish: close the door.
Hostile narratives most often weaponise the door you left open. The "they brought it on themselves" reframe works on stories that did not name the structural cause, that centred pity rather than agency, or that picked the wrong person to be the public face. Before you post, run three quick checks. Have you named the structural cause of the harm, not just the harm itself? Have you centred the agency of the people in the story, not their victimhood? Have you chosen a narrator whose visibility does not put them at additional risk? If you cannot answer yes to all three, the post is not yet ready.
Stage two. At impact: a calm playbook.
When a hateful comment arrives, the first move is not to reply. It is to tell someone in the team. Bring the comment to a colleague within the hour. Together, decide on the action. Five options exist: ignore, reply, report, block, or escalate to police. Save a screenshot before you delete anything. Do not reply in the heat. Be consistent across the team, so that nobody is left feeling they handled it the wrong way. The point is not that every comment gets the same treatment. The point is that the team handles every comment together.
Stage three. Aftercare: you do not absorb it alone.
Hateful messages have a cumulative cost. Name it. Bring it to the team. Take a break from the platform if you need to. The work is too important to lose people to silent burnout. Build into your monthly team rhythm a short check-in question. "Has anyone received a comment in the last four weeks that has stayed with you?" Most of the time the answer is no. The cost of asking when the answer is no is zero. The cost of not asking when the answer is yes is a colleague drifting out of public-facing work without telling you why.
Run all three stages, every time. You do not need a new policy document. You need the protocol in muscle memory, across the team, so that the next time a comment lands, the response is already underway before the comment has finished being read.

Template: The Hostile Comment Decision Sheet
Use this within the first hour of any hateful comment arriving. One sheet per comment. Pen and paper, Notion, a shared doc, a Slack thread. The format does not matter. The discipline does.
1. The comment, copied verbatim:
[Paste it here. Do not paraphrase. The wording matters for police reporting and platform appeals.]
2. Platform, date, and time received:
3. Who first saw it:
[The person who first received or noticed the comment. Mark whether they have been spoken to since.]
4. Type of hostility (tick all that apply):
Direct hate (visible language, obvious intent)
Passive-aggressive (polite on surface, hostile in effect)
Slip-in (started friendly, turned hostile)
Borderline / ambiguous
Targeted at a protected characteristic (race, religion, sexual orientation, gender identity, disability)
Includes a credible threat or specific personal detail
5. Risk level (low, medium, high):
[Low: one-off, generic abuse, no personal detail. Medium: repeated from same source, or escalating language. High: any specific personal detail, threat, or pattern of contact across more than one platform.]
6. Action agreed:
Ignore (no further action)
Reply (only if you have a clear, prepared response and it is not in the heat)
Report to platform
Block
Escalate (Galop / police / legal)
7. Who agreed the action, and when:
[At least two people. Never decide alone, especially for medium and high.]
8. Screenshot saved:
[Yes / No. If no, save one before any deletion.]
9. Aftercare flag:
[Does anyone on the team need a debrief, a break from the platform, or a follow-up check in the next two weeks? Name them.]
10. Anything to change before the next post:
[Did the post leave a door open the comment exploited? If yes, what changes for next time.]

AI Prompt: The pre-publish hostile-narrative checker
Use before publishing any post about a community in struggle, a beneficiary's experience, or any subject likely to attract hostile commentary.
Copy and paste the text below into your preferred AI tool (I recommend either Claude or Google Gemini).
Replace the text in [placeholders] with your content.
Download my free Social Impact Storytelling Framework (ogston.com/framework), then upload it alongside this prompt. It will give the AI the context it needs to give you a genuinely useful response.
AI PROMPT (copy in full):
Act as a pre-publish hostile-narrative reviewer for a UK charity. Your job is to predict how a hostile commenter could weaponise this draft post against the people it is meant to serve, and to identify the doors the post has left open. You are not rewriting. You are stress-testing. The single exception is in Section E, where for each cut you may suggest one alternative sentence as a starting point for the writer.
Before you read the post, ask me these three questions and wait for my answers:
1. Who or what community does this post centre? Be specific. If the post centres a single named or unnamed individual, describe them. If it centres a community more broadly, describe which one.
2. What hostile narrative are you most worried about being applied to this post? Examples include "they brought it on themselves," "they are taking from people more deserving," "they should be grateful," "this is not really happening," "they are exaggerating." If you are not sure, write "no specific worry" and I will surface the most likely ones.
3. Is the named narrator (the person whose voice or face is in the post) someone whose visibility could put them at additional risk? Yes, no, or unsure.
When I have answered, audit the post against my answers. Return your feedback in this exact structure.
A. The structural cause check. Does the post name the structural cause of the harm, not just the harm itself? If yes, quote the line. If no, name what is missing and which sentence could carry it.
B. The agency check. Does the post centre the agency of the people in it, or does it centre pity, victimhood, or the charity's intervention? If the centring is wrong, identify the sentences that need to change and explain why.
C. The narrator-risk check. Given my answer to question 3, is the choice of narrator appropriate? If the narrator is at additional risk, name what protections the post is missing (anonymisation, geographic stripping, image consent, withdrawal rights, etc.).
D. The hostile-reframe map. Run three hostile reframes against the post: the "they brought it on themselves" reframe, the "they are not the deserving ones" reframe, and the "this is not really a problem" reframe. For each, write one sentence describing how a bad-faith commenter could twist the post that way, and identify which sentence in the post is most exploitable for that reframe.
E. The cut list. List every sentence that, in your view, is doing more harm than good. Two reasons a sentence makes the cut list: it leaves a door open for hostile reframing, or it centres pity over agency. For each sentence, write one alternative sentence the writer could consider.
F. Verdict. End with one of these, and explain in two sentences.
- READY TO POST: no significant doors left open, narrator appropriate, structural cause and agency both present.
- EDIT BEFORE POSTING: at least one significant door is open. List the cuts and the priority order.
- HOLD: the post centres a narrator at significant risk, or the structural cause is so absent that hostile reframing is the most likely outcome. Recommend what would need to change before publishing.
Use UK British English. Be direct. I would rather you were honest than polite. If anything in my answers is unclear or contradictory, stop and ask me to clarify before you begin.
HERE IS THE POST:
[PASTE POST]
I'd love to hear how you got on with this AI prompt. Hit reply and let me know.
Useful resources
Galop (galop.org.uk)
The UK's specialist anti-abuse and anti-violence charity for LGBTQI+ people. I now signpost to Galop any LGBT+ team member, beneficiary, or speaker who has received targeted abuse online or offline. They run a national helpline, support reporting to police, and will speak directly to platforms in some cases. If you work with an LGBT+ community, your team should know this number before they need it.
Stop Hate UK (stophateuk.org)
A national reporting service for hate of all forms (race, religion, disability, sexual orientation, gender identity, age, alternative subculture). Save their reporting link for your team. Particularly useful when the comment falls below the police threshold but is still doing harm.
Before you go
If you found this newsletter useful, please forward it to a colleague and invite them to subscribe at: www.impactstoryteller.org
Until next week, sending you safe and peaceful energy

Matt Mahmood-Ogston
Award-winning impact storyteller, photographer and charity CEO.
Portfolio: ogston.com | Follow me on LinkedIn
Work with me
Paid: Book me to deliver a storytelling workshop
Book a 15-minute call to register your interest
Free: Download the Social Impact Storytelling Framework ogston.com/framework



