Does the PR industry have a cheek when it comes to transparency with AI?
We hear a lot about the need for transparency in PR, but isn't that a bit hypocritical?
PR AI OPINION
Craig McGill
9/23/2025 · 4 min read


Why do communicators and public relations experts need to be transparent about their use of AI?
It’s a frequent cry: “To ensure the trust of the public, people need to know when the material they are seeing has been manipulated.” To which I have to ask: since when?
I have a lot of sympathy and understanding for that viewpoint. Transparency matters, especially as AI reshapes communication. But before we single out AI for scrutiny, let’s pause and consider: when did you last see any of these transparent measures in press releases or media materials?
A comment in a press release attributed to a CEO or other senior member of staff, acknowledging that they never actually said the words and only saw them when signing off on the draft? (See also: ghostwritten blogs, public speeches, and social media posts.)
An acknowledgement that spellchecks or other word-enhancing tools had been used in the creation of a document?
An acknowledgement that a photo - either of a product or a person - had been Photoshopped or otherwise touched up to look more attractive?
A disclaimer pointing out that the sound you were listening to had been filtered, edited, or otherwise changed to sound better?
And yet, if we take transparency seriously, we should push this line of questioning further. If we really want to see disclaimers everywhere, then shouldn’t the following also be made clear?
A note at the bottom of every press release reminding the reader the material was shaped to present the company in the most favourable light, and that inconvenient facts may have been left out? Omission is a form of manipulation.
An acknowledgement that many of the statistics or “independent” surveys quoted in PR materials were actually commissioned - and often framed - by the very company promoting them? Transparency would mean making crystal clear not just the numbers, but who paid for them and why.
A disclosure that quotes attributed to “happy customers” were selected from a much wider pool, with any negative or neutral comments quietly discarded - sometimes even from the same sentence. If AI-generated testimonials raise alarm bells, should we not also interrogate the selective curation that already dominates brand storytelling?
And what about audiences at shows like the BBC's Question Time? Sites like Guido Fawkes' Order Order have never been shy about pointing out if an audience member has political leanings. Should the broadcaster be doing that up front instead?
Talking of audiences...
Of course, one could argue that the public already knows all this. People are savvy enough to realise that CEOs rarely type their own tweets, that photos in brochures have been enhanced, and that sometimes not even bylines in a newspaper are real (as any fan of Pat Roller from the Daily Record will recall).
Having said all of this, if audiences are already cynical about the artifice, does adding a footnote about AI involvement change anything? Do people genuinely want total transparency - or do they simply want to feel reassured that they’re not being blatantly lied to? Is it honesty they want or transparency - and is there a difference?
Transparency v practicality
Another question is practicality. If every image, quote, or statistic required a disclaimer, how long before we stop reading them entirely? Transparency could quickly become noise: endless asterisks, footnotes, and warnings, ignored in the same way people scroll past cookie notices on websites or the terms and conditions when activating a new device.
Perhaps the real challenge is not just demanding transparency, but making it meaningful and digestible.
And then there's the black hat problem
And even if industry bodies did set rigorous guidelines for AI transparency, there’s an uncomfortable truth: those operating in the shadows - “black hat PR” agencies, political disruptors, or anyone creating deepfakes for destabilisation - are not going to play by the rules. They won’t bother with labels, disclaimers, or disclosure. So the audience most at risk of being deceived is also the least likely to be protected by any transparency framework.
Crying for consistency
What AI is really forcing us to confront is an inconsistency. We are suddenly demanding that algorithms and large language models declare themselves at every turn, while continuing to accept - perhaps even expect - the quiet manipulations of PR craft. Maybe the real issue is not whether AI is transparent enough, but whether the entire communications industry needs to revisit its norms.
Perhaps, then, the real solution isn’t to label everything but to establish thresholds of public interest. Some manipulations are harmless polish; others - especially in politics, healthcare, finance, or safety-critical domains - carry significant consequences. But if that’s the case, who gets to decide where the line is drawn? Regulators? Industry bodies like the CIPR or PRCA? Journalists? Companies themselves?
So what to do?
AI may be the spark for today’s transparency debates, but the flames reach much further back. Unless we are prepared to ask hard questions not just of the machines, but also of the human practices we’ve long taken for granted, our conversations about transparency will risk becoming performative. The choice is whether we want blanket disclosure that nobody reads, or meaningful standards rooted in public interest. And that choice won’t be made by technology - it will be made by us.
For what it's worth, I can see why we need the disclaimers and disclosures. But I don't know if we need them all the time, or whether the posts that would need them most (mostly political ones, I suspect) would ever carry them - even if legally mandated to do so...