Darrell Heaps is founder & chief strategy officer of Q4, helping companies use AI to improve investor relations workflows and results.
Picture this: You find yourself with a 2.5-hour, meeting-free block in your day. (Oh, the blessed rarity!) A year or two ago, you might have used that time to hunker down—working on a couple of slides for a high-stakes presentation.
Today? You knock out the whole deck. And a draft agenda. And the follow-up email.
You haven’t suddenly become that much more prolific—you’re using large language models (LLMs) to your advantage. ChatGPT. Copilot. Perplexity. Pick your flavor, and pat yourself on the back. Just don’t get too comfortable.
The Rise Of ‘AI Slop’
There’s no doubt that modern LLMs have supercharged productivity. With the right prompt, output comes faster and easier than ever. And mostly, that’s a good thing.
But with speed comes an unfortunate side effect: a flood of content that has given rise to “AI slop.”
Slop is content that sometimes looks good at first glance … until you dig in. Think surface-level chatbot replies, bloated reports and email copy that sounds polished but falls apart under scrutiny. Or yet another earnest LinkedIn post comparing on-sale oranges to someone’s corporate career path. (Don’t get me started.)
Too few people are asking, “Do we need this?” Because when it’s that easy to spin up a 20-page report instead of a tight four-pager, or bust out a slick document no one asked for, the default becomes “Why not?!” rather than “Should we?”
When More Equals Mess: Beware The Consequences
Besides being mind-numbingly boring, slop has more serious consequences, both within and outside the business.
AI was supposed to speed things up, right?! Internally (and ironically), it can slow teams down. People get buried in shoddy content they have to fact-check, rewrite or realign. Everyone becomes an AI quality guardian.
When slop muddies priorities and clutters workflows, it becomes harder to move things forward. And just because it’s possible to generate 10 things in the time it used to take to create one, that doesn’t mean the extra nine are useful.
Externally, the stakes are high too. That’s because when AI-generated content is 90% right and 10% wrong, it’s 100% damaging. Why? That 10% isn’t about formatting issues or typos. It’s where material inaccuracies lie and where trust starts to unravel. Slop sows confusion, erodes credibility and sends the message that your business can’t be relied on for facts.
That should concern any business leader. In my work partnering with finance and investor relations (IR) professionals, the consequences are especially sobering. In IR, trust is nonnegotiable, and nuance is everything. One oversimplified earnings summary or tone-deaf explanation in a shareholder letter can shift the narrative and rattle investor confidence.
When Will The Slop Stop?
So is it “slop ’til we drop”? Let’s hope not, and let’s not wait to course-correct.
Unfortunately, this problem will likely get worse before it gets better. The tools are getting faster, content volume is increasing and the pressure to produce keeps rising.
But this trend is bound to hit a breaking point.
Internally, employees will burn out on reviewing meaningless output. Externally, customers and stakeholders will lose patience with content that sounds smart but says nothing. We’ll see clear demand for better experiences and outcomes, and businesses will need to deliver.
Cleaning Up The Slop
The good news? You can take steps now to stem the slop. Instead of making “more, more, more!” your mantra, consider the following:
Design for outcomes, not output.
Before you hit “generate,” clarify the goal. Who do you want to reach and why? Are you informing, persuading or prompting action? Make that the bar, not the word count.
Know your audience.
Think about what’s meaningful to them and what they’d want to consume. Shorter may be better. Real examples, tied to their persona or industry, trump fluffy metaphors. And put yourself in the reader’s shoes: Would you read this?
Use AI that speaks your language.
As I know from IR and finance, domain-specific models are best suited to understand nuance, grasp the context behind your data and reduce the risk of slop. They deliver output that aligns with your goals and drives better results.
Don’t fake it ’til you make it.
AI can sound convincing, but if its strategy and content recommendations leave you out of your depth, don’t pass them along blindly. You may be called out and find yourself on shaky footing. Instead, learn and understand first, then move forward if it makes sense.
Listen to what’s working.
Take advantage of analytics to track what resonates, what gets ignored and where interest trails off. Let performance shape your prompts.
Keep humans in the loop.
Don’t forgo a gut check on the output. Are the facts correct? Does the tone feel right? Is the content clear, appropriate and trustworthy?
It’s Not The AI; It’s The Approach
For the record, I’m a huge proponent of AI. AI in content, AI in product innovation, AI in customer experience, you name it. Today, we’re just scratching the surface of what’s possible, and it’s really exciting to think about what’s next.
There’s a ton to love about AI—but only when we use it right. Using AI to amp up the drivel? That’s not a use we should be proud of.
We’ve heard it before, and it still rings true: Less is more. When we rein in the noise and focus on clarity, we don’t just improve content. We improve decisions. And in business, that’s where results come from.
