Blog

Long copy, long yawns. Our self-test shows how shallow AI content really is.

Dennis Buchmann
June 6, 2023
5 mins

My grandmother used to say, “You get what you pay for.” The same goes for AI-generated longform. It costs almost nothing to produce, and too often it shows in the substance. Here’s why it still takes human intelligence to tackle the hard problems and deliver high-quality content.

Recently I tried Jasper, a generative AI platform for businesses that, in its own words, helps teams create on-brand content ten times faster wherever they work online. I find ChatGPT quite helpful for brainstorming, wording, synonyms, and first passes at short copy. Jasper, built on ChatGPT, claims it can produce strong longform in no time. If I wrote an article on the basics of good employer branding without AI, it would take me half a day to two days, depending on depth and expertise. Could Jasper do it in two minutes?

Well, the software produced a grammatically correct longform text that wasn’t factually wrong – but I found it mind-numbingly thin. I even sent it to a colleague who thought I had written it myself. Her reaction: “Well, Dennis… how should I put it… I thought you could do better. Our standards are a bit higher, aren’t they?”

To quote the summary of Jasper’s article: “Building a strong employer brand is not an overnight process. It involves understanding your target audience, developing messaging that resonates, showcasing your culture, and evaluating your reputation. But it’s worth it. A strong employer brand can help you attract the best talent, reduce turnover rates, and increase employee satisfaction. So, start investing in your employer branding efforts today, and watch your brand reputation and bottom line thrive.”

At first glance, it doesn’t sound wrong. It’s even a bit motivating. But lines like “Building a strong employer brand is not an overnight process” signal a high blah factor – a platitude. If you shower, you get wet. That’s the level, and it’s exactly what we need to watch out for.

The thin result may have come from the data ChatGPT remixed: content people wrote about the basics of employer branding. Real specialist literature may not have been in the mix, or not in meaningful volume. Or it was my prompt: “I need a blogpost about the basics of employer branding. What are the most important things to consider when setting up and managing a successful employer brand?” I don’t know.

After many more attempts to produce solid longform with Jasper, it’s clear what an AI like ChatGPT can do, and what it can’t:

  • The algorithm can recombine data produced by human intelligence and calculate which words, in what order, are most likely to produce an optimal – or at least prompt-relevant – output.
  • The algorithm still lacks a conceptual grasp of what it’s computing. It doesn’t generate genuinely new ideas, and it can’t reason with understanding. So despite the polished form, one thing is missing: intent, the substance that moves a conversation forward.

Computer scientist and author Jaron Lanier wrote in The New Yorker: “There is no such thing as artificial intelligence. Instead, the most accurate way to understand what we are building today is as an innovative form of social collaboration. A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds.”

The sun doesn’t rise because the rooster crows; it’s the other way around.

Neural networks spot patterns better and faster than people, for example inferring disease from physiological measurements. They do not understand cause and effect, as Thomas Brandstetter of the Max Planck Society puts it. We still have to teach these systems that the sun does not rise because the rooster crows; it is the other way around.

For now, at least until causal models actually work (the Max Planck Society is working on them), Jasper and similar tools are fine for short marketing copy and for financial, sports, and other informational updates. An algorithm that takes over text production where editors need more finger work than brainpower is, admittedly, a welcome relief.

I don’t yet see brand stories grounded in values and strategy, clear core messages, thought-leadership pieces, or truly coherent copy across an entire website. New ideas, connections, theses, opening questions, and reasoned arguments remain – at least for now – the work of human intelligence.

One worry I have about Jasper: it could unleash a storm of hot air. People will churn out masses of thin copy and burn readers’ cognitive bandwidth. We’ll see fewer strong ideas and more watered-down content. If that output feeds back into the training data, we risk a self-reinforcing cycle of thinness. The employer branding piece is, at least, content. It might impress a few unsuspecting readers. But I wouldn’t pay $468 a year for Jasper on quality alone.

Given the flood of messaging everywhere, the rule should be simple: less content, better content. At mc-quadrat, we still make it the human way – teams that collaborate, exchange perspectives, and know what quality means to real audiences. We put analysis, insight, ideas, and clear concepts behind every piece. We start with the why and combine experience with curiosity and a willingness to shift perspective. AI can’t match that yet.

Note: We’ll likely see more labeling that distinguishes AI content from human-written content. The PastaGPT flap and Burda’s recipe booklet are early cases in point.