Do you know what makes me genuinely happy when reading any kind of text? Typos. For anyone who knows me, this is ironic, as I have been somewhat of a grammar nazi for most of my adult life and I have demanded precision from anyone, person or company, who dared to demand access to my attention or my time. Now, however, I love spotting a typo in the wild, in an online article or even a book, because typos occur naturally when humans write text. Sure, you are supposed to edit, but typos sometimes slip through the editing process, especially if you edit your own writing. AI-generated text, on the other hand, never includes typos. Pretty much everything else that can be wrong with it is wrong, but typos and blatant grammar mistakes are still the mark of human hands typing actual words on a keyboard. I am not yet fond of grammar mistakes, though; I wouldn’t take it that far. Do you know why AI-writing is incapable of typos? Because AI-writing is not “writing” in any recognisable or meaningful sense of the word. It is copy-pasting at an unprecedented scale, a glorified version of cheating on an exam paper. Over the last few weeks, I’ve heard it called, on multiple occasions, the world’s largest plagiarism machine, and I find that definition pretty accurate.
Let’s take this several steps back and address the starting point of this article, the event that “prompted” me (see what I did there?) to write about it in the first place, when I had a completely different article lined up. The triggering “event” for this analysis was Wikipedia publishing its first guide to detecting AI-generated content. The page, which, at the moment of writing, is only available in 6 languages, is not meant as a human-training tool for AI-detection in general, but it specifically states that it is intended as a “field guide to help detect undisclosed AI-generated content on Wikipedia”. The introduction also states that the list is “descriptive, not prescriptive; it consists of observations, not rules”.
Why, you may ask, should you bother to learn how to spot AI-writing, when you can use AI detectors for that? I am very glad you, imaginary reader that I have just made up for this very purpose, asked precisely the kind of question that I need in order to drive my point across. And please, remember this weird turn of phrase: I put it here for a reason.
Reason number 1: because the whole point is that letting AI do your thinking for you has led you to no longer knowing what is real and what is AI-generated in the first place.
Reason number 2: because AI detector results are, like most AI-driven results, mediocre at best.
Reason number 3: because the list can help you, as a discerning reader (please, be one!), analyse and think for yourself, so that you know when the source sharing a message with you did not think you were worth their time or their focus and can reciprocate by denying them yours.
Because AI is not being used merely to write online blog posts, newspaper articles, or newsletter headlines. It is being used to write private texts, basic work email in the writer’s native language, even pick-up lines. We collectively started using AI to take care of the most boring daily tasks, thinking we were saving our brainpower for better, more complex, more interesting things, and we ended up, for the most part, avoiding the application of any brainpower to any task, to the point that, in the rare situations in which we have to think for ourselves, the effort is unbelievable. We turned tools that could have been passable servants into terrible masters.
How do you detect AI-writing without using an AI detector (and the one thing Wikipedia missed)
While it is, strictly speaking, a tool to be used within the context of Wikipedia articles, the list frames some of the specific quirks that betray AI-writing in general, and it is an excellent resource for those who wish to start educating themselves on how to spot machine-written text in English (the issues with non-English writing are slightly different, which is why the Italian version of this text will probably not be a literal translation of the English-language one. But then again, literal translations are seldom useful and almost never interesting).
You can read the list yourself, but I think it might be useful, in this context, to explain what I meant when I said that AI-writing is not writing. Large language models such as ChatGPT or Claude don’t understand language the way humans do, mostly because they are not meant to “understand” anything; they are meant to compile. They operate through pattern recognition, analysing billions of text samples (and being trained, for commercial purposes, on stolen intellectual property, including this article, because we haven’t gotten round to poisoning the website for AI yet) to predict which word is most likely to follow the previous one or, more precisely, which cluster of characters is most likely to follow any other given cluster of characters.
Without going into the mathematical approach to this mechanism, think of musical riffs or idioms that are used collaboratively by two speakers of the same language, whereby the first speaker cites the first part of the riff or idiom and the second one either completes it or at least knows exactly what to expect afterwards. A classic example in English would be “shave and a haircut – two bits”. Because you have heard it chanted enough times, you know that the “two bits” part follows whenever anyone pronounces or even just taps out the “shave and a haircut” one. Every language has its own range of tongue-twisters, rhyming slang phrases, or idioms that can be used in this way (Spanish and Italian are particularly creative about it). Now think of LLMs as “users” of language that apply this exact same type of pattern recognition to every word or combination of words, so that they can “predict” (not “decide”) which characters are statistically more likely to occur in any given context.
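If you want to see how bald that statistical mechanism really is, here is a deliberately tiny sketch in Python (the three-line “corpus” and every number in it are made up purely for illustration; real models work on sub-word tokens and billions of parameters, not a dozen whole words). It simply counts which word follows which, then “completes” a phrase by always picking the most frequent continuation:

```python
from collections import Counter, defaultdict

# A deliberately tiny, invented "training corpus". Real models ingest
# billions of documents; the principle is the same, the scale is not.
corpus = [
    "shave and a haircut two bits",
    "shave and a haircut two bits",
    "a haircut costs two pounds",
]

# For every word, count which words follow it and how often.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def complete(prompt, length=5):
    """'Write' by always appending the statistically likeliest next word."""
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("shave"))  # -> "shave and a haircut two bits"
```

Nothing in there decides anything: it has merely memorised that “two bits” tends to follow “shave and a haircut”, which is exactly the kind of association an LLM encodes, only at an unimaginably larger scale.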
The Wikipedia list does contain generic red flags for AI content, but it also focuses on specific Wikipedia-relevant aspects and markings that are not strictly relevant to my analysis. I am not going to paste the full list here (you can go read it on Wikipedia); I will, however, point out one feature of AI-writing that I think the authors of the Wikipedia list have missed and that, to me, is both the most easily recognisable and the most annoying trait of machine-written text: poorly phrased rhetorical questions.
The reason chatbots use rhetorical questions in the first place is that they are included in the most popular prompts for “natural” writing. This is a favourite trick of students and some creatives who are aware of the fact that their teachers or their clients will probably use AI-detection tools to scan their writing and therefore ask chatbots to write in a way that is specifically meant to fool them, by being imprecise, emotional, and redundant. Now, the problem with telling a machine to act like a human is that you can’t simply tell it to do so: you have to explain what that looks like. Which is why, as far as automated AI-detection goes, you will get poorer results by adding the sentences “write like a human” or “write so that AI-detectors won’t spot that this text was machine-written” to your prompt, and way better results by listing features of the “human” style you want to implement.
There are several prompts for this and most of them include rhetorical questions as an essential feature of human writing patterns. And AI is terrible at rhetorical questions, because it understands their point, but can’t grasp their structure. Do you remember when I asked you to keep in mind the weird turn of phrase I used in a rhetorical question? AI couldn’t do that. In my experience studying and using LLMs, a fail-safe way of spotting their attempts at human-sounding writing is the two-word rhetorical question, based on the most basic definition of what a rhetorical question is supposed to be. The Merriam-Webster dictionary defines it as “a question that is asked for effect, rather than from a desire to know the answer”, but no LLM, as far as I have been able to ascertain, can explore the meaning of this sentence and act accordingly. The only kind of rhetorical question LLMs come up with is created by extrapolating the initial concept of a sentence and turning it into a two-word question. For instance, instead of writing “We replaced 80% of our company vehicles with electric or hybrid models, which resulted in a 30% decrease in our total CO2 emissions”, a chatbot trying to sound human might write “Our company decided to take a significant step towards sustainability: replacing 80% of our fleet with electric or hybrid vehicles. The result? A 30% reduction in the company’s overall carbon emissions”.
It’s tacky, it’s cheap, it’s almost painfully shallow. Look at me, Stephen King, I’m throwing adverbs into my prose, just to be annoying. I’m also throwing in this quote to show that I am a human who read Stephen King’s commentary on what constitutes good writing, and I am now ignoring it, because we have bigger problems to tackle than the quality of my prose.
AI can’t ask rhetorical questions because it can’t grasp the need for them; it just knows that humans sometimes present pointless questions within texts, simulating an imaginary audience, but it can’t go back further than one sentence to achieve that, unless someone specifically trains it to do so. It can’t even correctly guess which part of the sentence would be more meaningful when turned into a question. In the aforementioned example, I would argue it makes more sense to focus on the reason, rather than the result, because the reason (the company’s interest in sustainability) is what you would want to flaunt, whereas the result is just the means of said flaunting. If I had to rephrase the original sentence to include a rhetorical question, I would probably write something like “this year, we invested X amount to revolutionise our corporate fleet, swapping 80% of our vehicles for hybrid or fully electric models. Why would we do that, when all of our cars and trucks were still perfectly viable? Because we felt we had a duty to reduce the overall carbon footprint of our operations. And we succeeded: our CO2 emissions went down by 30%”. Do I need to say it took me approximately one minute to reword that sentence, which is about 20 times longer than it takes AI to come up with the first variation?
Allow me to be punctilious way beyond what makes sense in this context: the specific type of two-word rhetorical question I mentioned is what AI produces if you ask it to write like a human. If you feed it a single sentence and ask it to rephrase it to include a rhetorical question, it produces longer variations, but it is still likely to miss the point entirely and therefore need a few attempts to produce a usable result. Which means, on this particular task, it is still slower than a competent human, because you are unlikely to get anything close to the efficacy of the example I rephrased in under 10 minutes of trying. Unless you take the time to phrase the prompt in a way that practically suggests the exact wording the chatbot should go for. And, if you can do that, why would you not do it yourself? And yes, you can try to standardise the prompt if, for some weird reason, you wanted to train your chatbot on acing rhetorical questions thousands of times, but the results are likely to be poorer if the type of content, its form, and its premises deviate too much from the original model.
Long story short: getting really good results with AI-writing is possible, but it is no quicker, less environmentally sustainable, and possibly more frustrating and expensive than getting a competent human to get it right the first time.
AI stopped plagiarising humans and started plagiarising itself. And humans have started plagiarising it
I used the example of rhetorical questions to point out an inevitable trend: AI has no option but to dumb down any material it draws from. It has no capacity for actual research; at best, it can match sources confirming the same data and assess them as more worthy of mention based on statistical relevance, but even then, the risk of hallucinations, of LLMs coming up with made-up data and sources, is incredibly high.
The problem with AI is not that it produces mediocre prose, it is that it does not, in fact, produce anything, because it is not capable of doing so. Even if you do not have an ethical problem with plagiarism and with making use of others’ intellectual labour to churn out “your” own content, you should have a problem with it literally poisoning the metaphorical well you are metaphorically drinking from (yes, I want Stephen King to despise me. No, I don’t care if you think I’m being weird about it).
When LLMs were first made available to the general public, around 2022, most of them only had access to what was available on the internet up to 2020, i.e. they were drawing from almost entirely man-made content and they were two years behind on the history of the world they were asked to interpret. Since then, their usage has exploded. I haven’t been able to find credible and consistent data about it, specifically about machine-written text: different news outlets in different parts of the world mention anything between 50% and 90% of all written text on the internet currently being AI-generated (not on this blog though. This blog is handcrafted and oven-baked), but I am not going to quote them here, because none of the newspaper articles I have found provides a reliable source or explains how the research was conducted. The few sources I did find were extremely biased and directly involved in the development of generative AI tools.
Wherever the truth may lie, there are empirical observations that have some merit within this context. For instance, we can all agree that there is enough AI-generated text on the internet for LLMs to draw directly from it when asked to produce new content. If we consider an instance in which an LLM responds to a prompt by drawing solely from pre-existing AI-generated text, we can expect the result to be a downgraded, simplified and dumbed-down version of something that was, in its turn, a downgraded, simplified and dumbed-down version of something else. Hundreds of iterations from the “original” and, by now, no longer traceable human-produced content will inevitably result in extremely low-quality output, packaged in a way that has been made acceptable by the overwhelming presence of AI slop on the internet. LLMs are neither made to add anything to the material they are trained on nor capable of doing so. They can elaborate, but they cannot integrate; they can’t draw inspiration or make connections from other planes of human existence beyond the context they are being asked to consider within each prompt. They can’t suggest unique points of view, because they are not unique; they can simulate relatability, but they can’t filter any piece of knowledge through the sieve of human experience, because they are ontologically incapable of either viewing or experiencing. AI is that exceptionally stupid coworker who is incapable of following directions, drinks the office out of coffee, never cleans up after himself, tries to steal your ideas and your lunch, and somehow makes more money than you, and you got stuck with him because he is the boss’ nephew and your boss assures you he is a genius, you just aren’t allowing him to express himself. We elevated a characteristic that we would find despicable in a human to a coveted feature in technology, thus robbing millions of annoying, cheating, idea-stealing humans of the possibility of charging for it.
In fact, if you have wondered why humans have started sounding like AI on occasion, it’s because we are already incorporating its way of using language into our own, organic communication. We fed LLMs human-written text, they regurgitated it in a different package, we repeated the process, and we are now feeding off the result and regurgitating it yet again. Ironically, what makes it increasingly difficult to distinguish between human writing and AI writing is that we are training ourselves on what AI made of the human-made material we trained it on. The inevitable result (no question mark) is the relentless impoverishment of public debate across the board, from politics to marketing, from art to comedy.
OpenAI’s first brand campaign
Let’s make a brief detour into generative AI that produces video, rather than text. Do you know who is perfectly aware of the limitations of AI and unwilling to settle for the mediocre output it produces? AI tech giants. OpenAI’s first major brand campaign for ChatGPT, launched in autumn 2025, much like its first $14 million ad for the Super Bowl in February, did not feature any of its products. While the company states that AI was used “behind the scenes”, the actual adverts are scripted, directed, and shot (or animated, in the specific case of the Super Bowl ad) by human creatives. The campaign ads were even shot on 35mm film. The official reasoning behind this choice appears to be a desire to stay away from the de-humanising feeling that is often associated with AI and to reassure the audience, somehow, that AI is “coming in peace” and that it aims at making everyone’s life easier and more enjoyable, allowing humans to concentrate on the things that actually matter, rather than coming for their jobs, their creativity, and their capacity for thought.
There may be some truth to it, but, judging by the cringe-worthy mediocrity of the first entirely AI-produced ads by Coca-Cola and McDonald’s, one might be forgiven for guessing that those in charge of budgeting at OpenAI might have factored quality into the equation. These two specific examples, in fact, cast doubt on the very concept of AI as a tool that saves time and/or money while achieving even passable results. The Coca-Cola ad, for instance, featuring animation that would be considered barely acceptable for a recently graduated digital design student, took over a month’s worth of prompts, with AI churning out approximately 70,000 videos, and, even though Coca-Cola has not disclosed its actual cost, it is unlikely to have come in cheaper than any other 60-second ad in the same style. On the other hand, it has almost certainly consumed more water and electricity and caused more emissions than any traditionally produced equivalent.
Mark Ritson, writing for The Drum, summed up the irony of OpenAI going the opposite way: “The industry that promised to revolutionise everything has been forced to embrace the most traditional marketing approach imaginable. Turns out you can’t algorithm your way to distinctiveness. You can’t A/B test a path to emotional connection.”
Overwhelming quantity, underwhelming quality
Back to writing. If you haven’t taken the time to read the Wikipedia list I mentioned at the beginning, stop and go read it now, I’ll wait. For this section, I’d like you to focus specifically on the signs of AI writing as detectable in content, such as the misplacement of emphasis and the superficiality of analysis.
Cornell University researchers documented similar patterns in scientific publishing. The study they conducted, titled “Scientific Production in the Era of Large Language Models,” was published in Science, and it is behind a paywall, but you can access a shorter analysis on the university’s website, in the news section. Researchers Keigo Kusumegi, Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, Toby Stuart, and Yian Yin found that, whilst LLMs help scientists, particularly those who are not native English speakers, produce more papers, the overall increase in publications resulted in more work for university research evaluators, funding agencies, and journal reviewers, who report having to struggle through a flood of AI-polished, well-written but substantially hollow papers, devoid of scientific merit, to identify genuinely valuable work, which mostly turns out to have been written without the use of AI. There is one word to sum up the effect of AI in scientific writing as described in this paragraph and in the whole study: noise. And the same goes for non-scientific writing: AI writing is mostly noise.
Even when the scientific value of specific pieces of content is not at stake, the homogenising effect of LLMs threatens something precious: the rich diversity of human expression. Language evolves differently across regions, communities, professions, and individuals. A scientist writes differently from a poet, a Glaswegian differently from a Sicilian, a monolingual person differently from a bilingual one, a teenager differently from a pensioner. Within these variations lies part of the essence of human communication, carrying culture, identity, and meaning beyond mere information transfer.
LLMs flatten this diversity, and we end up discounting and discarding it as well, because we draw from AI responses to our prompts to keep the flow of communication going. Trained predominantly on English-language internet content, they push everything towards a kind of globalised, corporate standard, where regional idioms disappear, colloquialisms get smoothed away, the particular cadences that mark someone’s background, education, or personality fade into algorithmic averages and, most of all, we get replaced.
We spent decades fearing the day when machines would replace workers in their tasks, never envisioning a time when they would instead replace consumers in their private interactions. In any text or email exchange, at this point, biological humans may or may not be reduced to mere carriers through which two instances of the same LLM exchange messages back and forth.
Let me be clear about one thing: I am no Luddite; I do not believe there is no use for artificial intelligence. But neither do I believe that any of the uses currently being pushed by platforms such as OpenAI, Anthropic or Perplexity (which still runs on LLMs, even though it specialises in online searches) are in any way good, profitable, useful, relevant or even remotely interesting.
These are not just my impressions on the matter: the first studies are already popping up, quantifying exactly how much and in what way AI-writing is poorer, flatter, more homogeneous and, overall, more boring than human-produced content. The most interesting study I have come across so far was conducted by Liwei Jiang, Yuanjun Chai, Margaret Li, Mickel Liu, Raymond Fok, Nouha Dziri, Yulia Tsvetkov, Maarten Sap, and Yejin Choi.
These researchers set out to find a scalable method for evaluating LLM output diversity, in order to address concerns about the risks that AI-generated content poses in terms of “long-term homogenization of human thought”. They used Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth, and they claim to have introduced “the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs”. In other words, they posed a large number of open-ended questions to different AI models, questions to which thousands of answers were possible, but the responses they obtained were extremely homogeneous and generally ascribable to a handful of clusters.
They speak of a “pronounced Artificial Hivemind effect […] characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs”. The most striking example can be found in the responses to the query “Write a metaphor about time”. Despite the diversity of model families and sizes included in the study, the responses form just two primary clusters: a dominant cluster centred on the metaphor “time is a river”, and a smaller cluster revolving around variations of “time is a weaver”. In other words, when asked a question that welcomed thousands of potentially creative answers, LLMs came up with a very narrow set of responses, mostly variations of the same two tropes. While definitely perfectible (it would be interesting to read a deeper discussion of the findings and to see the study replicated on a non-exclusively English dataset), it nevertheless takes an illuminating first step in deconstructing the ‘myth’ of artificial intelligence as capable of grasping all human knowledge in a matter of seconds. After all, even Deep Thought in Douglas Adams’s “The Hitchhiker’s Guide to the Galaxy” took millions of years to come up with the ultimate answer to the question of life, the universe and everything.
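If you are curious about what “measuring homogeneity” can look like in practice, here is a toy sketch. To be clear, this is not the study’s actual pipeline (the researchers work with the full Infinity-Chat dataset and far more sophisticated similarity measures); the handful of responses below are invented for illustration, mimicking the river/weaver clusters the authors describe, and the script simply scores how much vocabulary the answers share, the crudest possible proxy for that clustering:

```python
import re
from itertools import combinations
from statistics import mean

# Hypothetical responses to "Write a metaphor about time", invented purely
# for illustration; they mimic the river/weaver clusters the study reports.
responses = [
    "Time is a river that carries us all downstream.",
    "Time is a river, flowing past whether we swim or drift.",
    "Time is a patient river, wearing every stone smooth.",
    "Time is a weaver, threading our days into a single fabric.",
]

def words(text):
    """Lower-case set of words, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap(a, b):
    """Jaccard similarity: shared vocabulary over total vocabulary (0 to 1)."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

# Average pairwise overlap: the closer to 1, the more the supposedly
# different answers collapse into the same handful of tropes.
print(round(mean(overlap(a, b) for a, b in combinations(responses, 2)), 2))
```

A genuinely diverse set of metaphors would push that average down; answers recycling the same two images keep it stubbornly high.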
The enshittification of AI
Writer and activist Cory Doctorow coined a term in 2022 that helps explain what’s happening to AI-generated content and the platforms distributing it: “enshittification.”
Enshittification describes how platforms and services decay in predictable stages: first, they attract users with offerings that are, or at least appear to be, excellent; then, once users are locked in, they degrade services to benefit business customers; finally, they abuse business customers to extract maximum value for shareholders. The process ends with platforms becoming “giant piles of shit”, to use Doctorow’s technical terminology. If you haven’t read the book and don’t plan to read it anytime soon, go watch comedian and social commentator Alex Falcone’s video on why Google suddenly sucks: you’ll get the gist.
We’re witnessing this with AI systems in real time, and those of us who have been using and studying LLMs since their earlier stages are already looking back and wondering whether the red flags were already there in the early 2020s. Early LLM demonstrations promised revolutionary productivity gains and creative augmentation, so users adopted these tools in droves, integrating them into workflows and creative processes. We are currently at the stage of output degradation, which is becoming perceptible even to non-expert users. On the other hand, there is still a swath of services that less tech-savvy users are rushing to adopt because they carry the aura of novelty and tap into deep and basic needs, while presenting enormous dangers that most adopters are unaware of. This is the case with bots used as makeshift therapists or romantic partners: they have a better chance of making enshittification palatable, simply because they function on a premise of emotional manipulation that can, in theory, lower users’ expectations for their own performance, before dropping their already meagre standards.
Doctorow argues that enshittification stems from monopoly power combined with weakened constraints. When companies face neither meaningful competition nor regulatory oversight, they optimise ruthlessly for extraction rather than value creation. Applied to AI, this means companies prioritise engagement metrics and subscription revenue over genuine utility or creative quality. But, spoiler alert, this isn’t working.
Is AI a bubble?
As author Michael Mezzatesta wisely said, when all the financial newspapers are talking about an AI bubble at the same time, we should listen. And there is indeed talk of a bubble, mostly because, so far, AI is attracting massive investment but not making any profit. Quite the contrary: it is losing billions every year. This per se is not unusual: it is only brick-and-mortar shops that have to shut down if they don’t hit the ground running and start making a profit straight away. Tech start-ups and massive concerns are known to spend the first years of their existence as monumental loss-making machines before turning profitable. Amazon is probably the most famous example of this phenomenon: the company operated at a loss until it was able to blow enough of its competition out of the water to start ruling the market and making off-the-scale profits. Uber did pretty much the same thing.
According to MarketWatch, OpenAI might be set on the same course, only with projected losses that break every pre-existing scale (in 2024, it lost approximately $5 billion on just $3.7 billion in revenue, and the situation only worsened in 2025). Which means that there is no guarantee they will eventually be profitable and, judging by the massive and public pushback against the impact of AI on society, they might not become profitable at all. Which, of course, will not be a problem for shareholders, since, to keep with Mezzatesta’s analysis, in this case profits will be privatised, but losses will be socialised (as all former “bubbles”, from dotcom to sub-prime, should have taught us by now).
Deutsche Bank projects OpenAI will accumulate roughly $143 billion in negative cumulative free cash flow between 2024 and 2029 before achieving profitability. Where does all this money go? Primarily on the enormous expense of running the data centres and graphics processing units required to train and operate large language models and platforms such as Midjourney. Those same data centres that, as we all recently learned, generated as much CO2 as the whole of New York City in 2025. The rest is spent on running the actual system (i.e. producing content), on sales and marketing, research costs, and real estate, with a significantly lower share going to salaries.
The company’s own projections, assuming everything goes according to plan, show operating losses ballooning to $74 billion in 2028 alone before pivoting to profitability by 2030. This relies on revenue growing to $200 billion annually whilst somehow slashing the current spend-to-earn ratio. If those rosy scenarios don’t materialise, OpenAI faces potential insolvency despite raising billions in successive funding rounds. OpenAI’s competitors aren’t doing much better.
Preparing for the post-AI world
While it is objectively too soon to predict whether or not AI will prove to be a bubble, it is not too soon to imagine an alternative scenario. On one hand, we have to consider the hard data: ChatGPT will probably not go the way of Google’s attempts at social media, which failed miserably and were simply retired, but it will change. It may go the way of YouTube, with a limited number of users paying for a subscription and others using it for free, but with increasing limitations, even less data protection, or an increased presence of sponsored results, in much the same way as Google Search. Some AI models, powered by OpenAI or by any of its competitors, will specialise and they will probably survive, since AI can be used effectively to automate specific tasks after being trained on specific data, but this would hardly qualify as innovation: it’s what AI was used for up until the early 2020s. Some uses of AI will be shadier: mass surveillance will be achieved by cross-referencing data from facial recognition software, social media posts, medical information, and financial transactions, but that is not part of any service that will ever be available to the public – rather, it is part of what Palantir is already selling to multiple governments. Then there is the single use humans look forward to whenever a new technology pops up: sex. Many humans will definitely consider bots as mates in the future.
Then there is the whole question of AI agents and AI-powered browsers, such as the upcoming Atlas, that will be employed to extract our data in exchange for completing mundane tasks such as booking tickets for events, putting them in our calendars and then messaging our friends and family to confirm, or looking up recipes or restaurants and compiling shopping lists. All of this is incredibly dangerous for both our mental prowess and our privacy, and a spectacular gift to anyone interested in exploiting the tiniest security breach for financial gain or social control. AI will also be used for illegal purposes, such as creating spyware and ransomware quickly, easily, and efficiently, but again, I won’t get into it because that’s not the focus of this piece.
But what of generative AI? What of marketing campaigns, newspaper articles, Reddit posts, lists, emails, press releases, and video scripts? Generative AI might end up either disappearing completely from those scenarios or positioning itself as the B2B equivalent of fast fashion or fast food. Most of our garments are mass-produced, and we collectively understand that such garments are of inferior quality and should therefore come at a lower price than individually tailored pieces of clothing, particularly those featuring sophisticated elements that require artisan skills, such as lace or certain kinds of leather work. In much the same way, we understand that food prepared for us by skilled cooks, made with organic ingredients, is of better quality and will be more expensive than a fast-food meal, made with frozen ingredients of the lowest acceptable quality for human consumption, prepared in industrial-sized machines and containing preservatives that are meant to keep its flavour unchanged over time, even though they might be harmful to our health in the long run.
My bet is that something along those lines will happen (or, rather, will continue to happen, as this is already the case) to the type of content that is currently being pumped out at a faster rate than ever because of generative AI. Once again, the divide will be financial, with smaller firms and individuals tempted to use AI to produce their content without having to enlist the help of actual human creatives, and the outcome being proportional to the investment, in terms of overall brand reputation, efficacy, loyalty, and customer relations. Communication made with actual words thought and written or translated by actual humans is already becoming an added value, just as the use of real natural fibres is in fashion and organic ingredients are in the food industry.
We must not forget that every instance of ChatGPT or Claude usage is, in fact, an interaction with the same platform. If LLMs were creatives, that would mean, for millions of brands, organisations, politicians, and associations all over the world, entrusting their communication to the same agency, with the same creative director, the same team, showing the same skills, the same experience, working with the same style and only partially capable of deviating from it. It would mean the ultimate communication death: sounding and looking like everyone else.
Within this scenario, I am willing to bet that distinctive voices, human creatives with specific skillsets, different proclivities, and unique talents will be in demand much as luxury goods are. And this is not just because I have moral and ethical qualms about the way AI is being developed and used (I do), but also because there is one principle that markets need to follow in order to extract resources without actually enslaving their own consumers: scarcity. Human talent is scarce. AI is not. AI is everywhere, it is cheap (unless your community sits next to a data centre, in which case you will pay for it in the form of increased electricity and water bills), it is low-access, low-value, standardised, and it constantly begs for your attention. The only option AI developers have to gouge prices and keep squeezing resources out of users is to make the use of AI inevitable or even compulsory, because their products are just not competitive enough. And, although attempts in that direction are being made through aggressive political lobbying, pushback does exist and it has been successful on multiple fronts. If the market is to stay king, however, naturally scarce resources that are also useful will keep increasing in value. And competent, creative humans are both scarce and indispensable.
Sources:
The Wall Street Journal on Coca-Cola’s AI advert
Mark Ritson’s article on “The Drum”
The Cornell University study, published in “Science”
The summary on the Cornell University website
The “Artificial Hivemind” study
MarketWatch on OpenAI’s losses
CNBC and the Wall Street Journal on the same topic
The Guardian on the energy consumption of AI platform data centres