Automated journalism can't replace the human form, but only if reporters, writers and editors are able to eschew the baggage that AI tends to carry with it: bias, fakery and the inevitable cliché
Martin Amis, the British author who died recently, is best remembered for works like London Fields and The Rachel Papers, but it’s The War Against Cliché that comes to mind in the era of artificial intelligence (AI), automated journalism and robot-written stories. The War Against Cliché is an anthology of Amis’ essays and book reviews, and the title comes from an essay on James Joyce’s Ulysses, which Amis reckoned was Joyce’s “campaign against cliché”. “All writing is a campaign against cliché. Not just clichés of the pen but clichés of the mind and clichés of the heart,” wrote Amis. In television interviews, he would expound on this in his own inimitable way. “Whenever you write ‘the heat was stifling’ or she ‘rummaged in her handbag’, this is dead freight… these are dead words, herd words, and cliché is herd writing, herd thinking and herd feeling. A writer needs to look for weight of voice, and freshness, and make it your own,” he once told talk show host Charlie Rose.
Avoiding clichés, smirked Amis, doesn’t mean you look for synonyms and “wiggle” them around. The process for him was about being “faithful to your perceptions” and transmitting them as faithfully as you can: “I say these sentences again and again in my head till they sound right. There’s no objective reason why they’re right, they just sound right to me… it’s just matching up perceptions with the words in a semi-musical way, even if it is atonal.”

Before we get to bot scripting, let’s try to define what good journalistic writing is. That’s not easy. It is subjective, guided to some extent by what the reader is looking for. Herd words may be precisely that: they work for many, bringing comfort and the familiar. After reading about ‘headwinds’, ‘tailwinds’ and ‘unprecedented disruptions’ through the pandemic, a set of your regular readers may know exactly where you’re coming from and where you are going. The ‘dead words’ will live on. Or so you think. The only problem? Bots can emulate that penchant for the pedestrian.

Let’s start with what a bot can do as well as you. Grammar? Check. Spelling? Check. Now let’s raise the degree of difficulty. Facts? Er, not always. Early this year, The Washington Post reported that internet sleuths had discovered that tech website CNET had “quietly” published AI-generated features; CNET described the move as an experiment. The apparent lack of transparency was compounded by inaccuracies in some of the stories; another tech site, Futurism, referred to them as “very dumb errors”. For instance, one automated article about compound interest incorrectly said a $10,000 deposit bearing 3 percent interest would earn $10,300 after the first year, when the interest actually earned is $300 ($10,300 is the total balance, not the earnings). From accuracy, let’s move to bias. Can machines be prejudiced?
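The arithmetic the bot fumbled is trivial to verify. A minimal sketch in Python, using the figures from the CNET example (the variable names are illustrative, not from any published code):

```python
# First-year interest on the deposit from CNET's garbled example.
principal = 10_000   # initial deposit in dollars
rate = 0.03          # 3 percent annual interest

interest_earned = principal * rate                 # what you actually earn
balance_after_year = principal + interest_earned   # total in the account

print(interest_earned)     # 300.0  -- the correct "earned" figure
print(balance_after_year)  # 10300.0 -- the figure the bot mislabelled as earnings
```

The bot's error was to report the closing balance ($10,300) as the interest earned ($300), a confusion any human fact-checker would catch in seconds.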
In AI for You (Bloomsbury India, 2022), authors Shalini Kapoor and Sameep Mehta write: “Bias in algorithms is dreadful, but we need to understand how it creeps in. Is it bias in the data fed into the models or is it the bias of the data scientist creating the model?” Kapoor and Mehta give the example of Amazon scrapping its recruitment software because it was “seemingly biased against women, and was choosing candidates based on 10-year-old data, with more men being selected. The software had downgraded the rating of two women-only colleges as potential recruitment sources.”

The question of bias in bots is relevant at news organisations at a time when many are leaning on AI to assist writers. CNET’s editor Connie Guglielmo said in a statement that the goal of the AI “experiment” was to see “if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective”. The good news? Humans can’t be replaced in newsrooms. The not-so-good news, as Kapoor and Mehta conclude: “It is humans who are biased and not AI.”

Now let’s get back to where we started: the cliché. Bots thrive on clichés; then again, a lot of journalistic writing is riddled with truisms and hackneyed phrases, but it doesn’t have to be that way. As a writer, admittedly on a deadline, your challenge is to find an original turn of phrase that a bot can’t. It can’t, in Amis’ words, “match up perceptions with words in a semi-musical way”. When ChatGPT was told to write a poem in the style of Chicago-born poet Jaswinder Bolina, the results were predictable.
As Bolina wrote in The Washington Post in April, the poem went on for “eight unremarkable stanzas before culminating in its hackneyed conclusion: Let us embrace the journey, / And all its twists and turns, / For in the end we shall find, / That every step we take, in life, we learn.” Journos may balk at Bolina’s advice on how to defend against the rise of ChatGPT: “Think like a poet,” he counsels. “Sure, I’m biased, but consider what the making of a poem can teach us. Diametrically opposed to cliche, poets are trained to invent and reinvent language to arrive at fresh expressions of our angst, joy, anguish and wonder.” A word of caution here: attempting to be a Keats or a Donne in the newsroom can result in other forms of man-made disaster, so you don’t need to take Bolina’s advice to heart; just take his underlying message: that “the poet’s first job is to keep language from stagnating or, worse, from boring us to death”.