News AI: Outlets Continue Relying on Generative AI

News AI is becoming increasingly common, and it’s a plague on our society. Generative artificial intelligence can’t tell fact from fiction, and its use in news reporting keeps getting media outlets in trouble. But that hasn’t stopped mainstream media from utilizing it.

Most recently, Gannett paused its AI-generated news initiative after it published a hilariously bad story referring to two teams in a high school soccer match as “The Worthington Christian [[WINNING_TEAM_MASCOT]]” and “the Westerville North [[LOSING_TEAM_MASCOT]],” showing just how unreliable these systems are when left unmonitored.

News editors should know better, yet news AI continues to spread across outlets like these, even as many other publishers openly refuse to allow AI training on their content by blocking OpenAI’s GPTBot crawler via their robots.txt files.
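Opting out is technically trivial. OpenAI’s crawler identifies itself with the user-agent token GPTBot and, according to OpenAI, honors robots.txt directives, so refusing it site-wide takes just two lines in the file at a site’s root:

    User-agent: GPTBot
    Disallow: /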

Let’s explore how ChatGPT and other large language models are being utilized in the news and what that means for the future of the media.

TL;DR

  • Media outlets like Gannett, Buzzfeed, CNET, Men’s Journal, and the Associated Press use LLMs to produce news stories.
  • The Associated Press, the American Journalism Project, and other organizations that support local news have inked deals with OpenAI to help train its LLMs.
  • 70 of the world’s top 1,000 websites have blocked OpenAI’s GPTBot crawler, including the New York Times, NY Mag, Us Magazine, Reuters, Insider, Bloomberg, PC Magazine, The Verge, Polygon, Vox, Chicago Tribune, Thrillist, The Globe and Mail, and Axios (a quick way to spot-check any single site is sketched just after this list).
  • This comes at a time when the media is losing a record number of journalists, estimated at 17,436 jobs lost in the first half of 2023.
  • An estimated 57% of newspaper newsroom jobs were lost from 2008 to 2020, according to Pew Research.
  • Only 34% of Americans said they trusted the mass media in an October 2022 Gallup poll. That number is likely to drop further in the age of AI-generated news.
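For anyone who wants to spot-check a site on that list, here is a minimal sketch in Python using the standard library’s robots.txt parser. The handful of sites named below are examples pulled from this article, not the methodology behind the 70-of-1,000 figure, and the check only reflects what each robots.txt declares:

    # Check whether a site's robots.txt declares a block on OpenAI's GPTBot.
    from urllib import robotparser

    SITES = [
        "https://www.nytimes.com",
        "https://www.reuters.com",
        "https://www.theverge.com",
    ]

    for site in SITES:
        parser = robotparser.RobotFileParser()
        parser.set_url(site + "/robots.txt")
        try:
            parser.read()  # fetch and parse the live robots.txt
        except OSError as err:
            print(f"{site}: could not fetch robots.txt ({err})")
            continue
        allowed = parser.can_fetch("GPTBot", site + "/")
        print(f"{site}: GPTBot {'allowed' if allowed else 'blocked'}")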

Background

About OpenAI

OpenAI is a U.S.-based AI enterprise disguised as a research lab, founded in 2015 with a focus on developing “safe and beneficial” artificial general intelligence (AGI). Initially a non-profit, OpenAI transitioned to a “capped” for-profit model in 2019 to attract investments and top talent. Microsoft has been a key investor, infusing $1 billion in 2019 and bringing the total to $13 billion in 2023.

OpenAI introduced generative AI products like GPT-3 and DALL-E to the market, and has commercial partnerships with Microsoft. Its mission, governance, and for-profit transition have been subjects of public discussion. As of 2023, OpenAI is valued at $29 billion and has made strategic acquisitions like the New York-based start-up Global Illumination. It is on track to earn $1 billion in revenue in 2023.

The company faces criticism for outsourcing the annotation of toxic content to Sama, a company that paid its Kenyan workers between $1.32 and $2.00 per hour post-tax, while OpenAI paid Sama $12.50 per hour. The work allegedly left some employees mentally scarred.

OpenAI has also been criticized for a lack of transparency around the technical details of products like ChatGPT, DALL-E 2, and GPT-4, which hampers independent research and runs against its initial commitment to openness. Legal issues have emerged as well, including copyright infringement lawsuits from authors, potential action from The New York Times, and complaints that ChatGPT violates the EU’s General Data Protection Regulation (GDPR). In response, the European Data Protection Board (EDPB) launched a dedicated ChatGPT task force in April 2023 to improve oversight.

Examining News AI

Since its launch, OpenAI’s ChatGPT has been promoted as a miracle app that can be anything: a lawyer, doctor, engineer, artist, teacher, student, or journalist. Of course, this is typically marketing speak sold to you by engineers who don’t have the first clue what any of those professionals actually do.

Journalists are held to strict ethical standards that AI news writers simply haven’t been programmed to follow. Although these rules vary by outlet, there are some common themes.

The Society of Professional Journalists (SPJ) outlines its Code of Ethics under four foundational principles:

Seek Truth and Report It

This principle emphasizes the importance of accuracy and fairness in journalism. Journalists should verify information before releasing it and provide context to avoid misrepresentation. They should also be courageous in holding the powerful accountable and give voice to the marginalized.

Minimize Harm

Ethical journalism entails treating all involved—sources, subjects, and the public—with respect. Journalists should weigh the public’s need for information against the potential harm or discomfort their reporting may cause. Special considerations should be made for vulnerable individuals.

Act Independently

Serving the public is the highest obligation in journalism, according to SPJ. Journalists should avoid conflicts of interest, refuse gifts or favors, and maintain a clear boundary between news content and advertising.

Be Accountable and Transparent

Journalists should take responsibility for their work, correct mistakes promptly, and be open about their decision-making processes.

By adhering to these principles, journalists aim to maintain the integrity of their profession and foster a trustworthy relationship with the public. However, it’s clear that these principles aren’t being followed by outlets taking heat for using generative AI in reporting news.

Buzzfeed News AI Debacle

Buzzfeed began using generative AI in its stories in January 2023. Within months of the articles showing up online, it shut down its news division, Buzzfeed News, and laid off 15% of its staff. Meanwhile, the site is being filled with a never-ending barrage of user-submitted listicles featuring AI-generated images under clickbait titles like “AI Photos of Barbie Dolls from Every State.”

A controversial version featuring AI-generated Barbie dolls from various countries created a wave of backlash this summer and was subsequently taken down. It featured a variety of harmful racist stereotypes, including a machine gun-wielding militant Barbie from South Sudan wearing some very colorful clothing for a mercenary.

None of this is surprising, as both Buzzfeed and its subsidiary HuffPost grew on the back of unpaid contributors. The push into clickbait AI is clearly meant to draw monetizable traffic, regardless of whether the content is garbage.

Gun-wielding South Sudan Barbie by Buzzfeed and Midjourney

CNET News AI Disaster

Tech news outlet CNET faced backlash from its human editorial staff after it began publishing AI-generated articles in late 2022 that turned out to contain numerous errors. The staff unionized, forming the CNET Media Workers Union, which calls for better working conditions and increased transparency around AI use in content creation. This organizing effort, although initiated before the AI rollout, aims to set industry standards for the responsible use of AI in journalism.

The union’s demands gained urgency after AI-authored articles by CNET were found to be riddled with errors and even instances of plagiarism, prompting a reevaluation of the company’s AI strategy.

CNET’s AI debacle led to significant editorial changes and had a ripple effect on the tech journalism industry. The controversy highlights the ethical and practical challenges that arise when incorporating AI into sectors reliant on human expertise.

Amid job cuts and changes in editorial direction focused on monetization, the CNET union aims to safeguard the outlet’s journalistic legacy and set an example for responsible AI use in journalism. It remains to be seen how their actions will influence the wider media landscape and the evolving role of AI in content creation.

VentureBeat’s News AI Experiment

VentureBeat started using AI technology to assist in editing and writing articles, specifically utilizing Microsoft’s Bing Chat. Unlike CNET, VentureBeat took a more cautious approach. Editorial director Michael Nuñez describes the technology as “another person on the team” and encourages reporters to include AI-written sentences only if they are accurate and can be independently verified. The publication does not disclose the use of AI in its content as long as that use is limited and authentic.

VentureBeat’s approach reflects the industry’s growing awareness of the potential pitfalls and ethical dilemmas associated with AI-generated content. While AI is not expected to fully replace human writers, there are concerns that its increased use could reduce the need for human editorial staff. VentureBeat claims to integrate AI responsibly, striking a balance between technological efficiency and maintaining journalistic integrity.

The publication has thus far not published any AI-generated mistakes of note.

Men’s Journal AI News Gaffe

Arena Group, the publisher of Sports Illustrated and Men’s Journal, ventured into AI-generated content for its publications in February 2023. CEO Ross Levinsohn assured readers that the AI’s role was to enhance quality, not just to produce more content.

However, the first AI-written article in Men’s Journal about low testosterone raised serious concerns. A medical expert, Bradley Anawalt, found the article to be filled with factual errors and mischaracterizations, putting the reliability of AI-generated medical advice into question.

Despite the setbacks, Arena Group is proceeding cautiously, making corrections to the criticized article and stating that its AI experiments are a “work in progress.”

Gizmodo’s Problematic News AI

Gizmodo’s parent company, G/O Media, fired the staff of its Spanish-language site Gizmodo en Español in September 2023, replacing them with AI translations of English articles. The move caused issues such as articles switching languages halfway through, and it comes amid broader concerns about factual inaccuracies in AI-generated content.

This decision is part of a growing trend among media companies to use AI as a cost-cutting measure, but it’s been criticized for ethical reasons and for lowering content quality. The move is seen as a significant loss for Spanish-speaking audiences seeking original, nuanced reporting.

G/O Media also owns a number of other high-profile sites, including Kotaku, Jalopnik, Deadspin, The Onion, Quartz, Jezebel, The Root, and The A.V. Club. The move to AI came after some workers lost their jobs for refusing to relocate to Los Angeles when the company shut down its Chicago office.

Is the Future of Media in Danger?

The increasing reliance on AI in news generation presents a dire picture for the future of responsible journalism. The string of missteps—ranging from laughable errors to damaging inaccuracies—reveals the inadequacy of AI in adhering to journalistic ethics and standards. AI’s inability to discern fact from fiction, contextualize information, and act independently poses not just a threat to the quality of news but also to the trust the public places in media outlets.

It’s not stopping outlets from pushing forward with news AI, though: on September 15, 2023, the New York Times posted a six-figure job opening for a Newsroom Generative AI Lead.

The drive to cut costs and boost content generation speed may be financially appealing for media companies in the short term, but it risks further eroding public trust in journalism—a profession already plagued by skepticism and dwindling credibility. It is imperative that media outlets institute rigorous guidelines and controls over how AI is utilized in news generation, including full transparency about the use of AI in reporting and ongoing human oversight to ensure accuracy and ethical compliance.

This is not merely an issue for journalists or media companies to grapple with; it’s a societal concern. When journalism fails, democracy suffers. The guardrails for AI in journalism need to be erected not just to protect the sanctity of the profession but to preserve the essential role that a free press plays in a democratic society.