2023: The Year in AI Controversies

Artificial intelligence was the buzziest industry of 2023, and it’s hard to believe so much happened in one year. While AI bros laud the rapid pace of development, the evidence tells a much different story: only 3.8% of companies use AI, according to the US Census Bureau.

There’s a reason so many companies are hesitant to implement this technology–early adopters faced a variety of problems throughout the year that took a major toll on their businesses.

Here’s a chronological look back at the AI controversies that dominated the headlines over the past 12 months.

January

CNET AI Debacle – CNET was the first of many news outlets to experiment with generative AI, but its owner Red Ventures paused the initiative after readers noticed it was publishing misinformation. An internal review found that over half of its AI-written stories contained factual errors.

New York Schools Ban AI – As ChatGPT grew in popularity, students began using it to cheat. It wasn’t long before school districts, like New York City’s Education Department, banned its usage in the classroom. The AI ban would be lifted in May 2023.

4Chan Celebrity Hate Speech – ElevenLabs rose in popularity for its generative AI voice technology as 4chan users began posting deepfake voice clips of celebrities like Emma Watson and Joe Rogan saying racist, transphobic, and violent things.

ChatGPT Authors Banned – Major publishers around the globe began banning work authored by ChatGPT and other LLMs as AI-generated submissions flooded their inboxes. The bans included leading scientific journals and even the popular sci-fi magazine Clarkesworld.

Getty Sues Stability AI – Stock photo giant Getty Images filed a copyright infringement lawsuit against Stability AI, seeking damages of up to $1.8 trillion for training its AI image generator Stable Diffusion on Getty images without paying for the proper licenses.

Artists Sue Midjourney – The first class action copyright infringement lawsuit against AI companies this year came from a trio of artists–Kelly McKernan, Karla Ortiz, and Sarah Andersen–against Stability AI, Midjourney, and DeviantArt.

OpenAI Kenyan Worker Scandal – A bombshell report showed OpenAI outsourced content moderation to Kenyan workers, paying them less than $2 per hour. The moderation work psychologically damaged the workers, who voted in May to unionize alongside moderators for TikTok and Facebook.

February

AI Seinfeld Twitch Ban – Nothing, Forever, an AI-generated show about nothing that parodies the popular sitcom Seinfeld, was banned from Twitch for hate speech after its main character told a series of bigoted jokes on stream.

Google Bard Launch Snafu – Google launched Bard in a major media push to compete with OpenAI’s ChatGPT, but the bot made a critical factual error about the James Webb Space Telescope during the demo. Alphabet’s market value dropped by $100 billion after the error.

ChatGPT Mass Shooting Response – Vanderbilt University came under fire for using an AI chatbot to write a consoling email to students following a mass shooting at Michigan State University. No AI detector was needed to spot the statement “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.” at the bottom of the email.

Men’s Journal AI Embarrassment – Men’s Journal was called out by a health expert on social media for publishing an AI-written article containing at least 18 “inaccuracies and falsehoods.” It would only be the beginning of a controversial year for its parent company Arena Group.

Dumped by AI Girlfriends – AI chatbot company Replika offered users a premium experience with AI companions. Users flocked to the site out of loneliness, naming their AI mates and getting into serious relationships…until the company removed its erotic roleplay features and forced everyone into platonic friendships instead.

Netflix AI Backlash – Netflix faced backlash online as its Japanese subsidiary claimed a labor shortage forced it to use AI-generated background images for the short film Dog and Boy. Critics pointed to the company’s history of paying animators as little as $34 per cut and cried foul, accusing the company of using technology to devalue human labor.

Atrioc Deepfake Twitch Controversy – Popular Twitch streamer Atrioc accidentally revealed on stream that he had been visiting deepfake porn sites featuring several of his female streamer friends, including QTCinderella.

March

ChatGPT Ransomware – As ChatGPT grew in popularity, nefarious use cases arose, including the app being used to build ransomware and other forms of malware. In addition, ChatGPT phishing websites and apps burst onto the scene, luring unsuspecting users into revealing personal and private information.

AI Lawyer Malpractice – DoNotPay Inc positioned itself as offering an AI attorney, even seeking licensed lawyers willing to let its bot act on their behalf. However, it was sued by a Chicago-based law firm for practicing law without a license. The case was dismissed in November, with the judge ruling the plaintiff provided no evidence it was harmed.

Italy’s ChatGPT Ban – ChatGPT was temporarily banned in Italy over privacy concerns. The app was reinstated in late April after OpenAI made changes that satisfied the country’s regulators.

AI Pause Letter – A collection of AI cultists signed an open letter calling for a six-month pause on AI development. The letter became the talk of the AI industry and media, as many prominent names like Elon Musk, Steve Wozniak, and Yoshua Bengio signed it.

Levi’s and Adobe Fake Diversity – Both Levi’s and Adobe faced backlash on Twitter as the companies attempted to use AI to generate diversity that didn’t exist. The AI-generated cultural appropriation amounted to ethics-washing: checking DEI boxes without actually employing people from minority groups.

Microsoft Cuts AI Ethics Team – Microsoft laid off one of its AI ethics teams, drawing criticism amid rising concerns about unethical AI.

Discord Reverses AI Policies – Discord is a popular platform for AI companies, but the release of its AI chatbot Clyde was met with disdain. Users were appalled at the company deleting a clause in its privacy policy stating it wouldn’t store the contents of calls, streams, or channels. The clause was quickly reinstated.

Snapchat ChatGPT Backlash – Snapchat faced a pile-on of bad reviews after introducing its ChatGPT-powered AI chatbot feature called “My AI.” The chatbot was plastered at the top of the screen and couldn’t be removed without paying for a Snapchat Plus subscription.

April

Samsung Bans ChatGPT – Samsung banned its employees from using ChatGPT after proprietary company secrets were uploaded to the app. Other large companies soon followed in its footsteps, fearing sensitive and confidential data could be leaked.

Pharma Bro’s AI Doctor – Fresh off his prison sentence for fraud and a permanent ban from the pharmaceutical industry, Martin Shkreli launched a ChatGPT-based chatbot called Dr Gupta that purports to be an AI medical doctor. He paired it with a similar AI-powered veterinarian and took to Twitter to brag about ignoring HIPAA guidelines while accessing any private data he wants.

Hollywood Writers Strike – A variety of grievances plagued the Writers Guild of America, leading to a historic strike that began in early May, lasted 148 days, and defined the American summer. AI was one of the key points of contention between the union and the studios.

AI Drake and The Weeknd – An AI-generated song began making the rounds on social media featuring machine-imitated voices of Drake and The Weeknd. The song was quickly removed from TikTok, Spotify, and YouTube for copyright infringement but was submitted for a Grammy award anyway.

Photographer Sues LAION – A German stock photographer tried every path possible to get his photos removed from the LAION AI training datasets, but he was unsuccessful. In fact, LAION sent him a bill for $979 for making a frivolous claim, which led the photographer to file a lawsuit in a German court.

Google Employees Deride Bard – Internal messages leaked from Google showed the company’s employees do not believe in the product. Sentiments ranged from calling it incompetent to calling it an outright liar after CEO Sundar Pichai emailed all employees to request they use it.

Fake AI Interview – German magazine Die Aktuelle received backlash after publishing a fake, AI-generated interview with Formula 1 legend Michael Schumacher.

Elon Musk Fake Deepfakes – Elon Musk’s lawyers set a precedent by arguing that real videos of him making false claims about Tesla Autopilot are in fact AI deepfakes, further blurring the lines between reality and AI.

May

AI Eating Disorder Helpline Scandal – The National Eating Disorders Association (NEDA) told its helpline employees at the start of the month that it was replacing them with an AI chatbot. By the end of the month, the chatbot was decommissioned after giving out harmful and potentially deadly advice.

Professor Fails Entire Class – A Texas A&M University–Commerce professor asked ChatGPT if it wrote his students’ papers for them. When it stated that it did, he flunked his entire class, even though ChatGPT was never designed to detect AI writing and took credit for essays it may not have written.

Spotify AI Song Ban – Spotify was forced to remove tens of thousands of AI-generated songs created with the AI music service Boomy after Universal Music Group flagged them for using bots to boost streaming numbers.

Amnesty International Fake Protests – Amnesty International is a human rights advocacy group that typically raises awareness of very real atrocities in the world. In May, it came under fire for using AI-generated images of police brutality in Colombia, something completely unnecessary as the events are very real and well documented.

Bloomsbury AI Cover Controversy – NYT bestselling author Sarah J Maas drew criticism from fans after her publisher Bloomsbury was caught using an AI-generated image on the cover of one of her books.

Photojournalist Cuban AI Scandal – Photojournalist Michael Christopher Brown received backlash for using Midjourney to create a series of images depicting Cuban history and the realities of Cubans trying to cross the ocean between Havana and Florida.

Sudowrite Controversy – AI writing app Sudowrite fell into the center of controversy over questionable training practices after a series of faux pas led users to question where its training data came from. Red flags included soliciting unpublished submissions for an unrelated use case and demonstrating knowledge of a niche sexual act known only to a specific internet community.

June

OpenAI AI Scraping Lawsuit – ChatGPT maker OpenAI was hit with a wide-ranging consumer class action lawsuit over its data scraping policies.

OpenAI Hallucination Lawsuit – OpenAI was also hit with its first defamation lawsuit over “hallucinating” false information about Georgia radio host Mark Walters. The app accused him of embezzlement, something he wasn’t even in a position to do, as he never served as treasurer or CFO of the organization he was accused of embezzling from.

DeSantis and Trump Deepfakes – Ron DeSantis and Donald Trump became the first US presidential candidates to use AI deepfakes during their early campaigns in a heated summer rivalry leading into the Republican primaries.

Cover Contest AI Controversy – The Self-Published Fantasy Blog-Off (SPFBO) cover contest was buried in controversy and forced to shut down after its winning cover was determined to be AI-generated. The cover’s artist wrote a blog post outlining his artistic process that was soon determined to be completely faked. The cover was a collage of Midjourney outputs, and the fake artist soon disappeared off the face of the earth, with some assuming him to be author Matthew Prindle himself.

Disney’s Secret Invasion Backlash – Disney and Marvel received backlash online after using generative AI for Secret Invasion’s opening credits. It was part of a lackluster year for the studio and created friction between artists/animators and one of the largest animation studios in the world.

Emad Mostaque’s False Past – The CEO of Stability AI has a history of fabricating and exaggerating his past, according to a damning report from Forbes. The investigation uncovered a variety of inconsistencies and improprieties in his statements over the years.

Lawyer Sanctioned Using ChatGPT – New York attorneys Steven Schwartz and Peter LoDuca were sanctioned by a judge for using ChatGPT to write a legal brief. The LLM cited six cases that didn’t exist, leading to a $5,000 fine for the attorneys and their firm.

AI Detector Deemed Ineffective – Even AI detectors are a scam: LLM-detection tool Turnitin (which by this point had been used on over 38 million student papers) was found to be unreliable at detecting the use of generative AI.

July

Google AI Scraping Lawsuit – Google was hit with a class action lawsuit over illegal data scraping by a group of eight individuals seeking to represent millions of internet users and copyright owners.

Novelists Sue Meta and OpenAI – A group of novelists, including Sarah Silverman, sued Meta and OpenAI, claiming the companies violated their copyrights by training LLMs on their books, which were never made available in whole on the public internet.

Hollywood Actors Strike – The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) joined the WGA on strike over a variety of payment issues revolving around streaming and the use of generative AI.

WormGPT and FraudGPT Launched – What if there were a version of ChatGPT trained specifically to commit cybercrimes? That’s the question the developers of WormGPT and FraudGPT hoped to answer with their malicious LLMs.

Gizmodo AI-Generated Star Wars – G/O Media, the owner of websites like Gizmodo, Kotaku, The Onion, Jalopnik, and Quartz, started publishing AI-generated content to the chagrin of its readers and staff. Its first article, a Star Wars listicle riddled with errors, stirred a whole new level of backlash.

Buzzfeed’s Racist AI Barbies – Buzzfeed’s embrace of generative AI, announced alongside the closure of its News division and layoffs of 15% of its staff, caused a furor online. Its AI-generated clickbait slop is formulaic and barely palatable, but it reached new lows with racist depictions of Barbies from around the world, including a German Barbie styled as an SS Nazi general and a South Sudan Barbie armed with an assault rifle.

Voice Actress AI Controversy – Voice actress Erica Lindbeck (Persona 5) was forced to delete her Twitter account after being dogpiled by trolls for objecting to an AI cover that used her voice without her consent or compensation.

Sam Altman’s Worldcoin Obsession – Every AI sales pitch only works if we have Universal Basic Income (UBI), and OpenAI CEO Sam Altman wants to provide it through his dystopian crypto project Worldcoin. The eye-scanning orbs launched in July, and the backlash quickly followed–it’s 2023, and everybody already knows crypto is a massive scam, especially Worldcoin, which was raided by authorities in Kenya.

August

Recipes for Death – Businesses keep implementing AI-powered chatbots without much idea of how they work or even what they do. A grocery store’s meal-planning bot gave people recipes ranging from the weird (Oreo vegetable stir fry) to the deadly (a mix that would produce chlorine gas).

Gannett’s AI Sports Reporter – A local Gannett newspaper had to pause its AI-generated sports writing trials after publishing a botched story citing the [Winning Team Mascot]’s triumph over the [Losing Team Mascot].

OpenAI Fortune 500 Fib – The launch of ChatGPT Enterprise came with an official announcement from OpenAI that “80% of Fortune 500 companies use ChatGPT.” Of course, the number is a lie: a footnote at the bottom clarifies that it really means people at those companies used work email domains to sign up for accounts.

Dungeons and Dragons and AI – D&D maker Wizards of the Coast came under fire for releasing a sourcebook, Bigby Presents: Glory of the Giants, that included generative AI images. The backlash caused the publisher to update its policies to prohibit AI art in future releases.

Twitter/X Updates AI Policy – For the second time in as many months, Elon Musk caused a mass exodus from his social media platform after announcing changes to Twitter’s privacy policies that included training AI on user data.

Amazon Fallout Series AI Backlash – Amazon’s Fallout series had fans excited until its promotional poster turned out to be made with generative AI. Fans online were quick to point out a series of telltale errors in the image, including roads to nowhere, cars with two rears and no fronts, and people walking in the middle of the street.

Amazon Selling AI Knockoffs – Author Jane Friedman found multiple books for sale on Amazon bearing her name and similar covers. However, she didn’t write the books–they’re AI-generated fakes. The company removed the books but failed to provide real protections for authors.

Adobe’s Unethical AI – Despite Adobe promoting its AI as ethical, multiple artists accuse Adobe Stock of profiting off their names without permission. Artist names (and protected IP) continue to turn up in image prompts and descriptions on Adobe Stock, despite safeguards meant to block such generations in its Firefly product.

Zoom AI Privacy Policy Sucks – People don’t like the idea of having their Zoom calls recorded to train AI. The company learned this the hard way when a change to its policies created an uproar on social media. It soon had to revert its terms of service to stem the backlash and an exodus of customers.

Major News Publishers Block ChatGPT – As soon as OpenAI published the details of its web crawler, news publishers started blocking the bot from accessing their sites. As of October, 44% of 1,123 news publishers surveyed were actively blocking ChatGPT from training on their articles via robots.txt, as sketched below.
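
For the curious, the blocking mechanism is nothing exotic: robots.txt is a decades-old, honor-system convention in which a site lists the crawlers it wants kept out, and OpenAI documents GPTBot as the user agent to disallow. Here is a minimal sketch in Python (standard library only) of the two-line rule many publishers deployed and how a well-behaved crawler interprets it; the news-site URL is a hypothetical placeholder.

```python
# Minimal sketch of a robots.txt block against OpenAI's GPTBot crawler.
# Standard library only; the domain below is a hypothetical placeholder.
import urllib.robotparser

# The two-line rule many publishers added in 2023:
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is told to stay out of the entire site...
print(parser.can_fetch("GPTBot", "https://example-news-site.com/article"))     # False

# ...while crawlers with no matching rule remain allowed by default.
print(parser.can_fetch("Googlebot", "https://example-news-site.com/article"))  # True
```

Note that robots.txt only stops crawlers that choose to obey it, so the 44% figure measures publisher intent, not enforcement.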

Irrelevant YouTuber Fakes AI – Kwebbelkop hasn’t been relevant in years, but he’s still drawing views and money across social media regardless. He leveraged his platform to promote an AI replacement of himself, although he later had to change his story after social media users pointed out how impossible his “AI” was. Undeterred, the man (real name Jordi Van Den Bussche) continues trying to get fans to sign up for his pyramid scheme with fake AI technology he doesn’t have.

Deadly AI Mushroom Books – AI-generated books are a problem, and a 404 Media report outlined the issue using mushroom foraging books as an example. The AI-generated slop offered foraging advice that could be deadly if followed.

September

“Useless at 42” – Microsoft used an AI text generator to create an obituary for former NBA player Brandon Hunter after his death, calling him “useless at 42.”

Chinese Political Ops – Microsoft researchers said they found a network of Chinese-controlled fake accounts on social media using AI-generated content to sway US voters.

Authors Sue OpenAI – John Grisham, Jodi Picoult, and George R.R. Martin are among the more than a dozen authors who filed a lawsuit accusing OpenAI of systematic plagiarism on a mass scale.

Amazon’s Self-Publishing Disaster – After a series of PR disasters related to AI, Amazon decided to do something about it. It limited self-published authors to three uploads per day–a totally normal and achievable daily output for a human writer.

Spain’s Deepfaked Children – Nonconsensual deepfakes are growing in number, and it’s impacting children around the world. Spain was hit with scandal as high school girls (all minors) reported receiving AI-generated nude images of themselves.

Smash or Pass – A Harvard alum recreated Meta CEO Mark Zuckerberg’s original FaceMash website, asking users to rate the attractiveness of AI-generated women. The project received immediate backlash on social media from everyone outside incel communities.

Closed-Door AI Summit – US Senator Chuck Schumer began hosting private AI Insight Forums to meet with AI leadership and determine the course of AI regulation. Critics fear the closed-door meetings could lead to regulatory capture and allow the AI companies to dodge responsibility for their actions.

Coca-Cola’s AI Soda – Coca-Cola led the charge in generative AI with a sleek commercial earlier in the year. It took things a step further with an AI-generated flavor called Y3000, which was almost universally panned by critics for tasting absolutely awful.

Pastor’s AI-Generated Sermon – Austin, Texas pastor Jay Cooper used ChatGPT to write a sermon and promoted it on social media in an effort to get more people to attend his service. The PR stunt was ultimately a flop, and the pastor admitted the one-time experiment wasn’t worth repeating, as the writing was bland and uninspiring.

October

Microsoft’s Cause of Death – Microsoft was accused of damaging The Guardian’s reputation after AI-generated quizzes appeared alongside its content on MSN. One particularly disturbing poll asked readers to guess the cause of death of a woman featured in a news report.

MrBeast Deepfake Ads – MrBeast and two BBC presenters were among the public figures who discovered AI-generated ads on TikTok using their names and likenesses to push fraudulent schemes. Soon after, Tom Hanks found the same thing happening with his likeness.

UK Political Fake News – UK Labour Party leader Keir Starmer became the first victim of political deepfakes in England. The AI-generated audio was used to spread disinformation and could cause problems for upcoming elections.

Fake Product Reviews – Gannett again found itself in hot water after AI-generated product reviews were discovered on its website Reviewed.

AI Political Robocalls – NYC Mayor Eric Adams sparked controversy by using generative AI voice clones of himself to make robocalls in languages he doesn’t speak.

UMG Sues Anthropic – Universal Music Group and various other music labels filed a lawsuit against Anthropic for training its AI on their song lyrics without consent, compensation, or credit.

ChatGPT Phishing Expedition – Researchers linked ChatGPT’s launch to a 1,265% increase in phishing emails, as the app lets bad actors generate convincing phishing messages far more efficiently than templates.

Fugees Rapper’s AI Conviction – Rapper Prakazrel “Pras” Michel filed a motion for a new trial, claiming his previous lawyer David Kenner used AI to write his closing arguments and failed to provide a cogent defense. Michel faces up to 20 years in prison for the conviction.

Prom Pact Creepy Extras – Disney again came under fire after Twitter users resurfaced clips from its movie Prom Pact featuring AI-generated actors in the background. The scene mostly went unnoticed at release but sparked backlash after being re-examined on social media.

November

Sam Altman Succession – OpenAI is the leader in the AI race, and the tech industry was shocked when its board announced CEO Sam Altman had been fired. This kicked off a dramatic week in which more than 700 of OpenAI’s 770 employees threatened to walk out (and received job offers from Microsoft) before Altman was rehired and the board was reshuffled.

Nonconsensual Classmate Deepfakes – Mirroring the earlier scandal in Spain, Westfield High School in New Jersey made national headlines after male students created nonconsensual deepfake nudes of multiple female classmates.

Big Four False Allegations – Sometimes AI misinformation comes from academia: a group of academics was forced to apologize after falsely accusing the Big Four consulting firms of wrongdoing. They had used ChatGPT and didn’t fact-check its outputs.

Sports Illustrated AI Scandal – Arena Group again landed under fire for publishing nonsensical AI-generated articles under fake author profiles. The official company statement blamed unlabeled sponsored content from a third-party ad publisher, and it wasn’t long before CEO Ross Levinsohn was fired over the snafu.

Bad Bunny Bad AI – Reggaetón sensation Bad Bunny was not impressed by an AI-generated deepfake of his voice released by FlowGPT. The song NostalgIA emulates both the Puerto Rican singer and Spanish performer Bad Gyal, and the artists immediately called for it to be taken down.

Disney’s Creepy Thanksgiving Image – Disney again drew heat from critics for a Thanksgiving image that looked eerie and uncanny. The image was adapted from a Norman Rockwell painting, and social media users were quick to point out inconsistencies in the new version, which transformed the painting into a lifeless 3D-generated mess.

Gaza War Deepfakes – AI-generated misinformation is a problem being amplified by the Israel-Hamas war in Gaza. The war is filled with enough real-life tragedies and atrocities, but social media made things worse with AI-generated images of dead babies, dead bodies, and more. Even Adobe Stock got in on the action by selling AI-generated images of the war in both Israel and Palestine. The waters were further muddied by deepfakes of celebrities like Bella Hadid making statements about the conflict.

Conference with AI-Generated Women – The DevTernity software development conference was cancelled amid controversy after its founder was caught creating AI-generated women as speakers. Making matters worse, further investigation found he had created a fake AI Instagram influencer too. At a certain point, it’s almost easier to just hire a real woman.

CivitAI Deepfake Scandal – CivitAI drew criticism over its bounty program, which encourages (and incentivizes) nonconsensual deepfakes. Although most bounties target celebrities (mostly women), at least one non-public person was found among them. Even worse, the a16z-funded platform was found to be used to generate large volumes of child sexual abuse material (CSAM).

December

Google Fakes Gemini Launch – Google showed off its Gemini AI in a December launch demo video with some impressive features. The problem: users quickly discovered the demo was staged, and the model can’t do anything the video showed in real time the way it was presented.

Grok Copies ChatGPT – Elon Musk’s xAI released its Grok chatbot late in the year, and it seems to be repeating lines from its rival ChatGPT. When asked by a user to do something nefarious, it responded that it couldn’t because doing so would violate OpenAI’s policies.

NYT Ignores Women – There are a lot of influential women in AI, but the New York Times couldn’t think of a single one when creating its “Who’s Who” list of modern artificial intelligence, instead celebrating men like Elon Musk and “internet philosopher” Eliezer Yudkowsky.

Another Lawyer Uses ChatGPT – Michael Cohen, Donald Trump’s former attorney, had a lawyer of his own apparently use ChatGPT or a similar LLM to draft a legal brief citing nonexistent cases. The judge ordered the attorney, David M Schwartz, to either prove the cases exist or explain why he should not be sanctioned for using AI.

Ongoing AI Scandals

AI Undress Apps – AI undress and deepfake apps are rising in popularity and continue to gain traffic as time goes on. They overwhelmingly target women and are largely disregarded by men. Regulation is needed quickly, as this trend will not die down on its own.

AI False Arrests – Like nonconsensual porn, false arrests based on AI have existed for years and continued into the year of AI. They mostly impact minorities, with Black people overwhelmingly on the short end of the stick.

Deepfake Voice Scams – Deepfake voice scams are getting more sophisticated, as criminals need only 30 seconds of audio to create a convincing fake. Common scams include fake kidnappings and more in what’s being dubbed “vishing” (voice phishing).

AI Gender/Race Bias – AI has well-documented gender and race biases that continue to surface in both image and text generators. Countering these biases takes sustained effort, yet AI ethics teams are being disbanded across Silicon Valley.

Submissions Closed – Despite publications around the world banning AI-generated submissions, garbage-peddling grifters who don’t care about the rules keep submitting them anyway. This has forced many outlets to close submissions altogether, leaving an overall tougher marketplace for creatives.

AI Jobs Displacement – AI is an inanimate software tool, but companies and executives still insist on replacing human workers with it, despite the many documented dangers above.

AI See Dead People – From AI companies using the likenesses of dead public figures like Robin Williams and Vincent van Gogh to services offering digital avatars of deceased loved ones, it’s clear AI technology will keep disrespecting the dead whether we like it or not.