Meta Is All-In on AI: It’s a Shame It’s Unethical
Meta held its annual Connect developer conference this week and announced a slew of new tools utilizing artificial intelligence. This includes the launch of its AI Studio, a platform that lets businesses build AI chatbots for Facebook, Instagram, WhatsApp, and Messenger.
It also released its own AI chatbot characters modeled on various celebrities, including Snoop Dogg, Tom Brady, Naomi Osaka, LaurDIY, Chris Paul, Paris Hilton, Kendall Jenner, and MrBeast, alongside its generic chatbot, Meta AI.
The chatbots are powered by Meta’s Llama 2 large language model, built with Microsoft as its preferred partner. However, unlike OpenAI’s GPT-based LLMs, Llama 2 is free for anyone to access.
And it doesn’t stop there–Meta also unveiled AI-powered photo-editing features for Instagram, including Restyle and Backdrop. The former applies an artistic style to an image, while the latter changes an image’s background. There are also AI stickers powered by Llama 2 and Emu, Meta’s proprietary image-generation model.
These AI-powered features will soon be rolling out to all of Meta’s platforms, including its Quest 3 virtual reality headset and Ray-Ban Meta smart glasses.
While this may add some fun toys to play with across Meta’s platforms, critics question the ethics surrounding its models. The company has an abhorrent generative AI policy, and its opt-out form is buried deep in its website. Making matters worse, anybody who filed an opt-out request when the form launched in August received an email nearly a month later stating, essentially, “prove it.”
With Meta proving how impossible it is to avoid unjust data practices in AI, let’s explore how this international enterprise conglomerate became one of the most exploitative data collectors on the internet.
TL;DR
- Facebook is the largest social media platform on the internet, with over three billion monthly active users.
- Meta is quickly becoming a leader in AI, rolling out a variety of language and image models based on both proprietary and third-party training data.
- The company is integrating AI into all of its products, from Facebook and Instagram to its Quest 3 VR headset and Ray-Ban smart glasses.
- Its AI models are reportedly trained on public Facebook and Instagram posts, at minimum, though the company says it does not train on DMs.
- Meta received backlash for its opt-out policy, which thus far refuses to allow anybody to actually opt out of anything at all.
- Meta has partnerships with a roster of celebrity influencers, unsettling users who aren’t comfortable with AI’s ability to mimic real people.
- Despite the privacy and data exploitation concerns, Facebook is the largest social media network on the internet, and many users are flocking to Meta’s Threads platform to get away from the dumpster fire of Elon Musk’s X/Twitter.
Background
About Meta
Meta Platforms, Inc., formerly known as Facebook, Inc., is a U.S.-based multinational technology conglomerate headquartered in Menlo Park, California. It owns a variety of social media platforms and services like Facebook, Instagram, Threads, and WhatsApp. Despite its diversification into other areas, including hardware such as Oculus VR, advertising remains the primary revenue generator for the company, making up 97.5% of its total income in 2022. Meta is considered one of the “Big Five” tech companies in the U.S., alongside Alphabet, Amazon, Apple, and Microsoft.
In October 2021, the company underwent a significant rebranding, changing its name to Meta Platforms, Inc. This rebranding aimed to align the company more closely with its vision of building the “metaverse,” an integrated digital environment that links all of Meta’s various products and services. This shift signaled Meta’s long-term strategic focus, moving beyond being just a social media company to becoming a leader in the emergent field of virtual and augmented reality.
Despite its ambitions, Meta faces challenges ranging from regulatory scrutiny to competition from platforms like TikTok. In early 2022, the company reported a greater-than-expected decline in profits and zero growth in monthly users. It also dealt with consequences resulting from Apple’s privacy policies, expecting to lose around $10 billion in advertising revenue. With increased competition and recent missteps, including mass layoffs and declining revenues, Meta is navigating a tumultuous period as it aims to transition from a social media giant to a metaverse pioneer.
Getting Versed in Meta
In February 2004, a group of Harvard students set out to build a social network. Mark Zuckerberg, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes created The Facebook.
Preceded by Facemash, a hot-or-not site where mostly male students rated their mostly female peers, Facebook was only available to Harvard students at first. It soon expanded to other college campuses while also allowing military personnel with a valid government-issued email address to sign up.
Co-founder Zuckerberg faced early legal challenges, including a suit from former collaborators Cameron and Tyler Winklevoss, who claimed he stole their concept; the lawsuit eventually settled in 2008. Despite these obstacles, the platform grew rapidly by opening up to other educational institutions and eventually to the general public. Investments flowed in, including a $500,000 injection from PayPal co-founder Peter Thiel and later a $12.7 million investment from Accel. Amid this success, Zuckerberg dropped out of Harvard to focus on the burgeoning company, which officially dropped the “the” and changed its name to simply “Facebook.”
Facebook continued to grow, reaching 12 million users by December 2006 and launching features like the Marketplace and Application Developer platform. Its initial public offering in May 2012 raised $16 billion but was marred by allegations of impropriety and technical glitches, leading to legal challenges. Nonetheless, Facebook moved forward, making significant acquisitions like Instagram for $1 billion and WhatsApp for $19 billion. It also launched Facebook Messenger as a standalone app and entered the virtual reality space with the $2 billion purchase of Oculus VR.
As Facebook matured, it faced growing challenges, such as a proliferation of fake news and hate speech on the platform. Efforts to combat these issues included a feature allowing users to flag false stories and updates to guidelines on hate speech. However, this did little to stop it, as the platform remains one of the biggest and fastest spreaders of disinformation.
Despite the controversies, Facebook’s market capitalization continued to grow, reaching $200 billion in Summer 2014. The company also expanded its feature set, launching Facebook Live, 360 video, and Instant Articles (which it retired in 2022), among others. By 2015, half of the world’s internet users were on Facebook, and with over three billion MAUs today, the platform reaches more than a third of the human population of 8.1 billion.
Meta Controversies
Meta has faced a series of controversies over the years, including workforce layoffs, privacy violations, and ethical challenges.
One of the most notable scandals was the 2018 Cambridge Analytica incident, where data from up to 87 million users was improperly harvested for political purposes. This incident led to Zuckerberg testifying in front of Congress and increased calls for tighter regulations on data privacy.
Financially, the company experienced significant market value losses, including a historic $119 billion drop in a single day following its Q2 2018 earnings report, although it did manage to show some areas of revenue and user growth.
Facebook’s reputation suffered further from other privacy-related issues. The company’s Onavo VPN service was revealed to collect user data, raising further concerns about Facebook’s data practices. This spyware VPN (hilariously dubbed Onavo Protect) was purchased for $200 million in 2013 and shut down in 2019 amid fines from regulators around the world.
There were also vulnerabilities that exposed over 533 million users’ data (for which it was fined $276 million), issues with third-party companies having undue access to private messages, unprotected servers exposing even more user data from both Facebook and Instagram, and various scandals related to improper collection and sharing of user data.
Despite these setbacks, the social media giant tried to reassure the public by suspending problematic apps and making efforts to improve security and privacy, including the deletion of 3.2 billion fake accounts in 2019 and 1.3 billion fake accounts in 2021.
Facebook attempted to diversify its offerings by launching new products and services. These included the Portal video communication devices, which faced their own privacy criticisms, and the Facebook Watch on-demand video service. Despite these releases, the platform struggled to maintain its appeal among younger users, facing stiff competition from other social media outlets like Instagram, Snapchat, and YouTube. This caused the company to shutter Facebook Watch in 2022 and cancel its original programming.
The company laid off 11,000 employees in 2022 in the aftermath of the COVID-19 pandemic and 10,000 more in March 2023, while also accumulating massive fines, including a record $1.3 billion for violating EU privacy laws and an even more astounding $5 billion from the US Federal Trade Commission. Legal challenges are also mounting against Meta for the alleged negative impact of its platforms on children’s mental health, thanks largely to the information provided by Facebook whistleblower Frances Haugen.
The company’s commitments to other sectors like journalism have also come under scrutiny over the years. Meta pledged $300 million to journalism but reportedly disbursed only a fraction of that amount to local news providers before shuttering the program entirely. It has also been criticized for its heavy lobbying against U.S. antitrust, casting further doubt on its ethical standing.
Meta prioritizes profits over ethical considerations and user safety, but the significant financial penalties and legal issues it accrued are seen as insufficient to cover the harm it has caused, raising questions about the company’s commitment to responsible corporate behavior. These questions only become more pronounced in the age of AI.
Jumping Trends from Metaverse to AI
In 2023, Meta made the audacious move of releasing its AI large language model, Llama 2, to the public. This decision challenged the guarded approach of industry giants like Google and OpenAI. With Llama 2’s capabilities being compared to ChatGPT, the release opened the door for more companies to develop their own customized chatbots. While Meta and its collaborators laud the move as a win for innovation and safety, skeptics point out the ethical and security risks associated with such openness, along with the lack of transparency about training data.
The move “democratizes” AI development, allowing smaller players to contribute and even outperform the original models. This collective intelligence is marketed as producing more robust and less biased AI systems. However, it raises concerns about a technological “monoculture,” where a single security vulnerability could have widespread repercussions.
High-profile influencers caution against a hypothetical doomsday where AI could inflict catastrophic harm, but these doomers are little more than conspiracy theorists jockeying for attention and misleading regulators and the general public about the real risks associated with using AI.
The real risks of AI have nothing to do with a mythical fantasy AGI that will magically turn our GPUs sentient. In reality, AI poses concrete risks: perpetuating harmful stereotypes, falsely attributing words or tags to vulnerable populations, homogenizing culture into a westernized facsimile, and more. Racial, ethnic, religious, gender, and age biases run rampant in AI models, and it’s unclear exactly what Meta is doing to combat them.
Making matters worse, the models are trained on sketchy data, and it’s unclear just how ethical (or not) they are.
Llama 2, for example, is trained on two trillion tokens, with human fine-tuning adding over one million annotations. Code Llama adds over 500 billion tokens of code in programming languages like Python, C++, C#, Java, PHP, and JavaScript. Like other unethical LLMs, it claims to be trained on “publicly available online data sources,” meaning that if it’s published online, they will train on it. This is in stark contrast to “public domain” data, which is actually meant for the public to use in this manner.
Similarly, Meta’s Emu whitepaper declines to say where its training data comes from, mentioning only that the model was pre-trained on 1.1 billion text-image pairs and fine-tuned on thousands of high-quality images.
Meta’s generative AI TOS sheds some light on the situation, explaining that the company trains on all content uploaded to any Meta platform, though it states it does not train on private messages. The opt-out form only covers third-party data collection, meaning data Meta bought from outside vendors. You cannot opt out of having your own Meta data trained on, and even if you opt out of third-party training, you’re still stonewalled.
Whenever anybody attempts to opt out of third-party AI training, Meta automatically responds with a rejection message.
This inability to control our data has many privacy advocates heated. Meta has already proved throughout its 20-year history that it’s willing to bend any rule to profit off exploiting user data. And in the age of AI, it clearly intends to continue on this path.
It puts us all in a precarious position–Meta legally has data on over three billion people. Approximately two billion people are under the age of 15, and Meta’s apps carry a 13-plus age restriction. This means the platform legally has data on about half of the world’s qualifying population, without even counting Instagram, WhatsApp, and Threads. And there’s a good chance it also bought data on the remaining half, as leveraging data for advertising is its core business.
We’re heading for a bleak future in which Meta can easily exploit the data and hard work of everybody, including creatives. That makes Meta one of the Big Five tech companies to protest and boycott, though it may not make a difference.
The only options for social media outside of the walled gardens held by tech giants are open platforms like Bluesky and Mastodon, neither of which provides any privacy protections. The only thing worse than Meta exploiting our data is everyone (including Meta) having access to do so.
Moving Forward with Meta
Meta is one of the largest and most successful companies in the world, dominating the social media market and leveraging our social data to generate massive advertising profits.
However, Meta’s advancements in AI have been met with skepticism regarding their ethical implications. Criticisms range from a poorly implemented opt-out policy to a lack of transparency about data collection and usage. These concerns add to the company’s already controversial reputation as one of the internet’s most exploitative data collectors, raising questions about responsible AI development and data management practices.
Despite the company’s widespread influence and rapid technological advancements, concerns about data privacy, ethics, and potential monopolistic practices continue to cast a shadow over its efforts. Meta’s approach to AI contrasts with other industry players in its openness, but this has not alleviated the overarching ethical dilemmas. As AI continues to be embedded into daily life and Meta’s diverse range of services, the need for responsible practices becomes increasingly urgent.
About The Author
Chief Luddite
Luddite in Chief at Luddite Pro - Keeping up with technology so you don't have to.