Social Media’s Mistakes: Can AI Avoid Them?

Natalio Villanueva

Social media’s unchecked rise over the past decade should serve as a cautionary tale as artificial intelligence (AI) technologies go mainstream. The same fundamental attributes that allowed social platforms to cause societal harm – prioritizing advertising, enabling surveillance capitalism, amplifying viral content, entrenching user lock-in, and permitting monopolization – are already manifesting in the AI sector.

The risks are considerable if these dynamics play out similarly with AI. Pervasive advertising could lead to manipulative AI-powered marketing and monetization of private conversations. Personalization for targeted ads may usher in ubiquitous surveillance far beyond current data harvesting. AI capabilities could supercharge the spread of misleading viral content and disinformation. User reliance on AI assistants that become entrenched in daily life may create insurmountable lock-in costs. And network effects combined with a lack of data portability could clear the way for a handful of AI monopolists.

Fortunately, we have been forewarned by social media’s negative impact on mental health, political discourse, privacy, and competition. Applying those lessons, governments should act decisively to regulate AI development from the outset. Restrictions on unethical deployment, transparency mandates, and oversight regimes with real accountability can uphold public interests. Crucially, robust antitrust enforcement and data portability standards can promote competition and forestall monopolies.

Additional measures could include limiting AI providers’ ability to serve manipulative ads, much as existing regulations restrict tobacco advertising. Another option is publicly funded, open-source AI tools developed transparently under democratic governance, free of profit motives that distort design incentives. While the optimal policy mixture is debatable, some combination of these approaches is vital to avoid repeating social media’s mistakes on a broader scale.

The biggest mistake was allowing social media to remain an unregulated free-for-all despite mounting evidence of harm. Though the U.S. has yet to take substantive action as the 2024 election looms and crises persist worldwide, it is not too late for a course correction on AI, given its still-nascent consumer adoption. AI’s potential upsides are tremendous, but so are the risks if it is allowed to evolve ungoverned, as social media was.

Applying the hard-learned lessons from social media’s damaging trajectory, we must proactively regulate AI development now. Technology is not inherently good or evil; it is a powerful tool that will be shaped by the incentives and rules we define for its creators and purveyors. We still have a narrow window to get those incentives and regulations right before advanced AI applications become ubiquitous and entrenched. Willfully abdicating that responsibility would prove disastrous.

This perspective comes from Nathan E. Sanders and Bruce Schneier’s thorough analysis for WIRED, “Let’s Not Make the Same Mistakes With AI That We Made With Social Media.” Published on March 13, 2024, their piece provides an in-depth examination of the parallels between AI’s emerging dynamics, the harmful attributes that plagued social media, and a range of policy remedies to get ahead of potential AI pitfalls.
