Incentives First: A Critique of Current AI Governance Models

Last December, the Federal Trade Commission settled with Rite Aid after alleging that the company had failed to take the most basic precautions against algorithmic bias in the facial recognition technology it used for threat detection. The Commission stated that an 11-year-old girl was falsely stopped and searched upon entering a Rite Aid: the system had flagged her against an image of “a white lady with blonde hair,” even though the girl was Black. The FTC banned Rite Aid from using facial recognition surveillance for five years and mandated that it delete all biometric data used to train the technology.

The Commission noted that its response to Rite Aid is only a baseline for addressing algorithmic bias, a phenomenon in which AI systems discriminate against minorities because their training data is biased against these communities. Prominent harms arising from algorithmic bias include the disproportionate policing of Black and Latinx communities and discrimination against low-income borrowers in credit-underwriting software.

The Rite Aid case demonstrates that even when companies claim to govern their AI applications ethically, they are not necessarily instituting the safety measures such claims imply.

To address these issues, the United Nations released a long-awaited interim report titled Governing AI for Humanity this past December. By laying out ethical principles and potential regulatory mechanisms, the report is poised to set the precedent for AI regulatory frameworks globally. Yet its proposals exemplify a larger problem in AI regulation.

By holding up multi-stakeholder governance models (MSGMs) as robust accountability mechanisms, current solution frameworks let companies and governments cement a disturbing amount of power on the assumption that they are striving to uphold the UN’s human rights frameworks. In reality, companies and governments are driven by incentives that compete with public safety initiatives, threatening to exacerbate socioeconomic inequalities.

By reshaping the underlying incentives behind big AI companies and national governments, we can develop AI governance models that establish stronger regulatory norms and ultimately mitigate human rights violations more effectively.

A Closer Look at Multi-Stakeholder Governance Models (MSGMs)

In its AI Interim Report, the UN underscores MSGMs as a critical mechanism for upholding the UN’s human rights framework and international law. By convening a wide range of parties to deliberate on the specific harms AI innovation poses to human rights, MSGMs are intended to tackle the most pressing issues surrounding AI.

These regulatory approaches are designed to hold the most prominent AI developers accountable: big tech companies.

In an interview with The Harvard Political Review, Director Jayant Narayan of the World Economic Forum, an international non-governmental organization that coordinates multi-stakeholder initiatives, reflected on his experience leading various AI and climate technology initiatives. He noted that those who contribute most toward harm, whether a nation or a big tech company, should be held the most accountable, in climate technology and AI alike: “Historically, the US and EU have been responsible for most of our emissions, which means that their responsibilities are different from India and China,” Director Narayan said. “Similarly, big tech has a disproportionate responsibility in this case because they control the market.”

While MSGMs succeed in bringing prominent voices to the table, they conflate participation with accountability and thus cannot ensure that stakeholders actually adhere to ethical guidelines.

Addressing the Disconnect Among Governments, Companies, and the Public Good

To hold companies accountable, the UN’s AI Interim Report states that any viable governance framework shouldn’t rely on the “benevolence of a handful of technology companies” but rather must “shape incentives globally to promote these larger and more inclusive objectives.” Given the immense profits that businesses can attain from the rapidly growing AI market, MSGMs are positioned as a key mechanism for creating binding norms “to ensure that public interests, rather than private interests, prevail.” 

Similarly, the UN finds it problematic to let countries self-regulate AI, for fear of fragmented regulatory policies that fail to prioritize human rights frameworks. This fragmentation is partly a product of the global AI arms race, as AI innovation translates into geopolitical power, most disturbingly evident in AI’s direct application to warfare.

During the war in Ukraine, the U.S. coordinated with companies such as Palantir Technologies and Planet Labs to gather geospatial intelligence from satellite imagery, mapping the movement of Russian troops and identifying war casualties. The U.S.’s leadership in AI ultimately proved a strategic necessity in navigating the conflict.

Countries are also using AI to bolster their national agendas, as demonstrated by the diverging regulatory frameworks of China, the U.S., and the EU. China heavily regulates its companies to ensure that AI applications promote the ideology of the Chinese Communist Party. For instance, the government had installed an estimated 626 million surveillance cameras supporting facial-recognition technology by 2020, an AI application that has received immense backlash in the U.S.

With the looming fear of Chinese competitors like Baidu and Alibaba making headway, the U.S. has deemed Chinese access to critical semiconductor technology a national security issue while adopting a laissez-faire approach to spur private sector growth in AI innovation. 

The U.S.’s fairly relaxed AI regulation contrasts starkly with that of the EU. Not only have the EU’s data privacy laws served as an international model, but its recently agreed AI Act has been hailed as a robust, comprehensive regulatory guide for AI. The proposed legislation requires varying levels of transparency from AI companies depending on the risks posed by an application, ranging from photo filters (low-risk) to hiring systems (high-risk). In spite of these advances, the EU AI Act perpetuates human rights harms that are often overlooked.

While the UN believes that MSGMs are the best solution to ensure “that geopolitical competition does not drive irresponsible AI or inhibit responsible governance,” we must strive for more robust regulatory frameworks, given that competing national interests, unchecked by current ethical guidelines, have already exacerbated global inequalities.

Data Colonization: The Not-So-Secret Project Behind Current AI Initiatives

Current solutions set norms that severely risk perpetuating data colonization, the practice by which big AI companies and wealthy countries exploit the Global South to experiment with new technologies and harvest immense amounts of data for profit. In the chapter “AI and the Global South: Designing for Other Worlds,” Director Chinmayi Arun of the Information Society Project at Yale Law School analyzes the often overlooked effect of data colonization on the Global South.

Director Arun describes the Global South as populations around the world that are oppressed by capitalism and colonialism, rather than a place enclosed by geographical borders. This broader interpretation of the Global South is necessary to understand how the adverse incentives of companies and nations sustain worldwide disparities. 

Often, tech companies exploit the Global South under the guise of ethical practices, abusing the lack of human rights protections in these regions. For instance, while OpenAI’s ChatGPT has taken the world by storm with its groundbreaking ability to answer a myriad of questions in a human-like manner, the work of making the chatbot’s outputs safer was outsourced to Kenyan workers paid less than $2 an hour. These workers were tasked with sifting through traumatizing content depicting child sexual abuse, suicide, and torture to clean the datasets used to train ChatGPT.

OpenAI outsourced this labor through Sama, a San Francisco-based firm that provides similar services for other big tech companies like Google, Meta, and Microsoft. Though Sama discontinued its data labeling operations for OpenAI in February 2022, the firm had all the while marketed itself as a proponent of ethical AI initiatives.

While the UN’s AI Interim Report recognizes that AI companies leverage the Global South in their AI development process, it is too optimistic about the potential of future laws to provide remedies and thus proposes insufficient accountability mechanisms. Even the most promising AI-related laws risk entrenching data colonization via public-private partnerships, in which companies and governments collaborate on specific, public-facing projects.

For instance, the EU, alongside humanitarian aid agencies, collects biometric information from asylum seekers fleeing to its member states. While the intent is to prevent terrorists from entering the bloc and to provide seekers with a means of identification, many are concerned about the seekers’ privacy and human rights, especially since their lack of legal protection excludes them from emerging AI regulation. As the EU partners with tech companies to forecast seekers’ movements, we must consider how current MSGMs are setting a precedent that normalizes these conflicting incentives.

Regulation and Innovation: A Misconstrued Story

Despite our insufficient regulatory efforts, some still argue that they are excessive. This argument frames innovation and regulation as mutually exclusive, assuming that regulation can only constrain society’s capacity for innovation. However, regulation and innovation complement one another once we consider the direction of AI innovation.

What type of society are we striving for? 

This is the underlying question that can guide innovation and regulation toward benefiting the public. Such is the case as developing countries integrate AI into society. 

Given the unique geopolitical climates of Latin American countries, AI regulation can’t be treated as a one-size-fits-all solution: Simply replicating regulatory frameworks such as the EU AI Act is not feasible. Instead, these countries leverage experimental public-private partnership initiatives and regulatory sandboxes to address nation-specific issues with AI in an ethical manner. In an interview with The HPR, AI policy expert Armando Guio Español, a lawyer and affiliate of Harvard University’s Berkman Klein Center, discussed his role in advising Colombia’s national government on AI initiatives: 

“Even if you look at the European AI Act, they are promoting sandboxes,” Mr. Español said. “We will see sandboxes and many of these elements for regulatory experimentation and innovation to still be in place while regulations are in place. I think Europe is a good example of that, and many Latin American countries are going through that perspective.”

By implementing stronger accountability mechanisms, MSGMs can play an important role in advancing innovation for the public good. To that end, I propose four ways we can reshape financial and geopolitical incentives for more effective AI governance models.

One: Demand a More Proactive Government

The government must engage with AI proactively rather than reactively. Currently, stronger accountability mechanisms are implemented only after regulators uncover human rights violations, an approach that does little to prevent irresponsible AI practices.

We should also center discussions around the direction of AI innovation. Where do we want society to be five years from now? How can we leverage AI to get there? When these normative questions were asked about climate change, the government began subsidizing renewable technologies and shifting consumer norms in favor of green technology.

The same effect can be achieved by subsidizing a basic standard of AI safety tools for all companies while incorporating vulnerable communities’ input on the effects of AI. A proactive government directs its efforts toward addressing public concerns and tempering nationalist incentives.

Two: Enforce Transparency-Focused Assessments

We must institute auditing and impact assessments enforced by state regulators. While auditing assessments verify that the “AI black box” adheres to ethical data and algorithmic practices, impact assessments analyze the broader risks and effects posed by an AI system.

While many businesses might argue that audits can expose their trade secrets to the public, I argue that audits are justified so long as the risks identified by preliminary impact assessments warrant further transparency. As illustrated by the FTC’s response to Rite Aid, transparency is essential to responsible AI governance: Moving forward, Rite Aid must outline potential risks to consumers and test its algorithms against these risks before and after deployment. Ensuring that AI companies are transparent about their AI management practices is critical to reducing the information asymmetry that forces regulators and the public to blindly trust companies to self-regulate.

Three: Better Articulate Societal Harms

We must avoid simply reiterating the societal harms posed by AI and instead articulate social costs in a manner that engages companies and governments. In an interview with The HPR, Dayle Duran, privacy and legal counsel at ŌURA, explained why this shift in framing is crucial to addressing conflicting incentives: 

“How could we bridge the gap between what we’re saying is the harm versus the harms that we already acknowledge?” asks Duran. “Being able to quantify harm under a country’s existing framework for recognizing harm can be really helpful. Otherwise, it can be pretty hard to just invent harms and try to get legislators on board.”

For instance, when examining concerns about companies’ use of generative AI art, we can show that discouraging people from becoming artists ultimately weakens the very art-related markets that AI companies operate in. Framing harms this way highlights the repercussions of the indirect damage done by AI companies and government policies.

Four: Push Market Competition Toward Ethical AI

We must leverage market forces to push competition toward socially optimal outcomes. Efforts to mitigate climate change again serve as a useful example. Strong government investments in green technology incentivized companies to compete on renewable energy, giving rise to the electric vehicle industry, which expanded product options for consumers and led powerful companies to effectively self-regulate in hopes of winning over environmentally conscious buyers.

By analyzing the social cost of carbon emissions, economists and policymakers created carbon tax regulations that push companies toward socially optimal levels of emissions. Further economic research into the harms posed by AI would similarly inform policy that forces companies and governments to rethink their societal roles. Professor Daron Acemoglu of MIT’s Department of Economics, for instance, studies the negative consequences of AI, highlighting the importance of redirecting AI research toward the public good.

As competition in AI intensifies, we must ensure that innovation upholds the public good. While the plethora of ethical guidelines and AI safety initiatives makes it seem as though the risks of these technologies are being adequately addressed, we must consider how companies like Rite Aid are incentivized to overlook algorithmic biases that harm marginalized communities. By addressing the adverse incentives of businesses and governments, we can shift the paradigm of multi-stakeholder governance models from rewarding participation to ensuring accountability. This new paradigm will foster comprehensive regulations that align AI competition with the welfare of the Global South.