Artificial intelligence (AI) is emerging as one of the most transformative technologies of the coming years. It is powering autonomous vehicles, enabling new forms of algorithmic decision-making, and altering sectors from healthcare and transportation to retail and national security. The increasing penetration of AI into many aspects of life creates tremendous opportunities for economic development. A project undertaken by PricewaterhouseCoopers estimates that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.”

Yet at the same time, there are serious concerns about AI’s impact on the economy, governance, and society as a whole. For example, how should we make decisions about emerging technologies? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? How can we guard against biased or unfair data being used in algorithms?

Emerging technologies raise governance questions about who should decide. In the old order, national governments were thought to be the relevant decision-makers on basic questions about public policy and society. Leaders passed laws and enacted regulations to govern innovations from the telegraph and telephone to television and nuclear energy. Nation-states in a Westphalian system were the locus of public debate and deliberation.

Today, however, that system is in decline. Many decisions have migrated from the world of government to private companies. Facebook can now be considered a digital sovereign nation of 2 billion people. The same is true of other large firms with dominant power over particular domains.

Their coders, engineers, and computer scientists make decisions that affect the way people communicate, what information is at their disposal, how they buy products, and the manner in which democracy functions. Few of these decisions are subject to detailed government rules since many countries have taken a hands-off stance on most aspects of technology innovation.

Algorithms embed value choices into program design. As such, these systems raise questions about the criteria used in automated decision-making. Susan Etlinger of the Altimeter Group has noted that “algorithms aren’t neutral; they replicate and reinforce bias and misinformation.” Given this situation, it is important to understand better how these systems function and what choices are being made.
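To make that point concrete, consider a minimal sketch of how one design choice, the cutoff in a scoring rule, embeds a value judgment. Everything here is hypothetical: the applicants, scores, groups, and thresholds are invented for illustration, not drawn from any real system.

```python
# Illustrative only: where a designer sets an approval cutoff is a value
# choice, not a neutral fact. All data below are synthetic.

applicants = [
    # (applicant_id, credit_score, group) -- hypothetical values
    ("a1", 640, "group_1"),
    ("a2", 710, "group_1"),
    ("a3", 690, "group_1"),
    ("a4", 650, "group_2"),
    ("a5", 600, "group_2"),
    ("a6", 630, "group_2"),
]

def approval_rate(cutoff: int, group: str) -> float:
    """Share of a group approved under a given score cutoff."""
    members = [a for a in applicants if a[2] == group]
    approved = [a for a in members if a[1] >= cutoff]
    return len(approved) / len(members)

# The rule itself looks "neutral," but moving the threshold changes
# which group bears the rejections.
for cutoff in (620, 660, 700):
    print(cutoff,
          f"group_1: {approval_rate(cutoff, 'group_1'):.0%}",
          f"group_2: {approval_rate(cutoff, 'group_2'):.0%}")
```

Running this shows the two groups’ approval rates diverging as the cutoff rises, even though no one wrote an explicitly discriminatory rule. The choice of threshold is exactly the kind of programming decision Etlinger is describing.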

Depending on how they are set up, AI systems can facilitate the redlining of mortgage applications, enable discrimination against disfavored individuals, and screen rosters of people against unfair criteria. The considerations that go into programming decisions matter a great deal for how organizations operate and how they affect their customers.
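The redlining risk can be sketched in a few lines. In the hypothetical scorer below, no protected attribute appears anywhere in the code, yet a weight on a correlated proxy, here ZIP code, reproduces the discriminatory outcome; the ZIP codes, penalties, and incomes are all invented for the purpose of illustration.

```python
# Hypothetical sketch: a mortgage scorer that never sees race can still
# redline if it penalizes a correlated proxy such as ZIP code. The
# penalties below stand in for weights learned from historical data that
# itself reflects past redlining; every number is invented.

zip_penalty = {
    "10001": 0,    # historically favored neighborhood
    "10002": -40,  # historically redlined neighborhood
}

def mortgage_score(income: float, zip_code: str) -> float:
    # No protected attribute appears anywhere in this function.
    return income / 1_000 + zip_penalty.get(zip_code, 0)

# Two applicants with identical incomes, different neighborhoods:
print(mortgage_score(60_000, "10001"))  # 60.0
print(mortgage_score(60_000, "10002"))  # 20.0 -- same income, far lower score
```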

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development.

In some instances, AI systems are thought to have enabled discriminatory or biased practices. For example, Airbnb has been accused of having hosts on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.”

Racial issues also arise with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As Joy Buolamwini of the Algorithmic Justice League points out, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” Unless those databases contain diverse data, such programs perform poorly when attempting to recognize African-American or Asian-American features.
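The mechanics can be shown with a toy nearest-neighbor matcher over synthetic “face embeddings.” This is only a sketch under stated assumptions: the vectors are random stand-ins rather than real face data, and the 90/10 split is an invented analogue of the mostly-Caucasian database in Buolamwini’s example.

```python
# Toy model of the database-imbalance problem: a nearest-neighbor
# matcher over synthetic "face embeddings." All vectors are random
# stand-ins; no real faces or recognition model are involved.
import random

random.seed(0)
DIM = 4  # dimensionality of the synthetic embeddings

def embedding(center: float) -> list[float]:
    # A synthetic embedding clustered around a per-group center.
    return [random.gauss(center, 0.3) for _ in range(DIM)]

# Reference database skewed 90/10 toward group A -- a stand-in for a
# training set dominated by one demographic.
gallery = ([embedding(0.0) for _ in range(90)] +   # group A entries
           [embedding(2.0) for _ in range(10)])    # group B entries

def nearest_distance(probe: list[float]) -> float:
    # Distance from a probe face to its closest database entry.
    return min(
        sum((p - g) ** 2 for p, g in zip(probe, entry)) ** 0.5
        for entry in gallery
    )

dist_a = [nearest_distance(embedding(0.0)) for _ in range(200)]
dist_b = [nearest_distance(embedding(2.0)) for _ in range(200)]
print(f"avg nearest match, group A: {sum(dist_a) / len(dist_a):.2f}")
print(f"avg nearest match, group B: {sum(dist_b) / len(dist_b):.2f}")
# Group B probes land farther from their nearest database entry, so any
# fixed match threshold rejects them more often -- not because their
# faces are harder to recognize, but because the database
# under-represents them.
```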

Many historical data sets reflect traditional values, which may not represent the diversity of today’s population. As Buolamwini notes, such an approach risks repeating past inequities: “The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone gets insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create.”

Darrell M. West is Vice President and Director of Governance Studies at the Brookings Institution, where he holds the Douglas Dillon Chair. He is Founding Director of the Center for Technology Innovation at Brookings and Editor-in-Chief of TechTank.