Charting a Course: Business Leaders Set the A.I. Agenda

Before The New Work Summit in February 2019 — a conference that assesses the opportunities and risks as artificial intelligence accelerates its transformation of industries — The New York Times asked leaders who were participating to answer their choice of questions about technology. Their responses have been edited and condensed.

What is the single most important step the tech community can take to instill trust in artificial intelligence?

Ashton B. Carter

Director, Belfer Center for Science and International Affairs, Harvard Kennedy School; 25th secretary of defense

In 2013, when I was the D.O.D.’s No. 2 official, I issued a policy directive on autonomous weapons that is still in force. The U.S. takes its values to the battlefield, and the directive says that a human must be involved in and responsible for decisions aided by A.I. to employ lethal weapons. The same moral compass should govern commercial applications of A.I., such as credit ratings, prison sentencing and privacy. Accountability requires transparency. After a lifetime in technology development, I believe that transparency for the sake of accountability must be a technical design requirement for A.I. algorithms.

Rana el Kaliouby

Co-founder and chief executive, Affectiva

Today, A.I. is people-blind. Sure, it can “see” us, but it’s completely unaware of how we’re feeling. And as we interact with technology, all of the nuances and richness of our feelings disappear in cyberspace. That’s a big barrier when it comes to building trust and communication — not only between people and A.I., but with other people, too. How can we trust in technology that strips us of what makes us human? A.I. needs to be able to understand all things human, and truly relate to us, if we’re going to trust it. This is especially important as A.I. takes on new roles in society, and our interactions with A.I. become more relational and personal. But trust is a two-way street. A.I. needs to be able to trust people, too — to trust that we will be good co-workers, make the right decisions and use the technology ethically. This is only possible if A.I. can understand the emotions and expressions that are core to who we are.

Danika Laszuk

General manager, Betaworks Camp

The tech community needs to take the ethics of the technology it creates more seriously. For example, growth in A.I. and 5G technologies will begin to create new markets and, likely, entire industries once they are developed further. New web technologies like blockchain have the potential to create a more democratic and safer internet, in which users have more control and accountability is distributed more equitably. The recent rise in synthetic media and augmented reality tools is both exciting and a reason to take a step back. Through digitally created Instagram models like Lil Miquela, the consuming world of Fortnite and the normalization of people communicating regularly with nonhuman tools like Alexa, a synthetic reality has been born in which our human experience is enhanced, augmented and challenged by our interactions with intelligent machines.

Tristan Harris

Co-founder and executive director, Center for Humane Technology

The single most important way to instill trust in A.I.s is to align their business models with the ecology of stakeholders, and to ensure that they operate by internalizing possible risks and harms into their decision-making. Techno-utopians frequently make the mistake of imagining benevolent A.I.s that mysteriously pop out of labs to solve our problems, while ignoring how the business models of the companies that created them are constrained by bad incentives. If Facebook’s negative impacts over the last year have taught us anything, it’s that we should never underestimate the damage created by misaligned business models.

Dov Seidman

Founder and chief executive, LRN

The business of business is no longer just business. The business of business is now society. The world is fused, and we can no longer maintain neutrality. Therefore, taking responsibility for what technology enables and how it’s used is inescapable. Restoring trust will take more than software. We need to scale “moralware” through leadership that is guided by our deepest shared values and ensures that technology lives up to its promise: enhancing our capabilities, enriching lives, truly bringing us together, and making the world more open and equal. This means seeing not just “users” and “clicks” but real people, who are worthy of the truth and can be trusted to make their own informed choices.

Meredith Whittaker

Co-founder and co-director, AI Now Institute

Meredith Whittaker of the AI Now Institute discusses electronic addiction at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect. This, even as A.I. systems are increasingly tasked with socially significant decisions, from who gets hired to who gets bail to which school your child is permitted to attend. We urgently need ways to hold A.I., and those who profit from its development, accountable to the public. This should include external auditing and testing that subjects A.I. companies’ infrastructures and processes to publicly accountable scrutiny and validation. It must also engage local communities, ensuring those most at risk of harm have a say in determining when, how or if such systems are used. While building these cornerstones of trust will require tech community cooperation, the stakes are too high to rely on voluntary participation. Regulation will almost certainly be necessary, as what’s required will necessitate major structural changes to the current “launch, iterate and profit” industry norms.

Adena Friedman

President and chief executive, Nasdaq

Adena Friedman, president and chief executive of Nasdaq, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

As the saying goes, trust is earned, not given. Our experience in the capital markets has demonstrated that transparency is one of the keys to trust. To build trust, those applying A.I. to create new capabilities should consider how much to share — with their clients and other stakeholders — about the inputs used, logic within and ultimate outputs from their machine learning tools. The goal in gaining trust should be to demystify the process of creating the new capabilities, not to treat it like a new magic that clients cannot comprehend. My view is that A.I., if provided with transparency, will ultimately allow all industries to leverage the best of humans and machines together to create better, safer and smarter solutions for customers.

Deep Nishar

Senior managing partner, SoftBank Vision Fund

Deep Nishar, senior managing partner of SoftBank Vision Fund, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

A.I. systems should assist humanity, not threaten it — enabling us to offload cumbersome tasks in favor of more meaningful activity. Intelligent systems are already helping us discover new eco-friendly materials, assisting doctors with care decisions, making factory production lines more efficient and accelerating drug discovery. Our outlook should be one of optimism, not fear.

Sara Menker

Founder and chief executive, Gro Intelligence

Sara Menker, the founder and chief executive of Gro Intelligence, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

Trust can only be built if there is transparency and accountability. The tech community needs to come together and create frameworks that allow for transparency while respecting I.P. and the ability of models to learn, evolve and continuously change. Transparency is step one to accountability, and accountability is critical, especially in domains such as health, food and education, where A.I. can also be transformational.

David Limp

Senior vice president, Amazon Devices and Services

People are often skeptical of things that are new, and artificial intelligence fits that definition. I am an optimist, though, and history has shown us that new technologies, on balance, have been incredibly good for society measured by productivity, wellness and equality. The most important thing we can do to address this question is to apply A.I. in ways that solve real problems in customers’ lives. Practical, everyday uses of A.I. will help people see it as a tangible, positive force rather than an abstract or ominous thing. As an industry, we should come together to define standards and controls that ensure these algorithms are free from bias, built with privacy in mind and used in a way that puts the customer first.

Reid Hoffman

Co-founder and executive chairman, LinkedIn; partner, Greylock Partners

It’s important to significantly fund research and development on A.I. safety — so that A.I. will produce very positive outcomes for humanity with contained risks. A.I. safety will include transparency on algorithms and processes. A.I. safety will include techniques for understanding the justice and fairness of data sets used to build machine learning. And A.I. safety will include a clear sense of the parameters within which the machines operate. The industry should work together on A.I. safety, to maximize the outcomes for the world.

Sebastian Thrun

Chief executive, Kitty Hawk

Sebastian Thrun, chief executive of Kitty Hawk, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

We need to communicate. The tech industry can develop technology, but it’s all of society that has to find the right use for this technology. Government matters, as do regulators, NGOs, unions and workers. The more openly we talk about this, the better situated we will be to make the right decisions.

Who should be responsible for retraining workers displaced by A.I. and other types of automation?

Bridget van Kralingen

Senior vice president, IBM

The most effective way to bridge the skills divide is through innovative new partnerships between governments, businesses and educators. For example, IBM has changed our paradigm for hiring to accommodate “new-collar skills” that can be gained through vocational programs instead of four-year colleges. While new technologies like A.I. are transforming every job in every industry, that does not equate to all workers being displaced. What it does mean is that all businesses and professionals will require a mind-set for change.

What is one piece of advice you would give a recent college graduate entering the job market today?

Jeremy King

Executive vice president and chief technology officer, Walmart

That’s an easy one: Don’t push to become a manager too quickly. It’s important to become a master of your craft and to round out your experience first. I have seen so many technologists jump to leadership positions too early in their careers; as a result, they cannot go deep on the technology stacks they are leading and, in some cases, aren’t even trained on them. A good leader today is both technical and a good people leader — and that comes with time and experience.

Cindy Mi, founder and chief executive of VIPKid, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

If you were 18 and graduating from high school today, what would you do to be as employable as possible?

Cindy Mi

Founder and chief executive, VIPKid

I hope students today will cultivate a global mind-set and a passion for lifelong learning. They should seek out opportunities to connect with people around the world, and understand that while our world is getting smaller through technology, the opportunity to learn from people and cultures across the globe has gotten so much bigger. At VIPKid, we believe young children will become global citizens and have an impact beyond their own borders when they have access to personalized learning and compassionate teachers who instill in them this global mind-set.

Evan Spiegel

Co-founder and chief executive, Snap Inc.

I feel that the most important thing we can do is learn how to tap into our creativity. When we overcome the fear of expressing ourselves and our ideas, anything is possible!

Evan Spiegel, co-founder and chief executive of Snap Inc., speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 25, 2019. (Mike Cohen/The New York Times)
Former Treasury Secretary Lawrence Summers, the Charles W. Eliot University professor and president emeritus of Harvard University, speaks at the New Work Summit in Half Moon Bay, Calif., on Feb. 26, 2019. (Mike Cohen/The New York Times)

Lawrence H. Summers

Charles W. Eliot University professor and president emeritus, Harvard University; 71st secretary of the Treasury

I believe that the ability to manipulate, comprehend, analyze, collect and refine data of all kinds will be central to the success of individuals and organizations in the 21st century. It is absurd that high school students are taught trigonometry but not statistics, and physical science but not social science. Coding is important, but understanding the data handling that is the reason for coding is even more important.