Atomico Founder and CEO Niklas Zennstrom, Center for Humane Technology Founder Tristan Harris, Mozilla.org chairwoman Mitchell Baker and Wikipedia and Wikitribune Founder Jimmy Wales participate in a talk on ‘Tech for Good’ during the VivaTech fair in Paris, France, 24 May 2018. EPA-EFE/ETIENNE LAURENT

The past two decades have seen spectacular and troubling failures on the part of the technology industry. As we contemplate how to respond to protect everything from our privacy to our personal safety and to tackle the proliferation of extremist content online, it is important to understand this history of growth and greed on the part of the technology sector.

Back in the early 2000s, the availability of child sexual abuse material online exploded. Fueled largely by the quickly growing technology sector, companies were asked to tackle this issue and do something to rein in the abuses on their platforms. Their response was loud and clear: They chose to downplay the problem. Once faced with overwhelming evidence, their messaging changed to claim that there was no technical solution to the problem. A collaboration between Microsoft and me eventually led to the development of PhotoDNA – a technology that could be used to disrupt the global distribution of child sexual abuse material. Tech’s messaging changed again, now claiming that the solution might catch legitimate material in its net and that it was unclear how, or by whom, the material to be removed should be defined. Only after years of public pressure did the major technology companies relent and deploy PhotoDNA. Despite predictions to the contrary, this technology has been hugely impactful while at the same time not leading to the problems that the technology companies predicted would arise.

Fast forward a few years and the United States found itself in the early stages of what is now an alarming opioid epidemic, which just last year led to over 70,000 deaths in the U.S. This epidemic is the result of a number of factors, but starting in 2003, a significant source of opioids was illegal pharmacies advertising on Google and other platforms. After years of being told of the illegal drug trade on its platform, Google failed to put in place the necessary safeguards, and in 2011 the U.S. Department of Justice fined Google $500 million for continuing to allow illegal and dangerous drugs to be advertised on its platform. The investigation found that not only did Google passively allow ads from illegal pharmacies, but that for over five years, Google provided customer support to some of these pharmacies, helping them place and optimize their advertisements and improve the effectiveness of their sales of illegal and deadly drugs.

As the saying goes – a leopard can’t change its spots. Another five years on, it was clear that the internet was being weaponized by hate and extremist groups. The technology giants denied the extent of the problem, claimed that no technological solution existed, and insisted that any solution would violate privacy and free speech. They wanted to delay, delay, and delay, until public pressure or threats of legislation forced action. Then they acted anemically, doing just enough to quash criticism or forestall legislation.

Continuing their pattern of indifference, we have most recently learned of the devastating consequences of misinformation campaigns on the largest digital platforms – Facebook, WhatsApp, YouTube, and Twitter. From horrific violence in Myanmar, Sri Lanka, India, and the Philippines, to meddling in democratic elections around the globe, social media has — again — been weaponized. And the response from the industry was all too familiar: Mark Zuckerberg started by calling it a “pretty crazy idea” that fake news played a role in influencing the 2016 U.S. elections, then admitted that a small number of users saw ads from Russian trolls, then that the number was closer to 10 million users, and eventually that it was over 100 million.

From child sexual exploitation to the sale of illegal drugs, extremism, and misinformation, the titans of technology have failed us. They have shown that they are simply incapable of regulating themselves and that they put growth and profit ahead of all else.

While we have seen some progress as compared to a few years ago — thanks in large part to legislative pressure from the EU — there are still significant gaps in the development and deployment of technology to quickly and accurately find and remove the worst of the worst content online. In addition to improving existing technologies, the industry must do more to develop and deploy new technology to contend with an ever-changing landscape — and this must go beyond vague promises of using AI to solve the problems we are facing today. The industry must also institute a clear and consistently applied framework that brings transparency and due process to content regulation.

Given the pattern of behavior that we have seen over the past nearly two decades, the time of trusting the technology industry to self-regulate is long over. Instead, we must continue to pass legislation that thoughtfully contends with the issues at hand while making sure that any legislation does not cripple small startups, further entrenching the dominance of Google, Facebook, and Twitter. We must also pressure the advertisers that fuel the industry to be more socially conscious and to wield their enormous power more responsibly, and we as a public and the media need to continue to pressure the titans of tech to do better to rein in the abuses on their platforms. Collectively, it is our responsibility to help the technology sector find its moral compass, which has been badly broken for the past two decades.

Dr. Hany Farid is the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College and a senior advisor to the Counter Extremism Project.