Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.  Systems rely on AI to detect credit card fraud, administer social benefits, enhance personalized medicine and even determine criminal sentencing.  The profound policy implications of AI require legislators and regulators to inject laws and ethics into computer code, reversing Lessig’s maxim to position law as code.

Everyone from academics in conferences to Hollywood producers is considering what the AI revolution means for technology, business and society.  Optimists portray AI as a silver bullet to problems ranging from national security and bank fraud to personalized healthcare and online content moderation.  Skeptics lament black box algorithms driving fateful decisions about individuals’ credit, education and even liberty.  In the years ahead, how should policymakers address the AI disruption, optimizing the benefits of advanced pattern matching and deep learning platforms while minimizing the risks to individual rights?  We suggest that regardless of policy stance, the advent of AI will require policymakers and regulators to develop the technological savvy and expertise necessary to assess, oversee and review automated decision making and machine learning systems.

Although some of the emerging business models may be more hype than reality, there are examples of machine learning driving advances in almost every sector.  AI is broadly understood as a new generation of machine-driven processes modeled after the intricate workings of the human brain, including cognitive learning, problem solving, speech and language recognition, image identification, and more.  Technological visionaries foresee machines and algorithms that improve themselves through iterative processes, leading to exponential increases in capability that in time far surpass the abilities of humankind, culminating, perhaps, in a self-aware singularity.

Utopian or — depending on point of view — dystopian visions aside, AI already presents formidable challenges for policymakers and regulators, particularly in the privacy and data protection space:

• Good AI requires sound data.  One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to what is strictly necessary and to retain data only as long as needed for its stated purpose.  Yet in many, though not all, cases, AI requires large datasets, unleashing machine learning processes on piles of data to discern patterns invisible to humans.  Data protection regulators will need technical expertise to assess how, and to what extent, organizations can maintain datasets in a manner that respects individual rights.  They will also need to understand the workings of new technological methodologies for privacy-protective data analysis.  For example, with differential privacy, organizations inject noise into data analysis queries, allowing researchers to survey datasets without compromising any individual’s privacy.  With homomorphic encryption, researchers conduct calculations and analysis on data in encrypted form.  These new methodologies require highly technical expertise, in mathematics and statistics, encryption and computer science, which is outside the typical realm of policymakers and lawyers.

• Preventing discrimination – intentional or not.  In the United States, one of the stated reasons for instituting automated credit scores in the early 1970s was to rid credit decisions of social biases and discrimination.  Alas, automated systems, too, are not immune to bias.  Digital discrimination can be born of skewed data, faulty decision-making algorithms, or malicious intent.  Ridding automated systems of bias requires policymakers to establish coherent theories of discrimination.  When is a distinction between groups permissible or even merited, and when is it untoward?  How should organizations address historically entrenched inequalities that are embedded in data?  New mathematical frameworks such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.  Here too, statistics and math augment social theories and legal policies to encode morals in a digital age.

• Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is said to be the best of disinfectants.”  European data protection and U.S. credit reporting regulations require organizations to give individuals insight into the parameters driving decisions about them in areas ranging from credit and insurance to employment and education.  Yet increasingly sophisticated machine learning processes become opaque almost by definition.  Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.  Indeed, with the human brain, itself the subject of deep scientific mystery, as the gold standard, it is no wonder that even the programmers who code these systems increasingly struggle to understand their output.  Legal scholars Lilian Edwards and Michael Veale have expressed concern that the search for a “right to explanation” in Europe’s General Data Protection Regulation (GDPR) “may be at best distracting and at worst nurture a new kind of ‘transparency fallacy’.”
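To make the differential privacy idea mentioned above concrete, consider its simplest instance: answering a counting query by adding calibrated Laplace noise.  A count has sensitivity 1 (one person’s record changes it by at most 1), so noise drawn from Laplace(1/ε) yields an ε-differentially private answer.  This is a minimal illustrative sketch, not a production mechanism; the function names are ours.

```python
import random


def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution.

    The difference of two i.i.d. exponential samples with rate
    1/scale is Laplace-distributed with that scale.
    """
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)


def dp_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    Counting queries have sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of ε mean more noise and stronger privacy: a regulator reviewing such a system is, in effect, reviewing the choice of ε and the sensitivity analysis behind it.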
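The statistical parity mentioned in the discrimination bullet is also easy to state precisely.  As an illustrative sketch (a simplification, not the full “fairness through awareness” formalism), the parity gap is the largest difference in favorable-outcome rates across groups; a gap of zero means perfect statistical parity.

```python
def statistical_parity_gap(decisions, groups):
    """Largest difference in favorable-outcome rates across groups.

    decisions: parallel iterable of 0/1 outcomes (1 = favorable,
               e.g. a loan approved)
    groups:    parallel iterable of group labels
    """
    totals, favorable = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        favorable[g] = favorable.get(g, 0) + d
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


# Group "a" is approved at 0.5, group "b" at 0.25: a gap of 0.25.
gap = statistical_parity_gap([1, 1, 0, 0, 1, 0, 0, 0],
                             ["a"] * 4 + ["b"] * 4)
```

Even this toy metric illustrates the policy questions in the text: whether parity of outcomes is the right target at all, and how to treat inequalities already embedded in the historical data.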
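The explainability contrast drawn above can also be illustrated.  Unlike a deep network, a simple linear scoring model is transparent by construction: each input’s contribution to the final score can be disclosed directly, in the spirit of the credit reporting disclosures the text describes.  The weights and feature names below are hypothetical and for illustration only.

```python
def explain_linear_score(weights, features):
    """Score an applicant with a linear model and return both the
    score and a per-feature breakdown, ranked by absolute
    contribution, suitable for disclosure to the individual."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked


# Hypothetical credit-scoring weights, for illustration only.
weights = {"income": 2.0, "late_payments": -3.0, "account_age": 0.5}
applicant = {"income": 4.0, "late_payments": 1.0, "account_age": 2.0}
score, reasons = explain_linear_score(weights, applicant)
# score = 2*4 - 3*1 + 0.5*2 = 6.0; top factor: income (+8.0)
```

A deep learning model offers no comparably faithful decomposition, which is precisely why a legal “right to explanation” is so hard to operationalize for such systems.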

Oxford philosopher Luciano Floridi wrote that “Ours is a world of digits that must be read through computer science.”  Yet even with code as law and a rising need for law in code, policymakers need not become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access not only to lawyers, ethicists and philosophers but also to technical leaders and subject matter experts, to ensure an appropriate balance between the economic and scientific benefits to society on the one hand and individual rights and freedoms on the other.

Jules Polonetsky serves as CEO of the Future of Privacy Forum, a non-profit organization that serves as a catalyst for privacy leadership and scholarship.

Omer Tene is Vice President and Chief Knowledge Officer at the International Association of Privacy Professionals.