Who Are the AI Police? A Look at Emerging Efforts to Regulate AI and the Involvement of Tech Giants


Every powerful human invention has been accompanied by risk. From early humans’ domestication of fire thousands of years ago to the harnessing of nuclear power a few decades ago, the tools that bring great power to humankind very often bring great dangers as well. This is just as true for the development of Artificial Intelligence (AI).

Last week a group of researchers from OpenAI, the University of Cambridge, the Future of Humanity Institute, and others released a 100-page report detailing the dangers of AI development and the need for AI policy. The paper warns of future attacks involving drones, self-driving cars, spear phishing, and the spread of misinformation. In parallel to this work, NewtonX assembled a panel of senior experts on AI regulation and control, including members of world-leading think tanks and former government officials and policy makers from the U.S., Europe, Russia, and China. In light of the recent Russian interference in the 2016 U.S. election (in the shape of misinformation spread via bots and troll farms, as well as the breach of the Clinton campaign’s data), the report is an apt examination of the ways in which AI can be used maliciously, both by individuals and by institutions.

Currently, the dangers AI poses are not robot warlords but rather a period of uncertainty in which increased reliance on technology leaves us vulnerable to unforeseen attacks. A former executive in IBM’s AI division and member of the NewtonX AI regulation panel declared, “To counteract forces that would use AI maliciously, we will need to preempt attacks and fortify security, both at the governmental level and at the enterprise level.”


The Big Players: Who’s Developing Policy to Regulate AI

While the U.S. government has been relatively mum on the subject of AI policy (most likely due to security concerns), many of the largest tech giants have developed think tanks and projects devoted to AI research and investigation. The five most influential forces in this effort to shape AI regulation are:

OpenAI

Co-founded by Elon Musk, who recently parted ways with the organization, OpenAI is a not-for-profit research institution devoted to finding safe ways to coexist with AI. The institution’s work includes proof-of-concept systems that can learn a task after seeing it performed once, reinforcement learning algorithms, and GPU kernels released for sentiment analysis.

Future of Humanity Institute

The Future of Humanity Institute is a research center at the University of Oxford devoted to investigating forces that affect the existence of humanity. One of these forces is AI. The institute publishes papers examining, among other topics, how to safely train real-world AI applications through reinforcement learning without endangering human life, when AI will exceed human performance, and the implications of openness (of source code, science, data, safety techniques, capabilities, and goals) for the development of AI.

Centre for the Study of Existential Risk

This Cambridge University research center is devoted to the study of forces that could lead to human extinction. The center has served as a resource for international policy decisions, and its work focuses on first identifying risks and then proposing solutions to mitigate them.

Center for Human-Compatible AI

Based at UC Berkeley, this center publishes research on how to harness the power of AI while ensuring that humans remain safe. It posits that AI research should focus on provably beneficial systems rather than on systems that pursue arbitrary objectives and could evade human control.

DeepMind

Google’s AI company DeepMind is largely devoted to developing commercial applications of AI, such as its recent retinal scan that can identify cardiovascular risk as accurately as a blood test. But it also devotes ample resources to research, including its famous Go-playing program, AlphaGo.

What NewtonX Experts Say About AI Policy Efforts

NewtonX’s AI policy and regulation panel includes former researchers and heads of AI programs at large tech companies (IBM, Microsoft, Apple, and more), former policy makers and government officials who have worked on AI-related initiatives, and academics from universities renowned for their AI work (Cambridge, MIT). The intent of this panel was to create a resource for understanding how the world is currently thinking about AI-related risks, what initiatives are needed to regulate AI and control those risks, and how corporations developing AI for profit relate to not-for-profit organizations thinking about the risks of AI at a societal level. The three most salient findings were:

  1. Companies respect these think tanks, but are more concerned with the profitability of AI systems than with future political ramifications
  2. Despite this, because the think tanks will eventually inform policy decisions, which in turn affect profitability, enterprises are monitoring AI thought leadership and research
  3. Companies are interested in investing in security solutions for bots, the spread of misinformation, and IoT

The research that these think tanks do, while not directly impacting enterprise endeavors, will affect the viability of market penetration for future AI-based products, from AI-enabled IoT to intelligent cybersecurity. The concerns the think tanks raise over security, including “novel attacks that exploit human vulnerabilities (e.g. through the use of AI for impersonation, identity theft etc.), existing software vulnerabilities (e.g. through automated hacking), or the vulnerabilities of AI systems (e.g. through adversarial examples and data poisoning),” indicate a growing need for proprietary, sophisticated security systems. According to the corporate side of the NewtonX AI regulation panel, this will be a major area of enterprise investment over the next five years.
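
To make the last of those threats concrete, here is a purely illustrative sketch of an adversarial example, not drawn from the report or from any panelist’s system: on a toy linear classifier (hypothetical weights and data), a per-feature perturbation far smaller than the features themselves is enough to flip the model’s prediction, which is the kind of vulnerability the think tanks flag for AI-driven products.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000  # high-dimensional input (think pixels or token features)

# Hypothetical stand-in for a trained classifier: logistic regression with fixed weights.
w = rng.normal(size=d)
b = 0.0

def predict_proba(x):
    """Probability that the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input that the model classifies as class 1.
x = rng.normal(size=d)
if x @ w + b < 0:
    x = -x  # make sure the starting prediction is class 1

# FGSM-style perturbation: step each feature slightly against the gradient of the score.
# For a linear model the gradient with respect to the input is just w, so a uniform
# per-feature step of size epsilon lowers the score by epsilon * sum(|w_i|).
clean_score = x @ w + b
epsilon = (clean_score + 2.0) / np.abs(w).sum()  # just enough to push the score to -2
x_adv = x - epsilon * np.sign(w)

print(f"per-feature perturbation: {epsilon:.4f}")              # tiny relative to feature scale ~1
print(f"clean prediction:         {predict_proba(x):.3f}")     # > 0.5, class 1
print(f"adversarial prediction:   {predict_proba(x_adv):.3f}") # < 0.5, prediction flipped
```

The arithmetic scales with input dimension: the larger the input, the smaller the per-feature change needed, which is why a perturbation that is imperceptible feature by feature can still defeat an otherwise accurate model.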

When Preparation Will Turn to Reality

History has shown that before we fully implement controls to regulate new technologies, there is usually a disaster or two that mobilizes us as a society. A senior NewtonX expert and former U.S. government official compares the need for AI regulation to the international regulation of nuclear programs. Indeed, there were numerous close calls during the Cold War in which either American or Soviet detectors falsely indicated that a nuclear attack from the enemy was incoming. Since the advent of nuclear weapons, there have been 32 “Broken Arrow” incidents, that is, accidents involving nuclear weapons that resulted in a weapon being launched, fired, detonated, stolen, or lost. These mistakes have become less and less frequent as we have implemented safeguards and regulations to mitigate the possibility of disaster.

The Russian interference in the U.S. election will likely go down in history as one such disaster. That said, because it has been highly politicized, we may still be several years out from a true security overhaul. According to our expert panel, the most important issue with the current state of AI policy is that there’s a knowledge gap between policy makers and technologists. To implement effective security measures at the governmental and regulatory level, politicians, lawmakers, and the people who are actually developing dual-use technologies will need to open up channels of dialogue.

For now, though, we are likely looking at a few more mistakes before widespread AI regulation is implemented.


The data and insights in this article are sourced from NewtonX experts. For the purposes of this blog, we keep our experts anonymous and ensure that no confidential data or information is disclosed. Experts are a mix of industry consultants, employees of the companies referenced, former government officials, and academics.


About the Author

Germain Chastel is the CEO and Founder of NewtonX.
