Google asked the EU last week to forgo drafting new laws to regulate AI, citing the many existing rules that may already apply and the risk that new ones would impede innovation by small and medium-sized businesses.

It’s a version of the “we don’t know enough about what’s going to happen with new technology, so we should let it play out before we tinker” argument that was made quite successfully in the early days of social media.

Look where that got us.

The key to coming up with good answers is to ask the right questions, and Google is absolutely correct when it argues that the laws and regulations that might already apply to AI are “technology neutral in nature,” so they don’t risk enshrining bad solutions born of incorrect or incomplete assessments of novel tech.

But we know enough now that posing the AI challenge solely as “a technology” is a recipe for disaster.

Google was responding to a 45-page white paper published by the EU in mid-February, in which the Commission discussed various challenges and opportunities presented by AI, from how companies can get equal access to learning how to use it, to how consumers must be kept informed and empowered so that they’re not simply used by it.

Nowhere does it ask whether using AI is even a good idea. It’s like a white paper that assumes loaded guns are going to be handed to small children, and asks only that manufacturers and distributors work together on ways to limit the threat of “unintentional” injury.

In this context, Google’s feedback is that it’s very supportive of the effort, as long as it doesn’t impose limitations that might one day prove to have unduly impeded the rollout.

Google isn’t alone in its reluctance; Digital Future for Europe, a group of associations, businesses, and thinkers with a vested interest in building and selling AI, was even more blunt, concluding that the EU might “stifle” AI innovation.

I just don’t get it.

On one side of the debate are the experts and businesses who want to develop AI and readily admit that they don’t know all of what that development will entail. On the other side are government bureaucrats who may be experts at writing regulations but must admit, though perhaps less openly than their opponents, that they don’t fully know what the hell they’re doing.

The rest of us are left out of a debate over a phenomenon that promises to utterly upend many or all of our preconceived notions and truths about business, society, and our very definition of consciousness and what it means to be human.

The EU’s white paper almost admits as much, especially in Section 5.A, entitled “Problem Definition,” where it notes the wide-ranging implications that AI may have for how we work and live. But there are no teeth to how that problem will be resolved, and the rest of the paper is really focused on establishing a framework in which technologists and bureaucrats can work together to blow up the world as we know it.

We need to restate the question, perhaps even embrace Google’s preference for a “technology neutral” debate, and recast the development path for AI oversight in terms of morality, ethics, even philosophy. Where are the theologians in the debate? What about the analyses of economists who aren’t overt believers in unrestrained capitalism? Might insights from cultural anthropologists on how a given AI innovation will affect human experience come in handy?

Better yet, where’s the meta-committee, elected openly and operating entirely transparently, that has the authority to approve or nix one initiative or another? How about elevating certain high-impact decisions to public scrutiny and even a referendum?

AI innovating in unexpected ways may be inevitable, but unintended consequences aren’t, at least not entirely. Unfortunately, there’s no reason to believe that technologists and bureaucrats will do anything more than enable the latter, because they’ve already agreed on the former.
