Over two months ago, Microsoft and IBM signed the “Rome Call for AI Ethics,” pledging to the Vatican that their AI will protect the planet and its people.
Nothing has come from the resolution, so far, other than a blog post or two calling for more pledging.
Microsoft and IBM are now bound to “safeguard the rights of all humankind” and subscribe to a “duty of explanation,” so we peasants understand “not only the function of AI algorithms but their purpose and objectives,” according to the story announcing the deal in the Financial Times. In exchange, their crusades to invent AI are hereby blessed.
History is filled with secular leaders cutting such deals in exchange for the Church’s sanction for fighting a war newly deemed “holy,” or sometimes simply its benign disregard when said leaders’ actions came nowhere near qualifying for such a description. I’m not sure what agents Microsoft and IBM are fighting against, other than their own worst, base instincts.
And none of these agreements are binding, of course. Circumstances change, indulgences are recalculated, and yesterday’s blessed sons and daughters become tomorrow’s apostates. The only mitigating factor is a signatory’s fear of eternal damnation (though that, too, has been negotiable).
Isn’t it odd that there have been no public updates on the pledge?
It would be a rich topic for ongoing narration, from sharing who’s working on what with whom and what they’re discussing, to examples of how the businesses are applying the emergent ethics precepts to actual activities, whether easily or with difficulty.
The very idea of “AI Ethics” should be a process, not a solution.
Of course, this premise flies in the face of our newfound love of “purpose” in corporate behavior, which is usually defined and delivered by marketers or management consultants who only speak to the canon of consumer trend research. Whatever is going on with AI shouldn’t fall into the same bucket with mission statements and purpose communications strategies, and certainly shouldn’t be branded. It should reside somewhere else…a place where Microsoft and IBM could and would share real-time interpretive insights into what their AI activities mean to people and the planet.
I’ve come to believe that corporations of all stripes need a C-Suiter in charge of ethics and morality, and this would include oversight of development activities in AI. Think of a Chief Morality Officer who functions like a priest or rabbi, providing operational leadership with perspectives on the implications of their decisions beyond material measures. I don’t see this position having any enforcement authority, so less Spanish Inquisition and more kindly parish priest who helps families work through the challenges of daily life.
There’s a broad chasm between the Rome Call for AI Ethics and its implementation, and without establishing any sort of translation or application mechanism within the companies, its realization will be left to press releases and other “thought leadership” about its successes that marketers choose to promote.
The resolution binds Microsoft and IBM to nothing, which should free them to chart new territory and communicate something.
Any failure to do so will doom the companies to hell, of course, so that’ll be some consolation.