Who has responsibility when AI is running the show?

Paula Boddington, Oxford University researcher and author of “Towards a Code of Ethics for Artificial Intelligence,” guides us through some of the complex ethical issues raised by AI’s increasing presence in our lives, its impact on our value system, and why we need to think hard about what could go wrong before adopting it wholesale.

With accelerating advances in artificial intelligence, the world of AI is weaving itself — almost imperceptibly at times — into the fabric of our lives, affecting how we move and live and relate.

But as AI extends human agency, intelligence and action in the world — sometimes replacing human beings entirely for certain tasks — it raises increasingly complex issues of ethics and responsibility.


Paula Boddington, senior researcher at Oxford University’s Department of Computer Science, tackled the subject head-on, addressing a packed audience at the Mobile World Congress in Barcelona last month. For a topic that presents many more questions than answers, she offered a glimpse into how technology brushes up against philosophy.

“The power of AI is pushing us towards questions about the very limits and grounds of our human values,” Boddington says. While it could be of enormous benefit, “does it extend our reach beyond what we can really handle? Does it push us beyond where we really want to go? Can we really be responsible for all aspects of such developments?”

The question of responsibility is at the heart of the ethics issue. AI is potentially a large disruptor of notions of responsibility. One key aspect of responsibility — indeed, a precondition — is control, Boddington says. “AI presents us with … possibly intractable questions about control. If AI is autonomous, do we always have complete control, and do we always know what’s going on?”

Black-box technologies and machine learning — two elements of AI — can be hard, if not impossible, to explain, she says. “If AI is acting autonomously without direct control — and especially if it’s not completely clear to human creators and users how decisions and actions are reached — how can we as humans remain accountable to others?”

Some computer scientists are calling for open inspection of algorithms as one solution to this dilemma.

Either way, disentangling cause and effect is much harder in the age of AI, and yet doing so is key to defining responsibility.

At the other end of the spectrum is the possibility that AI’s presence could diminish our responsibilities altogether. “It could, if we’re not careful, start to erode human autonomy,” Boddington says. “We could allow it to replace human thought and decision in places where we shouldn’t.”

Prompted by exactly these types of concerns, last year the Future of Life Institute (whose motto is “Technology is giving life the potential to flourish like never before… or to self-destruct. Let’s make a difference”) came up with 23 principles for beneficial AI, known as the Asilomar AI Principles.

Principle No. 9 reads: “Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.”

While much of the focus is on how AI blurs the lines of responsibility, one example of its potentially clarifying role, Boddington says, is the use of algorithms in sentencing decisions in courts, despite the controversy surrounding the practice. Since the public has a right to know how a decision is reached in the legal realm, an algorithm can pinpoint precisely the reason for a sentence, whereas a judge might be less able to do so. The caveat, of course, is that the algorithm generates an answer by examining data, and if that data carries a history of bias, the bias will be reflected in the sentencing. (Similar warnings have been sounded about inadvertently replicating sexism in AI through biased data.)
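
To see how that caveat plays out, consider a minimal sketch (the data and scoring rule below are invented purely for illustration; real sentencing tools are far more complex): a naive model that predicts a sentence from historically similar cases will faithfully reproduce whatever bias those cases contain, even though every step of its logic is open to inspection.

```python
# A toy sketch of bias propagation (all data invented for illustration).
# The "model" predicts a sentence as the average of historically similar
# cases, so any bias in the historical record reappears in its output.

from statistics import mean

# Hypothetical past cases: (neighbourhood, prior_offences, sentence_months).
# Suppose defendants from neighbourhood "B" were historically sentenced
# more harshly for the same offences.
history = [
    ("A", 1, 6), ("A", 1, 7), ("A", 2, 9),
    ("B", 1, 12), ("B", 1, 13), ("B", 2, 18),
]

def predicted_sentence(neighbourhood: str, prior_offences: int) -> float:
    """Average sentence of similar past cases: fully transparent, yet biased."""
    similar = [months for (hood, priors, months) in history
               if hood == neighbourhood and priors == prior_offences]
    return mean(similar)

# Identical criminal records, different predictions: purely because the
# historical data encodes the bias of past decisions.
print(predicted_sentence("A", 1))  # 6.5
print(predicted_sentence("B", 1))  # 12.5
```

Two defendants with identical records receive different predictions simply because past cases from their neighbourhoods were treated differently. The algorithm is perfectly inspectable, and the bias persists anyway.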

So what is the responsibility of a company that buys black-box components from another company? Boddington asks.

The question hangs, because trying to identify responsibility when we don’t control the technology and can’t predict the outcome is outside the framework of how we understand human responsibility.

The implications of that conundrum are significant. Adapting to AI, Boddington says, may have “a very deep impact upon our system of values.”

Another factor is the speed at which we adapt to AI. “Our habituation to technology can change what we think of as being normal or expected, and this in turn can change how we understand and attribute responsibility.”

The new normal is that we adapt quickly, even to things that ultimately aren’t to our benefit, she says, and our judgement shifts accordingly.

“We need to think carefully about the power of AI to perhaps dazzle and entrap us. AI is often glitzed up in ways to charm us into doing things we might not otherwise have done. Is it possible we could retain human control in principle but actually be seduced into doing what the AI suggests?”

She cites the example of AI being used in advertising to make it more effective at manipulating human behaviour. The famous “learning experiments” run by Stanley Milgram in the 1960s revealed how easily people can be influenced, even to the point of cruelty. Volunteers were asked to administer electric shocks to a subject whenever he gave a wrong answer, increasing the power bit by bit until the shocks (had they been real) would have been fatal.

Boddington asks: “Is this how we’re getting used to technology and AI taking over our lives?”

She is not anti-technology, she says, but strongly in favor of thinking about what might go wrong.

The Milgram experiments revealed a vulnerability worth noting, she says. When the volunteers expressed doubt, the experimenter simply repeated calmly: “The experiment requires that you continue.”

“This tiny little polite prompt was enough,” Boddington says.

What Milgram later concluded is that “the scientific, technical surroundings were enough to persuade completely ordinary people to act against their consciences,” she says.

“Are human beings particularly prone to falling prey to technological masters?” she asks, somewhat ominously. Hollywood would answer in the affirmative.

But movies aside, as AI becomes more deeply embedded in our social and cultural structures it has ramifications, even for those not directly using the technology.

