U of T's Mark Kingwell to give a lecture on AI and ethics at U.K. alumni event
It’s difficult for most of us to imagine a world where artificial intelligence, or AI, researchers succeed in populating the planet with intelligent non-human beings. Will they be our servants? Our friends and fellow citizens? Or will they, as Hollywood would have us believe, pursue a more sinister path?
Mark Kingwell, a professor of philosophy in the Faculty of Arts & Science at the University of Toronto, thinks deeply about these sorts of questions – though, for the record, he’s not worried about becoming robot fodder – and wishes more people would do the same.
“There’s a great deal of money and attention being paid to what can be done with these algorithms,” Kingwell says. “But there’s much less money and attention being paid to reflecting on what it means for us as humans, as well as for culture and society, going forward.
“I don’t think it’s presumptively negative, but I do think it needs to be part of the conversation.”
Kingwell, who says he first became interested in such science fiction-sounding scenarios as a teenager, will deliver a presentation titled "Humans and Artificial Intelligence – What Happens Next?" on Jan. 22 in London. The event, part of an ongoing series of lectures for U of T alumni held in cities around the world, will be hosted by U of T President Meric Gertler.
While Toronto – and U of T – is increasingly recognized as a hub of AI innovation thanks to initiatives like the Vector Institute, Kingwell's talk demonstrates the breadth of multidisciplinary talent U of T brings to the table when it comes to understanding the potentially far-reaching impacts of AI technologies, which are poised to reshape industries ranging from medicine to transportation.
Mark Kingwell, a U of T philosophy professor, says it's time to have a "humanistic" conversation about AI development (photo by Colin McConnell/Toronto Star via Getty Images)
So what issues does Kingwell see on the horizon?
At some point, he argues, we may have to decide whether AI-powered “beings” deserve rights and respect – a recognition we've historically been slow to bestow upon other groups, including, in the North American context, women and people of African descent.
“It’s not encouraging,” Kingwell says of humanity’s collective track record in this area.
It’s a criticism that’s already being aimed at Saudi Arabia, which in the fall granted an interactive robot named Sophia citizenship. The move was widely dismissed as a publicity stunt by a country accused of mistreating migrant workers and waiting until this past September to lift a ban on female drivers.
Kingwell acknowledges that it will probably be a long time before we encounter truly autonomous beings. But he stresses that ethical conundrums posed by automation and AI – which can already outperform humans in certain tasks – are already knocking on the door.
“I’m flying to London on a very smart plane,” he notes. “It flies itself and can land itself. But what if the human pilot decides to override all that, is that actually okay?”
Medicine is another area where technological advances, including the use of AI to detect disease and evaluate treatment options, are bound to raise difficult questions. Asks Kingwell: “Will it be making life and death decisions that we’re not comfortable with? Do we want humans to make those decisions? I’m not sure.”
If the idea of granting a machine rights sounds crazy, Kingwell assures us it’s not.
“Any entity that has the rational capacity to make an ethical choice – that grants them a presumptive status and we have to recognize that,” he argues.
And what if those sentient robots decide to do things we don’t want them to?
Kingwell likens the dilemma to the one parents face: “You create a child – a biological entity – that later achieves autonomy. That’s kind of a weird thing, but we don’t think of it as weird because it’s so common. But if you think about it, it is weird because you’re creating an entity that didn’t exist before and now it’s walking around and making autonomous decisions, doing things for good or ill. That’s something you did. You created that.”
The only difference with AI, Kingwell says, is that the pace of development will be much quicker, and the conception of these “non-biological” beings will be handled by big corporations with the money and technological prowess to create them.
“That’s another conversation we’re not having yet.”
In the end, Kingwell argues any conversation about the emergence of AI is really a conversation about ourselves.
“I think it’s really raising questions about what it means to be human, rather than what it means to be artificial,” he says.
“That, to me, is the interesting philosophical territory here.”