U of T researcher to talk about AI opportunities – and designing household robots

Sanja Fidler will be one of several speakers during campus career event
Sanja Fidler, an assistant professor in U of T Mississauga's department of mathematical and computational sciences, talks about her research earlier this year at the ElevateTO conference (photo by Chris Sorensen)

Sanja Fidler has a back-of-the-envelope way to track Toronto’s rapid emergence as a global centre for artificial intelligence, or AI: the number of machine learning graduate students who are jumping at offers to do their research at the University of Toronto.

“You typically have some ratio of ‘accepts’ because there is MIT, Stanford and Berkeley and we all compete for the same people,” says Fidler, an assistant professor in U of T Mississauga's department of mathematical and computational sciences and a founding member of the Vector Institute for AI research.

“But this year almost everyone accepted.”

The flood of interest reflects both U of T’s growing global reputation in the booming field of AI and the degree to which Canada’s strategy of investing in AI research through initiatives like Vector, a partnership between U of T, government and industry, is helping to attract and retain top talent. 

That includes top researchers like Fidler, who specializes in computer vision and applied machine learning. She will be speaking at U of T Thursday as part of an AI career event with Santa Clara, Calif.-based Nvidia, a Vector partner that designs graphics processing units, or GPUs, which are often used to handle the intense computations necessary for AI applications.

“I do a lot of things connecting computer vision with natural language – so moving towards robots that are not only going to see but also communicate with people in natural ways,” Fidler says.

“The grand vision is we want to teach a robot how to do anything in a household – make coffee, clean your room. Maybe watch TV with you?”

Kidding aside, developing a Jetsons-like robot is far more difficult than it sounds. While companies like Uber, Google and others are rushing to develop self-driving cars, Fidler says creating robots that can interact with people in their homes is actually a more challenging problem in some respects. 

Whereas cars operate in a relatively contained environment – streets or highways equipped with lanes, signage and stoplights – a robot wandering around your house could come in contact with any number of different objects, from an empty cereal box to a cat sleeping on the window sill.

“Imagine I’m taking a picture in my office,” says Fidler. “There are so many different objects that will appear in the picture and some will only have a very few pixels, a very tiny region in the image. 

“In order to identify all these things, to outline the region and where each object is, as well as determine the [robot’s] task, is a very hard problem.”
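
The task Fidler describes – finding every object in a photo and outlining the region each one occupies – is what computer-vision researchers call instance segmentation. Purely as an illustration (this is not her group's actual system), a pretrained model such as torchvision's Mask R-CNN can be asked to do it for a single image; the file name and confidence threshold below are placeholder choices:

```python
# Illustrative only: detect and outline every object in one photo with a
# pretrained instance-segmentation model. "office.jpg" and the 0.7 score
# threshold are placeholder choices, not details from the article.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("office.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Each confident detection comes with a class label, a bounding box and a
# pixel-level mask outlining the object's region in the image.
keep = prediction["scores"] > 0.7
print(f"{int(keep.sum())} objects found above the confidence threshold")
```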

The challenges are made even more difficult by the fact that any object a robot “sees” will appear to change as either the object or robot moves around the room. “If I take a pen and rotate it, the image is changing a lot,” says Fidler. “So these neural nets have to capture this kind of variability.”
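
One common way to expose a network to that kind of variability – sketched below purely for illustration, with arbitrary transform values – is to randomly rotate and re-crop training images so the model repeatedly sees the same object from different orientations and scales:

```python
# Illustrative only: random rotations and crops during training show the
# network the same object from many orientations and scales.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=45),   # e.g. a pen seen at different angles
    transforms.RandomResizedCrop(224),       # vary scale and framing
    transforms.ToTensor(),
])
# Applying `augment` to each training image pushes the network to learn
# features that stay stable as the object or the camera moves.
```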

This is precisely the problem that U of T University Professor Emeritus Geoffrey Hinton discussed on stage at Google’s Go North event last week. Hinton, who does AI research for the search engine giant and is sometimes referred to as the “godfather” of deep learning, recently published two papers on a new approach, called “capsule networks,” that aims to make it easier for computers to recognize the same object from different perspectives. 

Read the Wired magazine story on Hinton's latest research

“This is how research is driven,” Fidler says. “You always want to think of better ways to do things. Everyone right now is thinking about convolutional neural nets for images, but maybe there’s a better way to design these nets.”
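
For readers unfamiliar with the term, a convolutional neural net of the kind Fidler mentions can be written in a few lines. The sketch below is a generic PyTorch illustration with arbitrary layer sizes; it is not drawn from her research or from Hinton's capsule-network papers:

```python
# Illustrative only: a tiny convolutional net for 224x224 colour images,
# with arbitrary layer sizes chosen for the example.
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # 224 -> 112 -> 56

    def forward(self, x):
        x = self.features(x)                  # convolutional feature maps
        return self.classifier(x.flatten(1))  # flatten and classify
```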

Fidler says her goal at the Nvidia event, which will also feature presentations by two Nvidia researchers, will be to inspire students by emphasizing how “cool” AI research is and the number of truly difficult problems that need to be solved.

“Imagine you’re on the team that builds a car that’s going to drive itself,” she says, noting that her U of T colleague, Associate Professor Raquel Urtasun, who heads up Uber’s self-driving vehicle lab in Toronto, recently drew a crowd to an Uber career event at U of T. 

“You’re going to go down in history.”

And creating robot assistants that will do your laundry? 

“I think that’s huge, too,” Fidler says. “But it’s also a little more out there – farther into the future.”
