Who owns your face? Scholars at U of T's Schwartz Reisman Institute explore tech's thorniest questions

Who owns the data generated by facial recognition tools? Can AI help cities budget more fairly? These are the types of questions being tackled by experts at U of T's Schwartz Reisman Institute for Technology and Society (photo by Weiquan Lin/Getty Images)

There are no easy answers when it comes to protecting people’s rights in the digital domain.

Take, for example, your face. Clearly, it belongs to you. But that’s not necessarily the case when you use it to unlock your smartphone or post an image of it on social media – in both instances your likeness is transformed by a third party into a stream of data.


“Right now, we really don’t have a lot of agency over our data, even though it stems from really mundane activities,” says Wendy H. Wong, a professor of political science in the University of Toronto’s Faculty of Arts & Science and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society.

“It is generated about you, but you don’t actually create that data yourself.”

Wong, who holds the Canada Research Chair in Global Governance and Civil Society, is working to bridge the divide between rapid technological innovation and society’s capacity to develop rules and regulations to govern it.

She is exploring how challenges in governing data and artificial intelligence are forcing us to re-examine our perspective on human rights. Called “Human Rights in the Digital Era,” Wong’s project – one of the major research projects underway at the Institute – looks at how the proliferation of data has fundamentally changed what it means to be human, how we relate to one another, and what it means to have rights in the digital era.

The Schwartz Reisman Institute, an Institutional Strategic Initiative (ISI) launched in 2019, has a mission to ask critical questions and generate deep knowledge about the increasingly important – and potentially fraught – relationship between technologies and societies by fostering research-based collaborations between computer scientists, social scientists and humanists. It’s supported by a historic $100 million donation to U of T from Gerald Schwartz and Heather Reisman – a gift that is also underpinning construction of Canada’s largest university-based innovation hub: the Schwartz Reisman Innovation Campus.

“Toronto is home to some of the key innovations that have powered the explosion of AI over the last decade,” says Gillian Hadfield, the institute’s director and a professor in the Faculty of Law who is the Schwartz Reisman Chair in Technology and Society and was recently named a CIFAR AI Chair. “This generates the capacity for expertise and collaborations for people interested in solving problems.”

“The Schwartz Reisman Institute for Technology and Society can play a great role in helping grow the vibrancy of the community and the potential for Canada to grow such technology.”

Who owns your face?

In the case of facial recognition tools, Wong says the technology’s rapid growth and adoption by everyone from smartphone-makers to police departments raise important questions about ownership and privacy – and about how personal aspects of our lives, such as our faces, can be taken from us as data without our knowledge.


For example, Canada’s privacy commissioner said in 2021 that the RCMP had violated the Privacy Act by using the services of Clearview AI, a U.S.-based facial recognition company. In an earlier decision, it also found Clearview in violation of privacy laws after the company collected three billion pictures of Canadians from websites, without their consent, and made them available for criminal justice purposes.

Writing about the decision in the Globe and Mail last year, Wong noted that there is no definite answer as to who owns the data generated by our faces, making international human rights frameworks a vital touchstone in guiding the future of this space.

“Can we ever properly consent to having our faces made into data? In the best of times, consent is a challenge to define,” Wong wrote. “In the age of datafication, it has become almost impossible to take someone’s ‘consent’ as meaningful.”

As technologies raise new questions about human rights, there is still a lot to learn about what it means to be human in the digital era.

Part of that work involves challenging what we used to take as fact – like ownership of our faces – especially when it is impossible to opt out of using anything digital, Wong says.

Human rights on social media – who makes them?

Another thorny issue, says Wong, is how freedom of expression is being regulated by the Big Tech companies that encourage users to scroll through countless hours of social media on their platforms.

Traditionally, human rights – including freedom of expression – govern relationships between states and people. As a result, Wong says existing human rights frameworks are insufficient to oversee tech giants and their platforms, which straddle both the private and public spheres. 

Wong notes, however, that corporations such as Meta, which owns Facebook and Instagram, employ their own community standards and have made attempts to self-regulate. Meta’s Oversight Board, for one, is an independent body that evaluates decisions made by the company to remove or keep problematic content and profiles on Instagram and Facebook. 

The Global Network Initiative, a non-governmental organization spearheaded by technology companies and academics, is another effort grappling with questions about how corporations should protect values like freedom of expression and privacy. 

Wong says she plans to further explore the global impact of these and other bodies – both through her work at the institute and in her forthcoming book with MIT Press. 

Empowering communities through algorithmic fairness

While technological advancement has created many new questions, it also promises to provide answers to many longstanding problems. 


Nisarg Shah, an assistant professor in the department of computer science in the Faculty of Arts & Science, is designing new balloting methods, fairness criteria and allocation rules to explore how AI technologies can be used for participatory budgeting – a democratic process that empowers residents to decide how public funds should be used in their communities.

“When people talk about algorithmic fairness, they think about technology making decisions for people,” says Shah, who is one of four U of T faculty members awarded an inaugural Schwartz Reisman Fellowship.

“Sometimes, algorithms make mistakes, and the question is whether they might impact some communities more than others.”

A participatory budgeting model starts with community consultations, followed by several rounds of discussing community proposals and how much of the public budget should be allocated to each project. Finally, residents vote on the proposals, and their votes are aggregated into a final budget.

Shah has designed approaches centered on eliciting people’s preferences and ensuring the budget is allocated fairly with respect to their needs. These include participatory budgeting models based on the happiness residents derive from a project, or on the cost of implementing it.

Consider a hypothetical example outlined in Participatory Budgeting: Models and Approaches, in which 3,000 residents vote on allocating a $7 million budget among four projects: A and B (each costing $3 million), and C and D (each costing $2 million). Two thousand residents like only projects A and B, 500 like only C, and the remaining 500 like only D. Projects A and B could be implemented, which would make 2,000 residents “very happy” but the rest “very unhappy.” Or, one of A or B could get the green light together with both C and D, making 2,000 residents “partially happy” and 1,000 residents “very happy.” What would be the fair choice?
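To make the trade-off concrete, here is a minimal sketch in Python (not Shah’s actual model) that enumerates every bundle of projects fitting the budget and scores each bundle under two illustrative objectives: total happiness (the number of funded projects each resident approves of, summed over residents) and coverage (the number of residents with at least one approved project funded). The costs, budget and voter groups mirror the hypothetical example above; the objectives themselves are assumptions chosen for illustration.

```python
# Illustrative sketch only: compares two simple objectives for the
# hypothetical participatory-budgeting example described above.
from itertools import combinations

costs = {"A": 3, "B": 3, "C": 2, "D": 2}   # project costs, in $ millions
budget = 7                                  # total budget, in $ millions
# Voter groups: (number of residents, set of projects they approve of)
groups = [(2000, {"A", "B"}), (500, {"C"}), (500, {"D"})]

def feasible_bundles(costs, budget):
    """Yield every subset of projects whose total cost fits within the budget."""
    projects = list(costs)
    for r in range(len(projects) + 1):
        for bundle in combinations(projects, r):
            if sum(costs[p] for p in bundle) <= budget:
                yield set(bundle)

def total_happiness(bundle):
    """Sum over residents of how many of their approved projects are funded."""
    return sum(n * len(approved & bundle) for n, approved in groups)

def coverage(bundle):
    """Number of residents with at least one approved project funded."""
    return sum(n for n, approved in groups if approved & bundle)

if __name__ == "__main__":
    for bundle in feasible_bundles(costs, budget):
        print(sorted(bundle),
              "cost:", sum(costs[p] for p in bundle),
              "happiness:", total_happiness(bundle),
              "coverage:", coverage(bundle))
```

Running the sketch reproduces the tension in the example: funding A and B maximizes total happiness (4,000) but leaves 1,000 residents with nothing they voted for, while funding one of A or B along with C and D gives every one of the 3,000 residents at least one project they approve of, at a lower happiness total of 3,000.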

Toronto piloted participatory budgeting from 2015 to 2017 in Scarborough and North York. Overall, the pilot study found that residents wanted more input on infrastructure projects and more opportunities to consult city staff on various issues. However, it found participatory budgeting was also resource-intensive and could result in divisions in communities.

As Shah continues to develop fair approaches to participatory budgeting, he’ll also explore how proportional representation – which ensures each neighborhood receives representation, be it monetary or political, commensurate with the people living there – can help curb political gerrymandering: the practice of redrawing electoral district boundaries for political advantage so that some communities’ votes carry more weight than others.

Investing in the future

As researchers at the Schwartz Reisman Institute navigate the promise and pitfalls of existing technologies for society, Hadfield says SRI is simultaneously investing in initiatives that aim to influence the direction of future technological development.

In an effort to promote responsible, ethics-based AI technologies, SRI partnered with the Creative Destruction Lab (CDL) at the Rotman School of Management last summer to provide mentorship and support to startups in the incubator’s AI stream. These include Private AI, which protects privacy by developing AI software that erases personal data from text, images and video, and Armilla AI, an AI governance platform enabling algorithmic accountability.

The Schwartz Reisman Institute also ran a one-day workshop with the Business Development Bank of Canada (BDC), which provides business loans to small and medium-sized Canadian enterprises, and hosted panels with government regulators, regulatory technology providers and SRI researchers to discuss how to establish a fair, responsible Canadian AI industry.

With regulatory transformation a strategic goal at SRI – and a focus of Hadfield’s current research – SRI will partner with governments, civil society organizations and other institutions to offer new ideas about regulatory frameworks to guide digital transformation.

This article is part of a multimedia series about U of T's Institutional Strategic Initiatives program – which seeks to make life-changing advancements in everything from infectious diseases to social justice – and the research community that's driving it.
