Brave new tech: Experts say AI tools like ChatGPT – and the ethical questions they raise – are here to stay

(Clockwise from top left) Catherine Moore, Ashton Anderson, Karina Vold, Paul Bloom, Valérie Kindarji and Paolo Granata (supplied images, photo of Bloom by Greg Martin)

This image was created by directing DALL-E to produce an image of the University of Toronto in the style of painter Vincent van Gogh’s The Starry Night (Image by DALL-E/directed by Chris Sasaki)

As artificial intelligence (AI) continues to rapidly advance, there has been a surge in the development of AI-powered content creation tools like ChatGPT and Dall-e that offer users a range of personalized experiences. However, with this growth come concerns about the potential dangers and ramifications of such apps, from privacy concerns to the displacement of human workers.

For example, the previous paragraph was written by ChatGPT, illustrating the blurring of lines between AI- and human-generated content. And the image at right was created by directing DALL-E to produce an image of “the University of Toronto in the style of van Gogh’s The Starry Night.”

In recent months, news headlines have outlined the issues relating to generative AI tools and content. Illustrators, graphic designers, photographers, musicians and writers have expressed concerns about losing income to generative AI and having their creations used as source material without permission or compensation.

On the academic front, instructors are having to cope with students submitting work written by ChatGPT and are re-evaluating how best to teach and assess courses as a result. Institutions such as U of T are examining the ramifications of this technology and providing guidelines for students and instructors.

Despite the challenges, many experts say the technology is here to stay and that the focus should be on establishing guidelines and safeguards for its use – while others point to its positive potential.

Faculty of Arts & Science writer Chris Sasaki spoke with six U of T experts about the impact of generative AI tools – and the ethical questions posed by the new technology.

Ashton Anderson

Assistant professor, department of computer science

We are increasingly seeing AI game-playing, text generation and artistic expression tools that are designed to simulate a specific person. For example, it is easy to imagine AI models that play in the style of chess champion Magnus Carlsen, write like a famous author, or interact with students like their favourite teacher’s assistant. My colleagues and I refer to these as mimetic models – they mimic specific individuals – and they raise important social and ethical issues across a variety of applications.

Will they be used to deceive others into thinking they are dealing with a real person – a business colleague, celebrity or political figure? What happens to an individual’s value or worth when a mimetic model performs well enough to replace that person? Conversely, what happens when the model exhibits bad behaviour – how does that affect the reputation of the person being modelled? And in all these scenarios, has consent been given by the person being modelled? It is vital to consider all of these questions as these tools increasingly become part of our everyday lives.

Paul Bloom

Professor, department of psychology

What ChatGPT and other generative AI tools are doing right now is very impressive and also very scary. There are many questions about their capabilities that we don’t know the answers to. We don’t know their limits – whether there will be some things that a text generator is fundamentally incapable of doing. They can write short pieces, or write in the style of a certain person, but could they write a longer book?

Some people don’t think they’ll be capable of a task like that, because these tools are statistical deep-learning systems – they generate text by predicting, word by word, what comes next. But they lack the fundamentals of human thought. And until they possess those fundamentals, they’ll never come close to writing like we do. We have many things they don’t: we have a model of the world in our minds, mental representations of our homes, our friends. And we have memories. Machines don’t have those and until they do, they won’t be human – and they won’t be able to write, illustrate and create the way we do.

Paolo Granata

Associate professor, Media Ethics Lab; book and media studies, St. Michael’s College

AI literacy is key. Whether something is viewed as a threat or an opportunity, the wisest course of action is to comprehend it. For instance, since there are tasks that AI does more effectively than humans, let’s concentrate on tasks that humans do better than AI. The emergence of widely accessible generative AI technologies should also motivate educators to reconsider pedagogy, assignments and the whole learning process.

AI is an eye-opener. The function of educators in the age of AI has to be re-evaluated – educators should be experience-designers rather than content providers. In education, the context is more important than the content. Now that we have access to such powerful content producers, we can focus primarily on a proactive learning approach.

Valérie Kindarji

PhD candidate, department of political science

While public focus has been on the disruptive AI technologies themselves, we cannot forget about the people behind the screen using these tools. Our democracy requires informed citizens with access to high-quality information, and digital literacy is crucial for us to understand these technologies so we can best leverage them. It is empowering to have access to tools which can help spark our creativity and summarize information in a split second.

But while it is important to know what these tools can do to help us move forward, it is just as important to learn and recognize their limitations. In the age of information overload, digital literacy can provide us with pathways to exercise our critical thinking online, to understand the biases impacting the output of AI tools and to be discerning consumers of information. The meaning of literacy continues to evolve with technology, and we ought to encourage initiatives which help us learn how to navigate the online information ecosystem. Ultimately, we will be better citizens and neighbours for it.

Catherine Moore

Adjunct professor, School of Cities; Faculty of Music

Would seeing a credit at the end of a film, ‘Original score generated by Google Music,’ alter my appreciation of the score? I don't think so. Music in a film is meant to produce an emotional impact. That’s its purpose. And if a score created by AI was successful in doing that, then it’s done its job – regardless of how it was created.

What’s more, generative AI “composers” raise the questions: What is sound; what is music? What is natural sound; what is artificial sound? These questions go back decades, with people capturing mechanical sounds or sounds from nature. You speed them up, slow them down. You do all sorts of things to them. The whole electro-acoustic music movement was created by musicians using technology to manipulate acoustic sounds to create something new.

I see the advent of AI-generated music as part of a natural progression – a long line of music creators using new technologies to create and produce, in order to excite, intrigue, surprise, delight and mystify listeners the way they always have.

Karina Vold

Assistant professor, Institute for the History & Philosophy of Science & Technology; Centre for Ethics; Schwartz Reisman Institute for Technology & Society

The progress of these tools is exciting, but there are many risks. For example, there’s bias in these systems that reflects human bias. If you asked a tool like ChatGPT to name ten famous philosophers, it would respond with ten Western male philosophers. And when you then asked for female philosophers, it would still only name Western philosophers. GPT-4 is OpenAI’s attempt to respond to these concerns, but unfortunately, they haven’t all been addressed.

In his book On Bullshit, [moral philosopher] Harry Frankfurt argues that “bullshitters” are more dangerous than liars, because liars at least keep track of their lies and remember what’s true and what’s a lie. But bullshitters just don’t care. Well, ChatGPT is a bullshitter – it doesn’t care about the truth of its statements. It makes up content and it makes up references. And the problem is that it gets some things right some of the time, so users start to trust it – and that’s a major concern.

Lawmakers need to catch up in terms of regulating these generative AI companies. There’s been internal review by some companies, but that’s not enough. My view is there should be ethics review boards and even laws regulating this new technology.
