ChatGPT 101: The risks and rewards of generative AI in the classroom

U of T's Susan McCahan says it's important for teachers to be clear on what technology is – and isn’t – allowed in their courses
A smartphone displaying OpenAI's ChatGPT app

(photo by Jakub Porzycki/NurPhoto/Getty Images)

The rise of generative artificial intelligence tools like ChatGPT is prompting many educators to reimagine the role of technology in the classroom.

Susan McCahan (supplied image)

At the University of Toronto, Susan McCahan, vice-provost, academic programs and vice-provost, innovations in undergraduate education, has been on the front lines of the response to this fast-evolving technology.

McCahan, a professor of mechanical and industrial engineering in the Faculty of Applied Science & Engineering, says the proliferation of generative AI tools presents both opportunities and challenges for higher education.

Her office is supporting projects on the applications of generative AI in teaching and learning and providing guidance to help instructors navigate this emerging technology.

She recently spoke to U of T News about the lessons that have been learned about the academic implications of generative AI and the big questions that still remain.


What are some of the ways generative AI is impacting teaching and learning?

Large language models have significant implications for how we teach coding and writing because they will change the way people code and write – particularly when it comes to routine tasks.

A lot of the writing I do in a day isn’t deeply intellectual. It’s the kind of writing that LLMs do pretty well. However, an LLM probably isn’t going to write as well as I do when I’m writing an academic paper, because of my knowledge and understanding of the field and my own unique perspective.

Right now, the technology is pretty good at writing at the level of a first-year or second-year student, but it’s not up to what would be expected of a student in their third or fourth year.

The biggest challenge is making sure students are still progressing to that third- or fourth-year level if they are taking shortcuts in their first years of university – or even high school or middle school.

People have compared this to a calculator, but I don’t think that’s the right analogy because a calculator is a very domain-specific tool and generative AI has much broader applications.

There was an existential crisis in math education in the 1980s when calculators capable of symbolic manipulation came along. Educators questioned if we should teach our students how to do differentials and integrals if these programs can solve those complex equations. Yet, we came through that, and we still teach students how to add and subtract, multiply and divide, do differentials and integrals. We also teach students how to use these symbolic manipulation programs in ways that allow them to go deeper than if they were to do it all by hand.

I think we will come to a point where people recognize when it is useful to use AI to help and when it is not going to be very helpful. Hopefully, we will arrive in a place where it allows people to advance through the basics faster and move on to more complex writing and coding.

Does U of T consider the use of generative AI tools to be cheating?

We expect students to complete individual assignments on their own. If an instructor decides to explicitly restrict the use of generative AI tools, then their use would be considered an “unauthorized aid” under the Code of Behaviour on Academic Matters. This is considered an academic offence and will be treated as such.

Some might ask why we don’t classify this as plagiarism. One of the biggest misconceptions that people have is that LLMs take what’s on the internet, mash up the text and ideas and repackage it as a compilation. However, that’s not how the technology works.

Tools like ChatGPT are trained on large amounts of online materials to identify patterns of speech and make predictions about words most likely to go together. If I say, “one, two, three,” it knows that “four” probably comes next. It knows “four” is a noun, but it doesn’t associate the concept with a square or the horsemen of the apocalypse.

When you enter a prompt into ChatGPT, it’s not combing through information to produce sentences or paragraphs or ideas – it’s making word-by-word predictions that imitate patterns of speech around a subject. That’s why we don’t treat the use of these tools as plagiarism; we treat it as an unauthorized aid.
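The word-by-word prediction McCahan describes can be illustrated with a toy sketch. The bigram model below is far simpler than the neural networks behind tools like ChatGPT – it just counts which word most often follows each word in a tiny training text – but it captures the same basic idea: the model generates the statistically likely next word, with no underlying concepts involved. All names and the training text here are illustrative.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word.
# Real LLMs use neural networks trained on vast corpora, but they share
# the same generate-one-token-at-a-time loop.
training_text = "one two three four one two three four one two three five"

def train_bigrams(text):
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    # Pick the statistically most likely next word; no understanding of
    # what "four" means is involved, only observed word patterns.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams(training_text)
print(predict_next(model, "three"))  # "four": it follows "three" most often
```

As McCahan notes, the model "knows" that "four" tends to follow "one, two, three" only because of the patterns in its training data, not because it associates the word with any concept.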

What resources are available to help instructors adapt to this emerging technology? Are there any best practices they should follow?

We’ve put together an FAQ addressing some of the considerations around generative AI, while providing instructors with resources to help them communicate what technology is – or isn’t – allowed in their courses.

I think we’re in a moment when it’s really important for faculty to be really clear on their syllabi about whether they explicitly allow it or explicitly don’t. If it is permitted, it should be clear how AI tools can be used, for what assignments and to what degree, and if students must explain, document or cite what tools they use and how.

This is new, and neither faculty nor students are altogether clear whether this will be the next Wikipedia – something everyone uses but no one talks about anymore – or whether it should never be used because it’s just unreliable.

What are some other considerations around the use of generative AI in an academic context?

LLMs often get things wrong – and very confidently wrong. For example, back in January, I asked ChatGPT for my biography. It told me that I had worked at the University of British Columbia and I was a leading researcher in biomedical engineering – things that seem believable, but are factually untrue. The technology has improved since then, but LLMs still get things wrong in ways that are not immediately apparent or obvious. These are called “hallucinations,” and they can be so subtle that they’re hard to detect unless you really know the subject.

Ultimately, the student is responsible for the material they submit, and if they’re submitting material that is factually wrong, they’re responsible for it. You can’t blame the chatbot, the same way the chatbot can’t take credit. It’s not like a team project where you’re working with another student, and you can say, ‘It wasn’t me, it was my partner.’ If your partner is AI, you are responsible for all of the work you submit whether or not there are parts that were co-created with AI.
