
Half of all students use generative AI - here's why we should let them



Professor Hassan Ugail is Director of the Centre for Visual Computing and Intelligent Systems at the University of Bradford. Here he comments on a report by the Higher Education Policy Institute that shows more than half of all students now use generative AI - he argues we should let them.


Today (Thursday February 1, 2024), the Higher Education Policy Institute (HEPI) published its findings into how university students are interacting with generative AI. Its report, titled Provide or Punish? Students’ views on generative AI in higher education, found that more than half of students polled admitted to using generative AI for help on assignments.

Since ChatGPT was released in November 2022, generative AI has caused an explosion of interest. Inevitably, large language models such as ChatGPT have emerged as a ubiquitous tool in the academic landscape.

HEPI’s survey is the first UK-wide study to explore students’ use of generative AI since ChatGPT was released. It polled more than 1,200 undergraduates. Its finding - that the use of generative AI has already become normalised - should not come as a surprise.

Like any new technology, generative AI can be used for both good and ill. However, while some people have argued that students should not use large language models such as ChatGPT and Google Bard, I would argue the opposite.

Personally, as an academic whose day-to-day work involves interacting with AI, I use generative AI models like Copilot, Stable Diffusion and ChatGPT to kickstart ideas for projects and to generate snippets of computer code while writing large programs. To me, the argument is simple: generative AI is a tool (much like the calculator) that can not only save us time but advance our understanding.


I could spend a week writing a piece of computer code, fixing syntax errors along the way. ChatGPT will do it for me in a matter of minutes. The result won’t be perfect, but that is precisely the point at which human oversight becomes important.

Likewise, students can benefit from using generative AI, so long as they do not pass its output off as their own work and make clear why they have used it.

We should recognise generative AI for what it is: a vast knowledge repository, rather than perceiving it (as some have) as an intelligent entity.

It serves as an invaluable resource, akin to a sophisticated search engine. Embracing this reality, we are beginning to advocate the constructive use of ChatGPT among students rather than outright prohibition. To address plagiarism concerns, we have recently introduced a practice whereby students are encouraged to submit both their original work and the versions generated by ChatGPT. This approach not only significantly reduces the likelihood of plagiarism but also helps students recognise the limits of these tools and appreciate the power of their own creative thought.

ChatGPT, and generative AI in general, is a boon for humanity. But, just like Google, you cannot trust everything that comes from it.

Professor Hassan Ugail

When calculators were first introduced, people were against students using them. Now all students use them. We simply changed our assessment methods. It’s the same with ChatGPT. I think we should let students use it.

In my view, we should embrace it. 

What needs to happen now is an acceptance of the reality that generative AI is here and that students are using it. Prohibition will not work. Therefore, academic institutions need to implement clear policy guidelines on the use of AI, because at the moment, there is confusion. According to the HEPI survey, the majority of students consider it acceptable to use generative AI for explaining concepts (66%), suggesting research ideas (54%) and summarising articles (53%), but only 3% think it is acceptable to use AI text in assessments without editing.

A majority of respondents (63%) thought their institution had a ‘clear’ policy on AI use, with only 12% thinking it was not clear. Two-thirds of students (65%) also thought their institution could spot work produced by AI.

Students also think institutions should provide more AI tools. While three in 10 (30%) agree or strongly agree that their institution should provide such tools, fewer than one in 10 (9%) say it currently does so.

Only a fifth of students (22%) are satisfied with the support they have received on AI. Most students (62%) are neutral or say they do not know.

But the kicker is that 73% of students expect to use AI after they graduate. If that is the case, we surely need to ensure the responsible use of generative AI.

Josh Freeman, Policy Manager at HEPI, has rightly pointed out the danger of a ‘digital divide’ emerging, with some students completely at ease with generative AI and others having no contact with it whatsoever.

Today’s world has changed. It is no longer a question of if you will encounter AI but when and where. AI has already changed all our lives, and it will continue to do so in the years ahead. As centres of learning, schools, colleges and higher education institutions have a responsibility to ensure students are educated in the effective use of AI.