IBG's policy on students' use of AI tools in examination tasks

This is IBG's policy on students' use of AI tools in examination tasks, such as thesis work and other written assignments within courses.

There are many different types of AI tools, but this policy focuses primarily on the large language models (LLMs) that have come into widespread use over the past year, such as ChatGPT, Bing, and Bard.

LLMs are language models, not factual models. They are trained on large volumes of text from the internet and other sources, which may contain inaccurate information. LLMs usually reproduce this information in linguistically correct form and with great confidence, yet the response often consists of a mixture of unrelated facts. To an uninformed reader, an LLM's answer may look very good, but the accuracy of what AI tools deliver cannot be trusted, because they can combine pieces of information incorrectly.

LLMs cannot replace your knowledge or effort, but they can be helpful if used wisely. If you have difficulty expressing yourself, an LLM can provide suggestions for you to evaluate. Remember that you are ultimately responsible for the text and factual content you submit for examination.

Theses and other written examination tasks must be created by you as a student, not by an AI tool or any other person. Written course elements are crucial for developing your ability to write scientific or popular science texts, as well as exam responses and lab reports.

You need to understand your subject in order to assess the accuracy of what an LLM generates. It is therefore not sufficient to read an AI-generated summary before writing; as a student, you must search for original scientific references, read them critically, and, where appropriate, use and cite them in your work. Text generated by AI tools cannot be cited as a source in written assignments, because it is not an original source, and AI-generated text cannot be used in theses or other written course elements, because the text is not written by you.

In individual courses and course elements, it is up to the examiner to decide whether AI tools are allowed. If Ouriginal or other tools indicate that you have not written the text yourself, the usual disciplinary measures apply (see our web page on plagiarism and cheating). Since you cannot always know where an LLM has obtained its text, you risk plagiarising someone else's text if you use text from an LLM verbatim.
