Prohibiting the use of Generative AI in coding assessments

Description: While it is generally preferable to find ways to work with generative AI when designing assessments, this may not always be possible given constraints such as time. The assessment example below, contributed in consultation with James Salamy, chief examiner of the unit ECE2071, provides an overview of approaches taken in programming assessments where AI use was not permitted.

Assessment types applicable: Software engineering / entry-level language learning

Unit context: Entry-level programming relies on students coming to terms with low-level concepts: the syntax used to instruct computers to perform tasks, and the control-flow constructs needed to achieve particular behaviour. For the most part this requires a degree of rote learning, which can be tedious. Assessments in which students learn these syntax rules in context, while implementing or solving another task, offer a more attractive route for teaching programming.

However, the ability of coding assistants to convert free-text instructions into code means that students can often bypass these learning opportunities and never acquire the fundamental knowledge required to perform higher-level programming tasks, or to interpret and interrogate code generated by a coding assistant.

[Figure: an example ChatGPT output in response to the prompt "Write a C function to count the number of vowels in a sentence". ChatGPT can easily solve assessment tasks used to teach syntax and programming rules.]
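By way of illustration, the following is a minimal sketch of the kind of solution such a prompt produces. The exact output varies between models and sessions, so this is an assumed reconstruction rather than a captured transcript:

    #include <stdio.h>
    #include <ctype.h>

    /* Count the vowels (a, e, i, o, u, in either case) in a string. */
    int count_vowels(const char *sentence)
    {
        int count = 0;
        for (const char *p = sentence; *p != '\0'; p++) {
            switch (tolower((unsigned char)*p)) {
            case 'a': case 'e': case 'i': case 'o': case 'u':
                count++;
                break;
            }
        }
        return count;
    }

    int main(void)
    {
        printf("%d\n", count_vowels("The quick brown fox")); /* prints 5 */
        return 0;
    }

A student can submit such an answer without ever engaging with the loop, the case handling, or the string traversal the task was designed to teach.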

Learning objectives: This poses a challenge when learning objectives are low level, e.g. "Demonstrate an understanding of standard data types, control statements, functions, pointers, strings, arrays of pointers, structures, linked lists, binary tree data structures and dynamic memory allocation."

It is clear that this assessment task either needs a redesign, or that AI use must be restricted.

How AI use can be restricted:

Students could be given the following AI usage statement and context:

“Generative AI tools are restricted for certain functions in this assessment task. In this assessment, you can use generative artificial intelligence (AI) only in order to develop your understanding of your code (e.g. ‘explain how this works to me’). Any use of generative AI must be appropriately acknowledged. You must not use generative artificial intelligence (AI) to generate any materials or content for submission.”

How this is enforced:

In final assessments, invigilation is used and access to tools is restricted. In project-based units, tasks are completed in class and marked via interviews in which students are required to explain how they solved the assigned task, ensuring that they meet the learning objectives.

Things to consider when adapting to your context:

Is there an easier way? Perhaps re-framing this assessment as a Socratic conversation, in which students learn about syntax in conjunction with the generative AI tool, would be more appropriate; or students could be required to discover syntax errors or logic flaws in AI-generated code (a sketch of such an exercise follows below). Alternatively, increasing expectations around the scope of the tasks to be solved (e.g. through multi-stage, project-like assessments) may be a good way to prepare students for real-world practical use cases.
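To make the "discover the flaws" variant concrete, a minimal sketch is given below. The snippet and its bug are invented for illustration, not real AI output, but they are typical of the plausible-looking defects generated code can contain:

    #include <stdio.h>
    #include <string.h>

    /* Intended to reverse a string in place; students must find and
     * fix the deliberate flaw. */
    void reverse_string(char *s)
    {
        size_t len = strlen(s);
        for (size_t i = 0; i < len / 2; i++) {
            char tmp = s[i];
            /* Flaw: the last character sits at index len - 1, so the
             * mirror of index i is len - 1 - i. Using len - i swaps in
             * the terminating '\0' on the first pass and truncates the
             * string. */
            s[i] = s[len - i];
            s[len - i] = tmp;
        }
    }

    int main(void)
    {
        char text[] = "hello";
        reverse_string(text);
        printf("%s\n", text); /* prints an empty line until the flaw is fixed */
        return 0;
    }

Students can be asked to predict the output, run the program, and explain why the actual behaviour differs; doing so demands exactly the low-level understanding the original learning objectives target.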
