Why We Urgently Need to Rethink Coursework

My students were initially reluctant to use AI to help complete assignments. I’d like to think this is because the value of ‘integrity’ is so ingrained in the fabric of the school that any use of AI at all is considered a complete ‘no-no’! The harsh reality, and the message I give my students, is:
“You are competing against students who are using AI.”

Does this mean students should be using generative AI to do the work for them? Not if it breaks the rules, no. But does every student know how to reap the immense benefits of AI without breaking the rules? And even if students stay within the rules, do they have an unfair advantage over peers who don’t use AI?

I’ve been considering these questions as students have been completing their A-Level Computer Science coursework.

The Current Guidance: is it enough?

I welcomed OCR’s Ceredig Cattanach-Chell’s advice on how students can, and must not, use AI (OCR blog). In summary:

  • AI can be used to generate ideas and support the coding project, provided it is well documented.
  • Use of AI must not be disguised or substantial.

However, with the emerging capabilities of AI, I think we need to go further. There is one additional, critical, and arguably legitimate way that students could use AI to gain an advantage: assessment.

The Hidden AI Advantage: Pre-Submission Assessment

Teachers are not allowed to give specific feedback to students. To do so would be malpractice and risks serious repercussions. Once a project has been submitted and marked, students are not allowed to make further changes, period. However, what is there to stop students from giving AI their project along with a rubric and a suitable prompt to get an assessment of their work before the final hand-in?

AI does a fine job of marking and pointing out areas for improvement. I should know, because I tried it with a project I wrote. With a few lines of code, I was able to provide a project, rubric and instructions to an AI model, with the AI-generated assessment comments output to a Google Docs document.
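As a rough illustration of what those “few lines of code” might look like, here is a minimal sketch that bundles a project write-up and a rubric into a single prompt for an AI model. The function name and prompt wording are my own assumptions, not the exact script described above, and the model call and Google Docs export are indicated only as comments.

```python
def build_assessment_prompt(project_text: str, rubric_text: str) -> str:
    """Combine a coursework project and its marking rubric into one prompt."""
    return (
        "You are marking an A-Level Computer Science project.\n"
        "Using the rubric below, identify strengths and weaknesses for each "
        "criterion and suggest a mark with justification.\n\n"
        "--- RUBRIC ---\n" + rubric_text + "\n\n"
        "--- PROJECT ---\n" + project_text
    )

# Hypothetical usage: in practice the text would be read from the
# student's documentation and the exam board's marking criteria.
prompt = build_assessment_prompt(
    project_text="Tournament manager built with PHP and MySQL...",
    rubric_text="Analysis: justify a computational approach...",
)
# response = ...  # send `prompt` to the chosen AI model's API
# ...then write the response text out to a Google Docs document via its API
```

The interesting part is not the code, which is trivial, but that nothing in it requires any specialist skill beyond basic scripting.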

Of course, a note of caution: if a teacher were to do this with a live student project using a public AI model, they would need to consider the consequences of sharing student data with a third party. Do they have permission to do so? Are they aware of how the data could be used or stored by the third party? JCQ also state that an “AI tool cannot be the sole marker.”

How Effective Is AI at Marking?

While AI marking isn’t flawless, it does an excellent job of justifying strengths and weaknesses. For example:

Strength:
“You justify why it is amenable to a computational approach, particularly by explaining how manual tournament management is time-consuming and prone to error.”

Weakness:
“The justification is a little generic. For example, you mention scalability and only having to build a single solution (website approach). It would add greater depth if you related this specifically to your chosen language stack and database. Why is PHP and MySQL well-suited for scalability (if that is your argument)? Why would creating Android/iOS versions be harder or time-consuming?”

Students could use such feedback to address overlooked gaps in their coursework and better justify the marks awarded for their work.

The Growing Divide: AI-Savvy vs AI-Shy Students

Students with the AI advantage will use it quite legitimately to:

  • Generate ideas and clarify thoughts
  • Help to write and optimise code
  • Assess their coursework to suggest improvements

To ensure fairness, all students need to be made aware of how to use AI effectively and ethically. And, frankly, they aren’t.

Despite having highly competent student programmers in the current cohort, no-one has (as far as I am aware!) decided to make use of AI for assessing their work.

Furthermore, to ensure fairness, all students need the confidence to use AI and to correctly acknowledge its use, removing the fear of “doing it wrong” and being caught out for malpractice.

There is a risk that the growing divide between AI-savvy and AI-shy students will exacerbate inequalities. Those with regular access to AI tools and the skills to use them effectively will naturally gain an advantage. Less tech-literate students may struggle to compete.

The Role of Teachers and Schools

It’s not just students who need to adapt—teachers and schools must as well.

  • Teacher training: Schools need to equip teachers with AI literacy to effectively guide students on legitimate AI use and identify misuse.
  • Update policies: As AI capabilities evolve, schools should update their policies on AI, outlining what is and isn’t allowed. JCQ have already updated the “AI Use in Assessments” policy.
  • AI detection tools: While AI-detection technology exists, it is not fool-proof. Schools may need to rely more on detection tools and interviews to verify a student’s comprehension, and factor this into staff workload.

Possible Solutions to Level the Playing Field

To ensure fairness, schools could adopt the following measures:

  1. Educate all students on AI use:
    Ensure that every student knows they are competing against peers using AI—and is equipped to do the same, ethically.
  2. Standardise AI literacy:
    Teach students how to use AI appropriately for idea generation, debugging, and assessment. Provide clear guidelines on what constitutes legitimate use.
  3. Reinforce the consequences of misconduct:
    Make students aware of the severe consequences of presenting AI-generated work as their own. The threat of being withdrawn from exams or disqualified from coursework should be a strong motivator for integrity.

Future-Proofing Assessment

The current nature of portfolio assessment, which demands iterative versions of a project along with explanations of the changes, does make it harder to hide AI overuse. However, could we go further?

If coursework is to remain viable, we may need to rethink how we assess it.

  • In-class challenges: Timed, supervised, and fully monitored tasks to minimise AI misuse.
  • Oral defences: After coursework submission, students defend their work to teachers, testing their comprehension and authenticity.

Long-Term Risks: Are We Still Assessing the Right Skills?

There is also the long-term risk of students becoming overly reliant on AI.

  • Loss of fundamental skills: If students use AI too extensively, they risk losing core competencies in programming, problem-solving, and research.
  • Assessment validity: Coursework may no longer measure individual ability but instead reflect a student’s AI proficiency.

A Radical Thought: Ditch Coursework Entirely?

From another perspective, I wonder what it would look like if we embraced AI fully.

What if students could use all the AI tools available in any way they liked and were tasked with completing their programming project in 1–3 weeks? Would this not be more akin to the real-world environment we are meant to prepare them for, where professionals have full access to, and control over, the extent to which AI is used?

Or would it be more like the “All Drug Olympics” sketch, where athletes, absurdly, compete with every enhancement imaginable—making it farcical? (Watch it here)

Your Thoughts?

What do you think?

  • Are there further ways students could (arguably) legitimately use AI to gain an advantage?
  • Can we mitigate AI overuse without stifling its legitimate benefits?
  • Should we abolish coursework entirely in favour of exams?

How is your school tackling the AI challenge? Share your thoughts on how, or if, we can future-proof coursework.
