Being smarter than AI – rethinking our assessment design?

The discussion channels used by educational developers have been dominated by the topic of ChatGPT since the New Year. There are two dimensions to this: concerns over academic integrity and the opportunity ChatGPT and other AI tools bring to assessment.

Addressing anxieties about academic integrity is the more urgent task. Our academics need to be aware of the risks and have strategies for managing them, so that their confidence in managing the integrity of assessment is not undermined. Exploring the opportunity that AIs like ChatGPT bring is less urgent. However, the latter supports the former: the design of smart assessment will ensure that we are assessing the right things, namely actual student work and criteria that evidence learning, rather than mere engagement with, or delivery of, assessment artefacts or products.

Smart assessment is authentic assessment. By this I mean we need to use methods that challenge students to think through and deal with problems that demonstrate personal agency within the task; have currency; require inquiry, investigation and deduction; are open-ended rather than definitive; and are likely to be socially mediated or negotiated.

This is not new. Smart authentic assessment is reflected in a curriculum designed around student-centred active learning and the above list echoes the components of authentic learning as proposed by Rule in her 2006 editorial ‘Components of Authentic Learning’ where her review of literature defines authentic learning as,

real-world problems that engage learners in the work of professionals; inquiry activities that practice thinking skills and metacognition; discourse among a community of learners; and student empowerment through choice.

Rule, 2006

There are significant implications in revising or adopting an active and authentic assessment strategy, not least that this qualitative approach needs to be manageable. A quick 'solution' to manageability may be to give an assessment two components, perhaps adding a small qualitative element alongside a more substantial one.

The ideal assessment method, the viva voce, is not something that scales. However, it gives us a clue to other manageable methods: the viva validates a more substantial piece of work, giving the assessor confidence in the authorship of the dissertation.

With that model in mind, we can be less concerned about the authorship of a substantial submitted artefact if we can find straightforward and manageable ways to assess a student's understanding of their submission. Methods including self- and peer-assessment may be helpful here, as can techniques such as multiple-choice questions or short essays. Using and assessing peer feedback is another approach that springs to mind.

Smart assessment focuses on the experience of learning and on a student's justification or defence of their argument or rationale. We need to assess the student's role in the experience, e.g. solving a current problem, undertaking a project (the process, not the product), examining or conducting context-specific case studies, assembling a portfolio of evidence and analysing it, and so on. We need to ask:

  • What process did you follow and why?
  • What decisions did you take in your study, why, and which decisions were good/bad and why?
  • What theoretical frameworks did you use? Why? Were they reliable? Why/not?

This requires that we focus on:

  • the learning and assessment process and accumulated evidence, rather than the end-products of learning (learning is an active state, not an end state!)
  • marking criteria that reflect the student's explanation of and reasoning for their actions: how and why they arrived at a conclusion or solution, and its implications (further actions needed)
  • embracing and valuing nuance and complexity (found in current and local contexts or situations), and the student's active engagement with and agency over the task
  • the justification of selected references (students must decide what was most useful and why)
  • assessment that accommodates diversity, not uniformity, with the diversity of possible responses reflecting the diversity of the participants.

These foci help us to imagine assessment methods that reflect the richness of our subject, are part of a learning-centred design, and offer manageable and rich experiences for all.

Reference

Rule, A. (2006). Editorial: the components of authentic learning. Journal of Authentic Learning, 3(1), 1–10. Online at: https://dspace.sunyconnect.suny.edu/bitstream/handle/1951/35263/editorial_rule.pdf

About Andrew Middleton

NTF, PFHEA, committed to active learning, co-operative pedagogies, media-enhanced teaching and learning, authentic learning, postdigital learning spaces. Key publication: Middleton, A. (2018). Reimagining Spaces for Learning in Higher Education. Palgrave.