Twalking as digital placemaking #twalk #socmedhe17

This 10 minute screencast introduces the key ideas of the twalk concept – learning walks with integrated tweetchats – and the pedagogic rationale underpinning walking, talking, tweeting and thinking.

The video is part of the Twalk Toolkit being developed on the Media-Enhanced Learning Special Interest Group site in time for our workshop at the Social Media for Learning in Higher Education conference (#socmedhe17) at Sheffield Hallam University next week.


From outside-in to inside-out (and possibly back-to-front) rethinking #feedback

Back in the 1990s I was a young developer working in the Learning & Teaching Institute at Sheffield Hallam University. One project that I was not directly involved with, but which I was present for, was on developing the use of feedback. Led by Richard Higgins and Peter Hartley, publications from this work are much cited. I don’t think I’ve reread any until now, but clearly it is ingrained in me! Or is that because Peter was my tutor for a while some years later? Anyway, I have just reread their paper ‘Getting the message across’ (Higgins, Hartley & Skelton, 2001).

It takes a communications perspective on the design of good feedback, challenges the QA preoccupation with treating the quality of feedback as predominantly a matter of process, and argues for a student-centred view of feedback design.

“the process of feedback as communication is inherently problematic… it is impossible to investigate how an outside influence impacts upon a process if the internal dynamics of that process are not understood — that is, if the true nature of the process remains hidden (or simply assumed).” p. 272

The following is particularly pertinent to the work that I am currently conducting,

“We should be asking how the tutor comes to construct the feedback, how the student understands the feedback (how they make sense of it), and how they make sense of assessment and the learning context in general.” p. 273

As discussed in previous posts, assessment and feedback is experienced differently by each student. I argue that we should recognise this as we design the task and engage students with it. Higgins et al. seem to be saying something similar in the following.

“Tutors [cannot] assume that students will understand a list of assessment criteria. Feedback may need to be more dialogical and ongoing. Discussion, clarification and negotiation between student and tutor can equip students with a better appreciation of what is expected of them, and develop their understandings of academic terms and appropriate practices before or as they begin to write. Perhaps we need to shift the emphasis to ‘feeding forward’ into a piece of work, rather than simply ‘feeding back’. ” p. 274

This is where my ‘back-to-front’ comes in – let us focus more on how a student comes to a task  – how they are supported in navigating it – before we dive in to work out why there may be a problem with the feedback, wherever it occurs.

Reference
Higgins, R., Hartley, P. & Skelton, A. (2001). Getting the message across: the problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269-274. https://doi.org/10.1080/13562510120045230


“Things that the mind already knows” – what is an assessment? #assessment #briefing

The American artist Jasper Johns, whose retrospective exhibition at the Royal Academy closed over the weekend, was fascinated with exploring the theme ‘Things that the Mind Already Knows’. He is perhaps best known for his interest in iconographic images, especially the American national flag ‘the stars and stripes’.

[Image: Jasper Johns, 'Flag']

Here is an example. His interest in representation and interpretation is a subject that has been explored by many others, notably John Berger in his book and 1970s television programme Ways of Seeing. If I asked you about the above image you could tell me that it is the American flag, and we could leave it at that. I could tell you that it is actually a photograph in a webpage. You could tell me that it’s a photograph from an exhibition on your screen. All of these, and many other descriptions, would be factually right. Conceptually, you might tell me that it is this, that or the other. I’m sure this isn’t new to you!

The reason I am discussing this is that on Saturday morning I posted about the importance of briefing students clearly and in the afternoon I walked into this exhibition. It was apparent that Johns was preoccupied with similar matters to do with subjectivity and interpretation.

Looking at that image as a metaphor for assessment, here are three questions: what do I bring to understanding ‘the question’? What does the person next to me bring to their interpretation? How adequate is my photograph in representing what is needed?

Picking up on the last question first, are the floor, the lighting, the wall, or the title card significant? Are the other pictures in the gallery (things we have also experienced, or will come to experience) significant? The first two questions above indicate how the student (or the viewer) brings their own experience and aspirations to the question (the picture) and these must have some legitimacy. When we design an assessment, to what extent do we value the student's own experience? Are we flexible in terms of methods, modes, contextualisation, or opportunities to negotiate criteria or their weighting, for example?

Let’s focus on the implications of all this for adequately setting a question or briefing an assessment task. As already discussed, the task and the feedback it generates can only be as good as the way the assessment is briefed. However the academic conceives the question, it is nothing until it is perceived and then experienced (i.e. we can understand assessment in terms of Lefebvre’s Spatial Triad – assessment is a space that will be navigated and could be negotiated).

However well-crafted the assessment task is, it needs to be checked and tested by others who may read it differently before it is used. Further, in terms of formative activities and coursework assignments, time must be designed into the task and exploited so that students are supported as they come into the task and as they relate it to their own experiential frameworks, past, present and future. On a practical basis this means giving the students real opportunities to check their understanding of the task through tutorials, discussions with peers, or opportunities to draft responses and to reorientate their energies. An early tutorial on a dissertation assignment, for example, creates a decisive moment for restating, reconsidering and reorienting oneself to the task.

The danger of invalidating an assessment through inadequate opportunities to explore the question is real. As an example, I remember when I did my Geography A Level at school. I undertook a summer project which was a key component of the exam. On returning to school, having had no opportunity to review the task, I was told that what I had spent the summer doing was a waste of time. I dropped out of school soon after, disillusioned, albeit with a project of which I was immensely proud but which had been misconceived. Years later I returned to education and discovered I was quite capable academically, but by then my misconception of the task had had serious consequences nevertheless. I felt cheated and, to use the parlance, dissatisfied.

So, reflecting on my earlier post about male students who tell you they are “OK” and that they understand the task you have given them, I urge you to check in with them a little later to make sure that they really do see the task in the way you had conceived it. Explore the canvas and its meanings.


Learning obscured due to overcrowding

Following on from my previous post on the criticality of students being clear about their assessment, I note that Black & Wiliam (1998, pp. 9-10), while discussing self-assessment, make similar points about the engagement of students with their assessment.

Pupils can only assess themselves when they have a sufficiently clear picture of the targets that their learning is meant to attain. Surprisingly, and sadly, many pupils do not have such a picture, and appear to have become accustomed to receiving classroom teaching as an arbitrary sequence of exercises with no overarching rationale… When pupils do acquire such an overview, they then become more committed and more effective as learners: their own assessments become an object of discussion with teachers and with one another, and this promotes even further the reflection on one’s own ideas that is essential to good learning.

Part of my work at the moment is about reducing the summative load: modules are frequently overassessed. They are over-crowded with little space for adequate briefing and critical engagement by students and staff with the assessment and how it relates to learning outcomes.

Developing clarity about learning outcomes and their assessment requires more time, and the right kind of space, than is often allowed. Clarity develops not only through checking and rechecking with tutors, but through engagement with activities that prompt self-reflection, especially where peer co-operation is involved. In this way students can identify misconceptions, and start to address them, before they become critical.

Reference

Black, P. & Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. London: GL Assessment.


All clear? It’s more than #feedback

I am in the middle of a large programme of work at my university addressing assessment and feedback. I have always understood that addressing the enhancement of academic practice in this area is more complex than some of the hygiene-focused discourse suggests, and the conversations we are having with academic staff confirm that designing and delivering effective assessment is indeed a complex matter.

The methodology we are using includes running focus groups with academic course teams. This is turning out to be a real privilege as we get under the bonnet to examine what appears to be student dissatisfaction with aspects of their assessment and feedback experience. The beguiling thing is that we are hearing some excellent accounts of academic teams doing things by the book, yet receiving poorer than expected scores in the NSS. Many comments and stories are coming out of our rich discussions which we need to work through. However, one of these themes is about the attention we give to ‘the feedback problem’. The more I listen, the more I wonder if this should be reframed as ‘the briefing problem’.

Here’s an example.

Almost lost in the focus group conversation, one academic says, “I think it could be about gender.” On this imbalanced course, where there is a high proportion of female students (male students make up probably about 20%), and where incidentally there seems to be a dominance of white male teachers, you can see how our minds might turn to gender equality. Instead the conversation turned to male student confidence and bluster: how male students will often rapidly assimilate assignment briefs and, being self-motivated but resistant to acknowledging weakness, will engage quickly, apparently having formed a good mental picture of what is required and what they will do. So, when the academic asks, “Is everyone clear?”, those male students are likely to give the thumbs up, say, “I’m fine”, and get on with it. In the meantime the academic’s attention turns to those who seek further clarification; those who need to talk things through and mull things over. Those who are prepared to admit they don’t understand.

The outcome of a briefing phase such as this is hopefully a bunch of happy students with well-formed ideas of what is expected and at least the beginnings of ideas about how they are going to respond to the brief. At this point let’s put the gender dimension to one side.

Actually what we have is a bunch of students who have constructed a diverse set of mental maps that will, to some extent, determine what they will do in response to the task as they understand it. Their conceptions of the task are likely to be unreliable. Coursework design, where there are multiple opportunities for a student to redraw their map will help to address this, but if this goes unchecked there is a problem looming.

Let’s move to the post-task part of the story.

Students have received their marks. Some will be pleased because the mark reflects their expectations. They may not even bother picking up their feedback and, if they do, may give it a cursory glance. But let’s return to those students who knew exactly what was required and quickly formed their response. According to staff, they are often disappointed by their mark and in some cases will argue that the marking is unfair. Further, they don’t understand why the feedback is still saying they have not sufficiently met the learning outcomes. Even as they read or listen to the feedback they are given, their mental map remains as a strong interpretation filter or scaffold. They are dissatisfied. In fact some of them may feel hard done by and angry.

From the academic point of view, they have designed a good task and explained what is required. They have spent time clarifying this with those students who admitted they were unclear or confused. They have marked the assignment consistently because they have good moderation methods. They have given good feedback that explains misconceptions and suggests how students can improve in their next task. Where, they ask, are we going wrong? Why do we not improve in the NSS for assessment and feedback?

In this story I think we need to look at the beginning and not at the end. There are two simple points that can be made:

  • Students can benefit from rehearsing with formative assessment so that they discover, before it becomes critical, the importance of reading the question and thoroughly checking their conception of a task.
  • Staff need to consider briefing as a process and not as a straightforward event that happens at the outset of a task.

I have referred to coursework, but in principle this applies to the design of any task. The act of setting the task clearly is absolutely critical to the success of all students. If only some students have the right conception of what is required, then the question is invalid and unfair. Everything that follows becomes meaningless.

The advantage of coursework, in its various forms, is that it specifically creates space to develop understanding. The academic must look at their own role in this space too, to check that every student is actually clear, even when students say they are. Only once we are sure that has happened is it appropriate to treat any remaining dissatisfaction as a feedback matter.


The problem of developing consistency in academic team innovation #SEDAconf

My colleague Helen Kay and I worked with one of our course teams last year to roll out SCALE UP (Student-Centred Active Learning for Upside-down Pedagogy). The expanded name of the learning space model is very descriptive and clarifies why a shift to SCALE UP will be a challenge for many academics. Every part of this name is a problem that needs clarification: Student-centred? Active learning? Upside-down pedagogy? (The UP refers to flipped learning, by the way). And, as the suggestions below indicate, it requires space for exploration and not simply explanation.

The physical nature of SCALE UP is double-edged: it’s a fantastic space for active learning, yet its strength comes with physical inflexibility. It is likely to be unfamiliar to staff and students, and it is pedagogically demanding for those unused to interactive teaching methods.

However, our session wasn’t about SCALE UP per se, but about working with an academic team to develop their collective capacity to adopt it. Their experience of, and interest in, teaching with it was diverse, yet each of them was faced with adopting the facility and methodology. They had to develop their practices in ways that would engage their students deeply through problem-based active learning methods.

How does an academic developer address the challenge of introducing a diverse course team to SCALE UP (and all it means), its benefits, design and methods? That was the question we needed to address a year or so ago, and it was the question we posed our workshop participants at the SEDA Conference. Before hearing a summary of their response, I will just note that Helen and I provided the 8 small groups with a Padlet board (and I invite you to add your own thoughts to it and review what others added). We also produced a set of innovation cards based on Everett Rogers’ Diffusion of Innovations model (1962/2003) for participants to use as discussion prompts.

We had 10 minutes at the end to capture the models conjured up by the eight groups. We had a good turnout, so this limited the time each group had to feed back to one minute, but I was really impressed that the constraint focused the reports we received, which are summarised here:

  • Work with the course team to identify their desired outcomes – this may be more than the specified outcomes of the development, i.e. an advantage of the development may be that it addresses other related matters too. Identify their current challenges and make the link with your offer;
  • Observe the current practice to see how much active learning is already employed in the pedagogy and how this can be enhanced and shared across the team;
  • Draw on literature and other cases where SCALE UP is being utilised;
  • Observe the method being used in other places and contexts to make the concept concrete and to demonstrate what is possible;
  • Identify relevant, evaluated and successful examples from the discipline if possible;
  • Work together as a team to restructure content so it aligns with the method;
  • Facilitate opportunities that involve students and staff in exploring the possibilities of the space together – the space creates an opportunity to consider what active learning might mean to them, what they could do differently and why;
  • Acknowledge their expertise – propose this as a solution;
  • Find examples of successful implementation within the discipline elsewhere and invite those practitioners in to demonstrate or talk about their practice;
  • Help the team identify areas of existing practice within the team that already fit;
  • Capture what comes out of the discussion around the affordances of the learning space – and synthesise this for them;
  • Explore what sort of learning will happen inside and outside of the classroom;
  • Top-down management, to some degree, and good leadership are necessary if everyone in the team is to accept the legitimacy of the development;
  • Motivation management;
  • Time and space for CPD is needed that is commensurate with the SCALE of development that needs to be made;
  • Incentives (intellectual, university, collegiate, endorsement/recognition)
  • Be inclusive and gather feedback by speaking to everyone;
  • Offer consistent ongoing support.

Thanks to everyone. This list indicates the scale of the challenge of supporting such a shift across a course team. The final message therefore is, don’t underestimate what will be involved.

The workshop slides are available here.

References

Rogers, E. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
