A different model for reflection

At the moment I’m studying for a Master’s in Online Teaching with the Open University. This week marked the end of a course (the module is divided into four courses), meaning I handed in my assessment and turned to reflection. This course has the coolest reflection task I have ever seen.

First, for context: every week we are encouraged to reflect using a “what happened, so what, what next” framework, so reflection is baked into the course throughout.

In the final reflection, we were asked to rate each of the course topics against:

  • Our understanding of the topic.
  • Our interest in the topic.
  • How useful the topic is to our practice.

I created a quick chart to visualise mine. I left out understanding, as that would detract from the interesting bit: which topics are the most interesting and relevant.

A stacked bar chart with an X axis from 0 to 10. The scores for each topic:

Citizen science: Useful 1, Interest 1.
MCQs: Useful 1, Interest 1.
Definitions of TEL: Useful 3, Interest 1.
Conceptions of Learning: Useful 3, Interest 2.
E-moderating: Useful 2, Interest 3.
Peer marking: Useful 3, Interest 3.
Learning at scale: Useful 4, Interest 4.
Mobile learning: Useful 4, Interest 4.
Types of assessment: Useful 4, Interest 4.
Open learning: Useful 4, Interest 5.
E-portfolios: Useful 4, Interest 5.
Collaborative learning: Useful 5, Interest 5.
How useful and interesting I found the course topics (1-5, 1=negative, 5=positive)
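For anyone who wants to recreate a chart like this from their own ratings, here is a minimal, dependency-free Python sketch that renders the same stacked scores as a text bar chart (the `scores` dict simply transcribes the values listed above; the layout is my own illustration, not the chart from the course):

```python
# Topic -> (usefulness, interest), both on a 1-5 scale,
# transcribed from the ratings listed above.
scores = {
    "Citizen science": (1, 1),
    "MCQs": (1, 1),
    "Definitions of TEL": (3, 1),
    "Conceptions of Learning": (3, 2),
    "E-moderating": (2, 3),
    "Peer marking": (3, 3),
    "Learning at scale": (4, 4),
    "Mobile learning": (4, 4),
    "Types of assessment": (4, 4),
    "Open learning": (4, 5),
    "E-portfolios": (4, 5),
    "Collaborative learning": (5, 5),
}

# Sort by combined score so the most useful/interesting topics come first.
for topic, (useful, interest) in sorted(
    scores.items(), key=lambda kv: sum(kv[1]), reverse=True
):
    # Stack the two segments: 'U' for usefulness, 'I' for interest.
    bar = "U" * useful + "I" * interest
    print(f"{topic:<24} {bar} ({useful + interest})")
```

Swapping the `print` loop for a `matplotlib` horizontal stacked bar plot would give the graphical version, but the text form is enough to eyeball the ranking.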

No big surprises for me – the highest scoring were learning communities and collaborative learning, and the lowest citizen science and MCQs (both of which I found rather irrelevant and dull in comparison).

Interestingly, the top four scoring topics (adding open learning and e-portfolios to the ones already mentioned) were the ones I based my assessment on, which I will link once I get the mark back. I’m not sure if I chose them because they’re interesting and relevant, or whether they became more interesting and relevant because I chose to research them further. A bit of a chicken-and-egg situation.

After this initial data gathering, we were given a choice of what to do next:

  • Spending more time on any topic with a low understanding score.
  • Further research on a topic with a high interest score.
  • Creating a plan to apply the knowledge from any topic with a high usefulness score.
  • Considering how to make any topic with a low interest score more engaging.

The differentiation here was evident: learners get to focus on whatever matters most to them. The course authors recognise that some people may not have understood everything, that others want to research more (hello, Honey and Mumford’s Theorists) and that others want to apply what they’ve learnt (likewise, hello Pragmatists).

If I can find a way to apply a similar model in one of my own courses, I definitely will. I have to say, I was thoroughly impressed.
