
Dr J is teaching a low prior-ability Year 9 English class. The class is looking at a quotation from a play and suggesting what its connotations are. Dr J focuses on the word ‘dearly’ and asks the students to write on a mini whiteboard what dearly might mean to them. They skilfully select from the range of answers and read out correct responses. They then identify an opportunity to check the students’ grammar and link back to prior knowledge.
“On your whiteboard, can you tell me what type of word ‘dearly’ is?” After scanning, they notice a few students are struggling, so they offer some verbal scaffolding: “The types of words you could choose from are noun, verb, adjective, adverb, pronoun or preposition… 3, 2, 1, show me.”
They scan the boards and identify the correct answer as an ‘adverb’.
They then do another brilliant thing. They ask a process question designed to make sure students who got the wrong answer know why ‘adverb’ is the correct answer. “What is our rule for identifying adverbs?” They give wait time and cold call a student who gives the right answer (modifies a verb, often ends in -ly). The teacher praises the student and then moves back into the quotation analysis.
This is just a couple of minutes and it’s jam-packed with great teaching: high means of participation through the whiteboards, clear identification of the correct answer, checking of and linking to prior knowledge, and a process question to support students who got the wrong answer. They did a great job!
I want to use this example to talk about two things. One is a teaching technique that I think might be missing, and the other is about observing lessons and drawing conclusions.
- Staying curious during observations.
When partaking in any aspect of QA or teacher development I try to remember the 4 C’s (Clarity, Culture, Curiosity, Candour). Today I want to focus on curiosity. When I saw this lesson I was sitting at the back of the room, so I was unable to see the students’ responses to the question about the adverb. Judging by the fact that the teacher pivoted to a process question, I would assume that the success rate on the mini whiteboards was below 75%, but I don’t know. So I was curious to find out, and this formed the kernel of a potential feedback conversation.
Often when I see people observe, they do it with far too much certainty. They look at a teacher, notice something and take the stance of “X was not seen and the teacher should have done it.” This can lead to heavy doses of confirmation bias in the rest of the lesson and often a ‘tick list’ approach to development that focuses on the artefacts seen in the lesson over the thought processes the teacher is using. A good way around this is to look for evidence. Adam Boxer has a good blog on the hypothesis model, which uses evidence to justify a theory about what could be improved. This has a limitation, though: sometimes the time is not available to gather that evidence. In the case of this lesson I needed to leave, but the students were far from independent practice, so going around and asking them “Why is ‘dearly’ an adverb?” would have meant either waiting a long time or returning to the lesson later. Neither was possible.
So another option is simply to stay curious about the lesson until you have had the chance to discuss it with the teacher. Then, in the feedback conversation, have a true dialogue about what they intended, what you saw and the potential areas for improvement. The context the teacher is in plays a huge role in how they make decisions, and you will be unable to provide a suitable improvement target without understanding that context better. Maybe they are acting on prior feedback that conflicts with your opinion? Maybe the success rate was high but they wanted to use the process question to check whether a student who had found the rule difficult to learn now has a good understanding of it? We don’t know.
So the first moral of this story is: when observing lessons and noticing areas that might be improved, stay curious. Don’t jump to conclusions; get evidence or have a conversation first.
- It’s great to check for understanding, but it’s better to prove understanding.
The example above is a brilliant example of a teacher checking for understanding using a really effective means of participation: mini whiteboards. The teacher then uses a process question, which again is great. However, there was a step missing. In feedback, Dr J told me that only about 70% of students had correctly identified ‘dearly’ as an adverb; that was why they used the follow-up process question. This is where my curiosity paid off. Having found the gap and done a brilliant job of responding to that data, there was a final step missing: they needed to design an activity that checks the feedback given has been understood. In our example Dr J only knows that the one student they cold called knows the rule for adverbs. But about 30% of the students didn’t get the right answer. They have now been reminded of the rule to follow, but can they use it?
This is where the final activity would come in. In this case I wondered if the teacher could quickly write four words on the board and ask the students which ones were adverbs. There could be two correct answers, one with an -ly ending and one without, and two distractors, one with an -ly ending and one without (e.g. hard, slowly, smelly, hollow). If the students can find both correct answers, you have a good indication they are now better at applying the rule; if they only find the -ly one, you know they have a superficial grasp; and so on. We discussed the pros and cons of this, including whether the information was valuable enough for the time it would take, and agreed it would have been worth doing.
So the second moral of our story is: yes, it is great to check for understanding, and it’s even better to act on that data and respond to gaps with feedback, but it’s also vital to follow up and have the students prove they have understood that feedback.
Let me know your thoughts.
