#ClassicBlogWeek AfL In Science: A Symposium

Thomas Chillimamp
5 min read · Feb 23, 2022


Firstly, I’m pretty sure that this is cheating as it’s more than one blog. But this symposium of blogs from CogSciSci happened right at the start of my teaching career. It’s something I’ve spent the intervening years mulling over and coming back to. And in all honesty, I still feel like I’m pretty terrible at it, so it’s definitely time for a re-read. I’m just picking out some of the things that felt most pertinent to me right now, but there’s a lot to be gleaned from every single piece of the symposium.

My key takeaways:

  • The feedback we give needs to go beyond generic statements and leave students with a concrete action to improve their work.
  • The timing of AfL is crucial to ensure we’re actually assessing learning and not just performance.
  • To give quality feedback, we need to understand the structure of the knowledge itself (whether declarative or procedural).
  • Scaffolding is best thought of as a subtle support that helps students get across the current state of their schema. If we over-scaffold, we learn nothing about student understanding, and the AfL is useless.

What I’d like to see as a point of development for everyone is a greater number of blogs looking at the specific decisions we’ve made in lessons in response to AfL. Whilst there are a great number of “how I teach X” blogs coming out, I think a very powerful addition would be blogs that look more like “I did some AfL, here are the student responses. To me, this highlighted a lack of understanding of X, so what I did next was…” One thing not stressed in the symposium is that feedback may well take the form of simply reteaching an idea, and it’d be great to hear ideas about how we do this in our lessons (particularly in challenging circumstances when, say, only 25% of students get a question wrong but clearly still need more support).

The entire AfL In Science symposium can be found linked from Adam Boxer’s blog. The symposium starts with Adam pointing out that since the AfL revolution, there has been little positive national impact (despite all of the positive research that underpins it). I would hazard a guess that this is still roughly true. I still get the sense that a lot of people are ‘doing AfL’ in lessons by doing the nuts and bolts of the assessment bit (asking a question or setting students off on a specific task), but the answers from the students have no real impact on the future direction of the lesson. Recently I’ve been conscious of this whenever I ask a question (and I fall foul of it all the time): if I’m asking this question, what am I actually going to do if they get it wrong?

In one of the linked posts, Daisy Christodoulou highlights that AfL often hasn’t led to student progress because the feedback we give isn’t of the right “type and quality”. Education’s previous abstraction of skills into level ladders, for example, and the resulting overly generic feedback (“you need to evaluate better”) gave students no concrete way to improve. Whilst lots of levelled materials (although certainly not all) have slowly faded away and died their inevitable deaths, I don’t think this has meant that our feedback to students has improved drastically. If I think of my most recent lessons, the form of feedback I most commonly give is simply going over the correct answers (either with or without student input). This is necessary but not sufficient. If a student got the answer wrong, is just seeing the right answer enough useful feedback to allow them to progress? In some cases the answer will be yes, but not generally. Often within science, incorrect answers will be indicative of deep-rooted misconceptions that need to be addressed. I’m minded to think of Ben Rogers’ work on refutation texts as an area I need to use more confidently in my practice.

In a further linked post, Dawn Cox points out an additional difficulty in the use of AfL: timing. Within a given lesson, students are far more likely to get a question right because they are performing, without necessarily having learned the material. This distinction between performance and learning is summarised nicely by Blake Harvard (more of a #FutureClassicBlog, though). So our AfL within the lesson is actually more of an AfP (Assessment for Performance). To truly swap the P for an L, we need to ask the question at a later date, allowing time for some forgetting to happen. This idea, spacing, is crucial to ensuring that what we’re assessing is truly learning. The very idea still makes my head spin, because it has huge implications for how we plan our schemes of learning and how far ahead a teacher should reasonably be expected to plan.

Ruth Ashbee gave the first proper post of the symposium, looking at the structure of knowledge in science. I remember not fully appreciating the essay at the time, as I was so new to teaching. It was the first time I’d come across the ideas of declarative and procedural knowledge, and I was still very new to the idea of schemata. I think the wider adoption of the ‘proper language’ around epistemology is a great credit to the profession, and Ruth has been at the forefront of this. She argues eloquently about the structure of “school science knowledge (SSK)”, and rereading the piece now, it is clear just how useful the distinction between declarative and procedural knowledge is when it comes to feedback.

We can therefore argue that when “assessing SSK for learning” we are looking at and giving feedback on:

  • Pupils’ representation of correct conceptions/declarative knowledge
  • Pupils’ knowledge of specific exemplars
  • Pupils’ ability to make valid inferences from declarative knowledge and exemplars to related areas
  • Pupils’ ability to relate different items of declarative knowledge to each other
  • Pupils’ memory of the correct procedure to be followed
  • Pupils’ application of the correct procedure

I think this list highlights the huge interaction between forgetting, performance, and misconceptions, all of which can prevent students from “getting it right” and therefore require feedback from the teacher. But this feedback likely varies massively from bullet point to bullet point, and we need to be aware of this. It also brings to mind Adam Boxer’s “What to do after a mock” blog: if a student gets an exam question wrong, what valid inference can we make as to why?

Matt Perks’ piece outlines some practical ways in which AfL can be done in science. He highlights the same themes that come up throughout the symposium: if a student gets a question (particularly an extended question) wrong, what can we infer about their understanding? What feedback can be given to actually help the student? A key takeaway for me is asking “what will good look like?” and planning this out beforehand, as well as thinking about the interaction between scaffolding and AfL. Matt suggests a number of ways in which he scaffolds work to allow students to put their ideas down, so that we can give feedback on their developing schema. I think this is a really useful way to think about scaffolding: a subtle support to help students get across the current state of their schema.

These classic blogs had a huge influence on my teaching, and have lived strongly in my memory ever since. I’m sure this won’t be my last re-reading.
