Why the knowledge/skills debate is worth having


‘I note the obvious differences between each sort & type, but we are more alike, my friends, than we are unalike’.

Maya Angelou

I’ve come an awful long way since September 2011 when Cristina Milos took the time to point out that my views on the teaching of knowledge and skills were seriously skewed. I’m flabbergasted that, as an experienced teacher, I could have been so ignorant. I said at the end of that post that “I guess my conclusion isn’t that skills are more important than knowledge: rather that both are required for mastery of a subject.” But I didn’t really believe it. If you scroll down the comments of that post you can see how politely and tolerantly Old Andrew points out that maybe I had it wrong.

Two months later I posted this, in which I advocated SOLO taxonomy as a means of squaring the knowledge/skills circle. The sad fact was that I really had very little idea about teaching beyond the experience of my own classroom. I’m a reasonably intelligent and articulate individual and it seemed reasonable to believe that what I knew was in some way representative of what was true. And indeed it seemed so: most of the teachers who responded to my blogs were positive and sympathetic. But it shocked me to come across people who dismissed what I held dear as pap.

I carried on with the SOLO taxonomy line for a while, despite one or two concerns at how it was misappropriated, until considering this question: What happens when a student “establishes a relational construct which is wrong”? The answer, of course, is to go back to their store of knowledge and correct the misapprehension. This led, inexorably, to the realisation that the usefulness of SOLO was entirely dependent on the quality of knowledge students possessed. Students are asked to make relational connections and abstract constructions at every Key Stage and beyond. The only difference is the quality of what they know. Finally, the penny dropped: teaching students how to analyse in isolation is pretty pointless. They need to have something to analyse. And if this sounds blindingly obvious to you, consider the fact that I had been teaching for over a decade and was a head of department.

The point I’m clumsily and longwindedly trying to make is that anyone out there who feels it’s unimportant to discuss the importance of knowledge in teaching (especially in teaching English) is probably unaware that for the vast majority of teachers all this is going to come as rather a shock. By and large, we who tweet and blog are at the cutting edge of thought and discussion in education, comparatively speaking. We have our fingers resting lightly on the pulse of cognitive science and we know whereof we speak. If after consideration of all this you arrive at a position where you can align yourself with a constructivist approach to teaching, then I applaud you; you have obviously considered the pros and cons and arrived at a more or less informed decision. The right and wrong of it doesn’t matter nearly so much as having had the discussion.

But whenever you’re feeling judgemental about this essential debate cropping up on Twitter yet again, remember this: you are not typical. Most teachers have never, through no fault of their own, considered that some of the fundamental premises of how they teach are in doubt. Without access to the collective wisdom of Twitter it’s actually quite hard to find stuff out. Most CPD is still about how to talk less and show progress in 20 minutes. How do I know? Last week I attended three training events and told teachers about the Teaching Sequence for Developing Independence I’ve been writing about, and they were stunned. Not because this stuff is revolutionary or groundbreaking in any way, but because it’s common sense. We have all suspected that a lot of what we’re told by SLT about what they think Ofsted might want is obviously bonkers, but we sigh, and shrug, and admit that well, hey, what do we know? These guys are the experts and clearly they must know what they’re on about.

Mustn’t they?

It’s now clear to me that if we want to stop the predations of snake oil salesmen and Ofsted whisperers we must reclaim our expertise. We must boldly and confidently state that no one knows our students in our classrooms better than we do. We need to be able to counter any accusations that we talked too much or that our students were insufficiently independent by explaining that here is where they will be independent, and in order for that to happen I need to actually teach them here. And if anyone ever feeds back on a lesson observation by saying “I wouldn’t have done it like that. I’d have done ….” we need to find a polite but assertive way to ask them to explain precisely how and why their views differ from those of Ofsted’s supremo, Sir Michael Wilshaw.

As many have observed, the debate is circular: of course procedural knowledge is just as important as propositional knowledge. My point is that this truth is not the point. We all need to find our own way to it and make our own peace with it. The fact that the destination is already known to some doesn’t mean others should not embark on the journey. The aim is to think.

So, if we want to be taken seriously we need to know what we’re talking about. Sapere aude – dare to know!

Related posts

Independence vs independent learning
Teacher Talk: the missing link
The teaching sequence for developing independence

Independence vs independent learning.

Sometimes it just is.


Last weekend #SLTchat was on fostering students’ independence. As you’d expect, there were lots of great suggestions shared, as well as some not so great ideas. One comment I tweeted in response to the idea that to promote independence we should get students learning independently got quite a lot of feedback:

[Screenshot of the tweet]

This seemed to really divide opinion; some people got upset with me, and some others agreed enthusiastically.

Having read Daisy Christodoulou’s fabulously well-researched, cogently argued and clearly expressed eBook Seven Myths About Education, my thoughts on teacher talk and independent learning have started to coalesce.

On Tuesday this week I was training some particularly lovely teachers on embedding literacy across schools and we got into a great discussion about teacher talk and independent learning. Everyone ended up getting pretty angry about the fact that they feel forced to conceal their teaching of knowledge. We all know that sometimes students need us to explain new concepts if they are going to have any hope of understanding them. Equally, we all ‘know’ that Ofsted and SLT want to see independent learning and minimal teacher talk. If you’re unconvinced, have a read of this and this.

Daisy points out how ridiculous this is with particular reference to Ofsted’s critique of MFL lessons:

In nearly all these lessons, pupils are praised for knowing things and doing things spontaneously. The methods used to teach them are never mentioned; indeed, the impression we get is that they were never taught. Pupils are praised for having ‘took to spelling in French’. If it were really the case that they took spontaneously to spelling in French, why would such pupils need a school or a teacher? If, as I suspect, their ability to spell in French is actually down to teacher instruction and explanation that happened prior to the Ofsted inspection, then such descriptions are highly misleading and even dangerous.

The implication is that in good or outstanding lessons students will just know stuff. There should be no need for teachers to explain anything; if a teacher does explain something they are often criticised for talking too much. This insanity has led to teachers showcasing lessons which don’t require students to know anything outside of their own experiences. And this leads, inexorably, to stuff like writing persuasive letters to headteachers about school uniform. Teachers, especially in observed lessons, are unwilling to risk teaching students anything new because this would require them to speak for too long and wouldn’t demonstrate students’ ability to learn independently. If this is true, and instinctively, I think we know it is, Ofsted as an organisation is largely, if not entirely, responsible for the ‘dumbing down’ Michael Gove criticises schools for perpetrating.

Anyway, back to the literacy training. We were discussing the teaching & learning sequence for developing students’ academic language, used by Lee Donaghy at Parkview School in Birmingham.

[Diagram: the teaching and learning cycle]


I pointed out that stages 1 and 2 are very difficult, if not impossible, to accomplish properly without talking. Stage 1 requires teachers to explain a new concept clearly and precisely, providing subject-specific vocabulary and structures. You could possibly use a textbook or worksheets to do this, but I doubt that Ofsted would approve of this approach any more than they would of the teacher having to talk. You can, in some circumstances, use a jigsawing approach, but to do this too often is inefficient, open to students’ misunderstandings, and would soon become dull. How much easier would it be to simply skip this step and go straight to some independent learning about something your students already knew?

And then we have the modelling and deconstruction stage of the teaching cycle. Here the teacher will explicitly show the students how a text works within the domain they are studying. There are lots of ways to reverse engineer texts to show how they are structured, and some of these can be done independently with students discovering the structures for themselves. But when I trained to be a teacher, the Teaching Sequence for Writing, which involves lots of deconstruction and modelling, was considered best practice. In this National Strategy document from 2008, teacher talk is held up as an essential component of teaching writing and is described as “the verbalisation of the reader and writer thought processes involved as the teacher is demonstrating, modelling and discussing”. It goes without saying that verbalisation is quite hard to do without talking.

The joint construction phase is all about scaffolding language and providing students with enough guidance for them to attempt a task successfully. We could at this stage just let them get on with it independently. They might, through a process of trial and error, be able to get the hang of constructing an academic text. It’s more likely, though, that they will revert to using everyday, non-academic language and produce a response which is lacking in the qualities we look for in the most able. So, the problem in doing this is one of low aspirations for our students. Making them practise independently before they are clear on what to do runs the risk of encoding failure. Practice does not make perfect, it makes permanent. Without sufficient instruction from an expert, students will get good at doing tasks badly. And who, in their right mind, wants that? That said, there’s a fair bit of scope for independent learning here, and Ofsted might approve of a lesson at this stage of the cycle as long as the teacher was able to conceal the fact that at some stage they had had to talk in order to explain the concept being studied, deconstruct examples and model expert processes.

Then, at stage four, students will be ready to work independently. If Ofsted came to observe a lesson at this stage of the cycle they might expect to see the students working in silence and the teacher with his feet up drinking coffee. Or even better, in my classroom they would find me working silently as well. As I’ve explained in the past, modelling every part of the process is important and I take care to write with my students at every available opportunity. This is true independence. This is what we are teaching students to be able to do. We want them to have the confidence and ability to complete tasks by themselves without us there to nag and prompt them. It is, however, at complete odds with the nonsense that’s peddled as ‘independent learning’ and is predicated on teachers being able to talk to their students.

Tom Sherrington wrote recently about observing teachers at his school and how single lesson observations are a poor way to judge a teacher’s quality. If you only ever saw teachers presiding over group work and independent learning, this might be a serious cause for concern. Likewise, if you only ever saw a teacher engaging in direct instruction, we would worry about students’ lack of opportunity to practise what they’d learned. Teaching through direct instruction and fostering independence are both parts of the cycle. Trying to shoe-horn this cycle into a single lesson robs students of the time they need to develop their thinking and engage in extended practice.

This is all bound up with the myth that learning is neat and takes place conveniently in 50 minute or 1 hour chunks. It doesn’t. Teachers know this. We know, at certain stages in a topic, that we will want our students to do different things if they are ever going to be independent. Sometimes this will require us to talk, sometimes it won’t. But to expect every lesson to show evidence of independent learning is madness.

As Tom says:

If you drop in on a lesson that is part of a sequence, you need to ask some questions:

– Where does the lesson fit into a sequence? Where are they along the arc?

– Is this learning activity compatible with an overall process that could lead to strong outcomes?

– Is it reasonable for progress to be evident within this lesson or might I need to see what happens over the next week or so?

– What general attitudes and dispositions are being modeled by teacher and students? Do they indicate positive learning-focused relationships compatible with an overall process that leads to strong outcomes?

– Does the record of work in books and folders, with the feedback dialogue alongside the work itself, tell a better story than the content of the one-off performance in front of you?

Independence is the end, not the means.

If we really want our students to be able to use language with facility, solve complex equations, and spell in French, we need to avoid the pointless horror and inherent low expectations of ‘independent learning’ as a thing. Compelling teachers to talk less and facilitate students’ independent learning has the unfortunate consequence of making students less independent. If we really want to promote students’ independence we need to train teachers to model and explain more effectively, and encourage them to practise these vital, and sadly neglected, skills.

Related posts

Teacher talk: the missing link
Mind your language – a language based approach to pedagogy
The problem with progress Part 1: learning vs performance

Testing & assessment – have we been doing the right things for the wrong reasons?

A curious peculiarity of our memory is that things are impressed better by active than by passive repetition. I mean that in learning (by heart, for example), when we almost know the piece, it pays better to wait and recollect by an effort from within, than to look at the book again. If we recover the words in the former way, we shall probably know them the next time; if in the latter way, we shall very likely need the book once more.

William James, The Principles of Psychology (1890)

Never stop testing, and your advertising will never stop improving.

David Ogilvy


Instead, study less, test more

Tests are rubbish, right? Like me, you may find yourself baring your teeth at the thought of being drilled to death, or inflicting endless rounds of mind-numbing tests on your students. That’s no way to learn, is it? All that’s going to do is produce ‘inert knowledge’ that will just sit there and be of no use whatsoever, right? Wrong. Apparently, the ‘retrieval practice’ of testing actually helps us induce “readily accessible information that can be flexibly used to solve new problems.”[1]

Most tests are conducted in order to produce summative information on how much students have learned and, as such, have (possibly rightly) attracted lots of ire. But maybe this is a very narrow way to view the humble test.

In my post on desirable difficulties I reported the following nugget:

We think we know more than in fact we do. For instance you may well have some pretty fixed ideas about testing. Which of these study patterns is more likely to result in long term learning?

1. study study study study – test

2. study study study test – test

3. study study test test – test

4. study test test test – test

Most of us will pick 1. It just feels right, doesn’t it? Spaced repetitions of study are bound to result in better results, right? Wrong. The most successful pattern is in fact No. 4. Having just one study session, followed by three short testing sessions – and then a final assessment – will outperform any other pattern.

This is something I’ve only just begun to research and experiment with, but the implications are fascinating. One of the first things I needed to reconsider was what might constitute a test. That is to say, I had to move away from the limited definition of testing as merely a pen-and-paper exercise conducted under exam conditions. Testing can (and should) include some of the tricks and techniques we’ve been misusing and misunderstanding as AfL for the past 10 years or so. In fact, it doesn’t really matter how you test students as long as your emphasis changes; testing should not be used primarily to assess the efficacy of your teaching and students’ learning, it should be used as a powerful tool in your pedagogical armoury to help them learn.

Maybe this is really obvious and everyone else has always understood the fundamental point of classroom assessment, but I don’t think so. Everything I’ve read (and I’ve read a fair bit) indicates that the point of AfL is to find out what students have learned and to adjust your teaching to fill in any gaps. This deficit model means that teachers (and students) might be labouring under some quite fundamental misunderstandings.

They are:

1) The Input/Output Myth – what teachers teach, students learn. Learning appears to be waaaay more complicated than this myth suggests.

2) Classroom performance equates with student learning. It doesn’t. Learning takes place over time and can only be inferred from performance.

3) Students will retain what they’ve learned. They won’t. Students will forget the vast majority of what you teach and what they do remember will be largely unique to individuals.

If we just carry on waving our lolly sticks about, festooning students with Post-it notes and smugly getting them to fill in exit passes, what will we accomplish? Well, if cognitive science is correct about the human mind and how it learns, the answer might be: precious little.

So, should we chuck out the baby with this particularly gritty bathwater? How about if, instead, we rethought the purpose of assessment and considered how our AfL toolkits might actually benefit learning instead of just monitoring performance?

This paper on Ten Benefits of Testing and Their Applications to Educational Practice is a good starting point. The benefits are organised into direct effects on retention and indirect benefits on meta-cognition, teaching and learning. Whilst all are interesting and worth perusing, for the purposes of this post I’m just going to discuss how I’ve been trying to use the direct benefits of testing.

Mean number of idea units recalled on the final test taken 5 min or 1 week after the initial learning session


The Testing Effect: retrieval aids later retention – this is the claim made above that studying material once and testing three times leads to about 80% better retention than studying three times and testing once. The research evidence suggests that it doesn’t matter whether people are asked to recall individual items or passages of text, testing beats restudying every time. Now, we all know that cramming for a test works, but what these studies show is that testing leads to a much increased likelihood of information being retained over the long term. The implication is that if we want our students to learn whatever it is we’re trying to teach them we should test them on it regularly. And by regularly I mean every lesson. What if every lesson began with a test of what students had studied the previous lesson? Far from finding it dull, most students actually seem to enjoy this kind of exercise. And if you explain to them what you’re up to and why, they get pretty excited at seeing whether the theory holds water. And what of accusations that this might lead to instances of the Hawthorne effect? Frankly my dear, I couldn’t give a damn! I’m not a researcher and I’m not trying to prove anything; I just want to take advantage of something that’s already been proven.

Testing causes students to learn more from the next study episode – this is also pleasingly referred to as ‘test-potentiated learning’. Basically it means that having followed a Study Test Test Test (STTT) pattern of lessons, the next STTT pattern will result in even better retention: the more tests you do, the better you are at learning!

This particular field of study belongs to Chizuko Izawa, who began by investigating whether learning was actually taking place during testing. She examined three hypotheses:

1) During a test students will neither learn nor forget

2) Learning and forgetting could occur during a test

3) Taking a test might influence the amount of learning during a future study session.

Guess what? Propositions 1 and 3 turn out to be correct. But doesn’t this contradict The Testing Effect? Well, apparently not; the testing effect can be interpreted as a slowing of forgetting after the test. And the real kicker is that this potential improvement occurs whether or not students get any feedback on their tests!

Testing improves transfer of knowledge to new contexts – this one is the Grail! One of the myths tackled in Daisy Christodoulou’s new book Seven Myths About Education is that we should teach transferable skills. She argues the following:

Skills are tied to domain knowledge. If you can analyse a poem, it doesn’t mean you can analyse a quadratic equation, even though we apply the word ‘analysis’ to each activity. Likewise with evaluation, synthesis, explanation and all the other words to be found at the top of Bloom’s Taxonomy. When we see people employing what we think of as transferable skills, what we’re probably seeing is someone with a wide-ranging body of knowledge in a number of different domains.

But what if testing could improve the transferability of skills and knowledge? What then? Can retrieval really help the transferability of knowledge?

Let’s start by defining ‘transfer’. How about, “applying knowledge learned in one situation to a new situation”? And let’s be a little more cautious than the example of ‘far transfer’ given by Daisy above. Can we teach students how to analyse non-fiction texts and then expect them to be able to analyse poetry? This is a real bugbear of mine because, frustratingly, it’s hard. Within a ‘skills-based’ subject like English we ought to be able to do this. But, year after year, I’ve found myself stymied by students’ damnable inability to see that analysing in one context is exactly the same as analysing in another. Rebranding the skill as ‘zooming in’ has helped but it’s still an uphill struggle; they need constant prodding and reminding.

Ebbinghaus was experimenting with the transferability of skills way back in 1885, and more recently Barnett and Ceci (2002) went as far as proposing a taxonomy for transfer studies which attempts to describe the dimensions against which transfer of a learned skill might be assessed.

So could testing make the difference? There have been a number of different studies on the effects of testing on the ability to transfer skills. There’s lots of evidence for ‘near transfer’, and Butler (2010) has shown that ‘far transfer’ (transfer to new questions in different knowledge domains) may be possible:

In this experiment, subjects studied prose passages on various topics (e.g., bats; the respiratory system). Subjects then restudied some of the passages three times and took three tests on other passages. After each question during the repeated tests, subjects were presented with the question and the correct answer for feedback. One week later subjects completed the final transfer test. On the final test, subjects were required to transfer what they learned during the initial learning session to new inferential questions in different knowledge domains (e.g., from echolocation in bats to similar processes used in sonar on submarines).

The results showed that subjects were more likely to correctly answer a transfer question when they had answered the corresponding question during initial testing. Is this conclusive? Maybe not, but it’s compelling. I don’t think my teaching of analysis in English is going to result in my students being better able to analyse quadratic equations, but if it helps them transfer between non-fiction and poetry I’ll be chuffed.

Testing can facilitate retrieval of material that was not tested – yes, you heard it: taking a test will help you remember even the stuff that wasn’t actually tested. This concept of ‘retrieval-induced facilitation’ sounds almost magical and seems at odds with Bjork’s theory of ‘retrieval-induced forgetting’. But the contradiction only exists in the short term; more incidences of re-testing, and a longer delay before the final test (at least 24 hours), result in clear improvements on material that has not been tested in the STTT pattern of learning.

I’m right at the beginning of all this, but it looks like testing is the way forward if I want to make sure my students remember (that is to say, learn) the stuff I’m teaching them. I’ve already started getting students to summarise what they’ve learned in a paragraph at the end of each lesson and setting homework designed to test students’ recall of lesson content. I’ve also begun tinkering around with concept maps to see how they can be used as testing tools.

At the beginning of the year I was preparing to junk a lot of what I’d come to believe was best practice. Turns out, all I need to get rid of are my misconceptions about what assessment for learning might actually be for. Maybe it really could be for learning and not just performance!


So, what does ‘gifted’ mean anyway?

As you may be aware, non-selective secondary schools are failing the ‘most able’. How do we know? Because a brand new Ofsted report tells us so.

The report’s key findings include such revelations as the fact that “expectations of what the most able students should achieve are too low” and that not enough has been done “to create a culture of scholastic excellence”, which leads, unsurprisingly, to “Many students become used to performing at a lower level than they are capable of.”

The problem is attributed to ineffective transition arrangements, poor Key Stage 3 curricula and early entry to GCSE exams. Homework also gets a bashing; too much of it is “insufficiently challenging” and “fails to interest students, extend their thinking or develop their skills.”

The result is that “just over a quarter of the pupils who achieved Level 5 in English and mathematics at the end of Year 6 did not make the progress expected of them in their non-selective secondary schools”.

So that’s that: QED.

Now you may well quibble, as Geoff Barton has done, over their means of measurement and the woeful laziness of many newspaper reports, but really I find it hard to argue with the likelihood that secondary schools’ expectations of their students are way too low.

I know I’ve certainly been guilty of this. A few years ago I taught a girl called Charlotte. Charlotte had an E grade target and, I confess, my expectations of her were low. In the opening weeks of Year 10 she told me that she wanted to get an A grade and I, to my shame, tried to manage her expectations and let her know that this was unlikely. It would be enough of a miracle if she were to manage a C! The first piece of coursework she turned in was a lowly D. She was devastated. She took it away (remember, this was in the old days), acted on my advice and handed in a C grade essay. I was chuffed; she was still devastated. She took her English GCSE at the end of Year 10 (something else we’re now no longer allowed to do) and got a C. Two grades above her target grade. By this time I knew she’d be gutted with this result, and she was. She still professed cock-eyed, unwavering faith that she could get an A. But, I knew she couldn’t. Obviously.

I’m sure you can see where this is going, can’t you? Charlotte continued to plug away and retook the exam in November getting a B grade. C’mon, I told her. Enough’s enough. Be happy with your B. But she wasn’t and retook for a third time in June of Year 11. And still she didn’t get an A.

She got an A*.

Now you can say what you like about lack of challenge and low expectations and the wonky Key Stage 3 curriculum and early entry being the enemy of promise; this girl was a grafter. She believed that she could be better than she was. No one ever told her she was gifted at anything, and she didn’t care; she knew that if she worked hard enough she could get what she wanted.

Of course, for every Charlotte I’ve taught a thousand kids with nothing like her mindset or capacity for trying. Many of these were identified as G&T and went on to coast to a B grade or similar. But Charlotte taught me probably far more than I ever managed to teach her. She taught me that my expectations were, for most kids, a determining factor in their achievement. And what’s the point in having high expectations for just some of our students? Where on earth is the sense in picking off our ‘most able’ 10% and deciding to push this elite to scholastic excellence? Charlotte taught me that this was nonsense and that effort trumps talent.

Tom Bennett gives us The Orthodoxy in his TES article How best are the gifted lifted? Above average, but below the radar: the problem of G&T kids. His solution is based on these three familiar steps:

1) identify your potential brainiacs,

2) provide something special for them, and then

3) monitor that this hothousing is having the desired effect.

I have absolutely no argument with points 2 and 3. None. But, oh my goodness! I’m not at all happy with point 1. To his credit, Tom does say that “high expectations should be tattooed inside our hearts for every child, until the minute they leave school for good – maybe not even then.” Quite right. But how does corralling the boffins and treating them differently serve this aim?

Take out the word ‘gifted’ and this could be a marvellous manifesto:

Teachers need to be trained more clearly on simple techniques that can revolutionise the work a gifted pupil does, eg setting them tasks a year above their age; accelerating them into the year above (astounding, but caution required); asking for work to be redone – after school if necessary – if it doesn’t meet the required level. Forcing yourself to give them time in lessons to explain things at a higher level, just as if they were as important as a weaker kid (fancy that); setting slightly different homework, and so on.

I get that Tom, and Ofsted for that matter, are berating us for chasing the grail of the C/D borderline, but still.

Grammar school head Tom Sherrington talks about having a Total G&T Philosophy and how lessons should be designed with rigour and high expectations to ‘lift the lid’ on what students can achieve. He advocates that we should ‘teach to the top’ instead of the usual slow-ball middle pitch Tom Bennett describes. And yes, teach to the top, support at the bottom. Everyone’s aspirations are raised and they start to believe they can achieve more than they ever believed possible.

I had the pleasure of hearing Mr Sherrington speak about his approach to teaching yesterday and one throwaway line got me thinking. He said something about teachers often removing layers of complexity because kids would be turned off if the work seemed too hard. Instead he suggested giving kids work which seems impossible. What if we scrapped our Year 7 curriculum and just taught ’em the Year 8 stuff? Would it matter? And this got me thinking: maybe I could try teaching work which ‘seems impossible’ to see what’s possible?

Last week I taught a transition lesson to a class of Year 6 students to prepare them for the ‘step up’ to big school. I didn’t find out I would be doing this until that morning and, just for the hell of it, I decided to teach them a lesson I’d taught to my Year 11s on analysing Shakespeare’s Sonnet 116. At the end of the lesson I asked them to rate its difficulty on a scale of 1–10 (1 being insultingly easy, 10 being ear-bleedingly difficult) and guess what? They gave me a 5! When I told them the lesson’s provenance I’m not sure if they were more impressed with themselves or disappointed by the lack of challenge presented by GCSEs. The point was, I treated them all as if they could do it and, by God, they could do it!

If I’d told ’em in advance that they were going to tackle poetry from the GCSE Literature anthology it might have ‘seemed impossible’. But maybe (maybe) they’ve seen what’s possible.

What if they’d been streamed as Wilshaw suggests and all the ‘gifted’ kids fed a steady diet of ‘scholastic excellence’? What kind of message does this give to everyone else?

Every year I expect my students to get A grades. And every year I’m disappointed when some don’t. I’m sure, come August, I’ll be disappointed again. Never mind, next year I can try to fail better.

Related posts

How to subvert target grades
Redesigning a curriculum
The Grand Unified Theory of Mastery

Deliberately difficult – why it’s better to make learning harder

The most fundamental goals of education are long-term goals. As teachers and educators, we want targeted knowledge and skills to be acquired in a way that makes them durable and flexible. More specifically, we want a student’s educational experience to produce a mental representation of the knowledge or skill in question that fosters long-term access to that knowledge and the ability to generalize—that is, to draw on that knowledge in situations that may differ on some dimensions from the exact educational context in which that knowledge was acquired.

Robert A Bjork, 2002

Who could argue with this? Certainly not Ofsted, who happily claim in their most recent Inspection Handbook, “The most important role of teaching is to promote learning and to raise pupils’ achievement.” Quite right.

This is, after all, what teaching is fundamentally about. Maybe you have other aims, maybe you consider education to have different purposes, but if we’re not promoting learning and raising achievement what on earth are we doing?

But then they go and spoil it all by boldly stating that outstanding teaching and learning will result in “almost all pupils … making rapid and sustained progress.”

This statement inevitably raises two questions:

1) If Ofsted judge T&L by observing lessons, what does progress in lessons look like?

2) Can progress be both rapid and sustained?

The one-word answers to these questions are:

1) Performance

2) No

The reason for the confusion is what I’ve termed The Input/Output Myth. We labour under the misapprehension that what we teach, students will learn. Regrettably, the truth is a whole lot more complicated than that.

The Input/Output Myth: If only!

Graham Nuthall, in his marvellously erudite tome The Hidden Lives of Learners, observes that “as learning occurs so does forgetting”. This is bad enough, but on top of that is the bewildering discovery that most student learning is unique. In the highly structured world of the classroom, the proportion of ‘items’ learned by no more than one other student ranges from 44.1% to 88.9%. That is to say that on most occasions, well over half of what we teach is not learned by the vast majority of our students. Terrifying! How can we possibly keep track of their progress?

Progress: the tip of the iceberg!

Nuthall suggests that there are three different ‘worlds’ in operation in a classroom. There is the visible world of the teacher, the murky, mysterious world of students’ peers, and the rarely glimpsed, private world of the individual student. We get to see our teacher, we get to see the students answering questions and performing tasks designed to demonstrate their progress, but we seldom, if ever, get to see inside students’ heads. We literally have NO IDEA what’s going on in there. And any attempt to claim otherwise is foolishness.

So what do we do? We fall back on the comforting sureties of the Input/Output Myth and convince ourselves that students’ performance correlates with their learning. It doesn’t. As Robert Bjork says, “Performance is measurable but learning must be inferred from performance: it cannot be observed directly.”

What can be done?

If we really want to get a true measure of our students’ progress, promote learning and raise students’ achievement (and we do, don’t we?) then we must do two things:

1) Separate performance from learning

2) Introduce ‘desirable difficulties’

The first is simple. But hard. We need to be weaned from the belief that we can observe progress in 20 minutes, or even a lesson.

There is no such thing as progress within lessons. There is only learning.

Kev Bartle


Learning is a liminal process, at the boundary between control and chaos.

Dylan Wiliam

Basically, we must accept that sometimes learning occurs but performance in the short term doesn’t improve, and that at other times, performance may improve, but little learning seems to happen in the long term.

The second is difficult, but desirably so. I love Bjork’s coinage ‘desirable difficulties’ because it gets to the very heart of the counter-intuitive nature of learning. It turns out that making it more difficult for students to learn means that they actually learn more!

If you’re after rapid improvement (performance) then you make your teaching predictable, give students clear cues about the answers you’re looking for, and do a whole load of massed practice. If you watch that lesson it looks great! The teacher is happy, the students are happy and the observer can tick delightedly away at their clipboard. Come back and test them next week, next month, next year and the situation is a little more bleak.

On the other hand, if you’re after sustained improvement (learning), then you want to introduce as much variability into your teaching as possible: change rooms, change seating, change displays; remove the comforting and familiar background to lessons, and introduce spacing and interleaving to redesign your curriculum. These ‘desirable difficulties’ will slow down performance but lead to long-term retention and (Daniel Willingham’s Holy Grail) transfer of knowledge between domains.

But therein lies the problem: everyone prefers the feeling of ‘rapid progress’. The route to sustained progress feels uncomfortable. We have to delay gratification. We have to take the risk that an observer won’t tick the ‘progress’ box on their observation pro forma. We might look bad. So we don’t do it.

But let’s assume that you’re willing to take the risk. What would it look like?

Here’s a list of suggestions:

– Spacing learning sessions apart rather than massing them together

– Interleaving topics so that they’re studied together rather than discretely

– Testing students on material rather than having them simply restudy it

– Having learners generate target material through a puzzle or other kind of active process, rather than simply reading it passively

– Varying the settings in which learning takes place

– Reducing feedback (sometimes!)

– Making learning material less clearly organised

– Making texts more challenging to read

What all these difficulties have in common is that they encourage a deeper, more complex processing of material than people would normally engage in, which makes information more likely to transfer from working to long-term memory.

Bjork’s come up with what he rather unimaginatively calls the New Theory of Disuse. This suggests that memory doesn’t decay, instead we become less able to retrieve the information we’ve stored. The difference might sound pedantic, but actually it’s quite exciting. It means that the storage capacity of human memory is, for all practical purposes, limitless.

Bjork argues that each item we commit to memory has a ‘storage strength’ and a ‘retrieval strength’. Some things, like the address of a friend you’ve been visiting for years, have both high storage and high retrieval strength because we’re continually using the information. But if that friend suddenly moves house, their new address will have low storage strength because we haven’t known it long, yet its retrieval strength will be quite high because we continually review the address so as not to forget it. Other information, like the address we lived at as a child, has high storage strength because we’ve known it forever, but low retrieval strength because we don’t think about it very often. This accounts for our frustrating, sudden inability to recall stuff we know we know. And then there’s the stuff you’ve just taught your Year 9s. That has low storage strength because they’ve only just learned it, and low retrieval strength because they’ve never tried to recall it; the lower the storage strength, the more quickly retrieval strength fades. No wonder they forget it so quickly!

Making learning easier boosts retrieval strength in the short term, leading to better performance. But because the deeper processing that encourages long-term retention is missing, that retrieval strength quickly evaporates. The very weird fact of the matter is that forgetting creates the capacity for learning. If we don’t forget, we limit our ability to learn. So we actually want students to forget some stuff! When learning is difficult, people make more mistakes and, naturally, they infer that what they’re doing must be wrong. In the short term, difficulties inhibit performance, causing more mistakes to be made and more apparent forgetting. But it is this forgetting that actually benefits students in the long term; relearning forgotten material takes demonstrably less time with each iteration. All of the difficulties outlined above are predicated on this simple but counter-intuitive premise.
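If you like to see an idea as a model, Bjork’s two strengths can be sketched as a toy simulation. Everything here is my assumption for illustration – the exponential form and the particular numbers come from me, not from Bjork – but it captures the key claim: the lower the storage strength, the faster retrieval strength fades.

```python
import math

def retrieval_after(days, storage_strength, initial_retrieval=1.0):
    """Toy model: retrieval strength decays exponentially over time,
    and the decay is slower when storage strength is higher."""
    return initial_retrieval * math.exp(-days / storage_strength)

# A friend's address you've used for years vs something taught yesterday.
# Both start fully retrievable; a month later they look very different.
old_address = retrieval_after(30, storage_strength=50)  # ~0.55: still there
new_lesson = retrieval_after(30, storage_strength=3)    # ~0.00: gone
print(f"{old_address:.2f} vs {new_lesson:.2f}")
```

Of course the brain isn’t an exponential function, but playing with the `storage_strength` parameter makes the Year 9 problem vivid: freshly taught material sits at the bottom left of Bjork’s grid and evaporates almost immediately.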


Some of these difficulties don’t seem so bad. Ebbinghaus was banging on about his ‘forgetting curve’ over a century ago and spacing is one of the most widely accepted facts in cognitive science about how the human brain learns.

The forgetting curve

The first graph shows the unsurprising fact that after we learn a piece of information we start to forget it. The longer we leave it, the more the memory ‘decays’ and the more we forget. This is the original Theory of Disuse.

The effects of ‘spacing’ learning

It seems to make complete sense that if we revisit this information at regular intervals we are much more likely to remember it, but the real reason this is so effective is the fact that as students forget, they are more receptive to learning new information.
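To make this concrete, here’s a crude sketch of an Ebbinghaus-style forgetting curve with spaced review. It’s purely illustrative – the exponential form, the ‘stability’ parameter and the idea that each revisit doubles it are my toy assumptions, not anything measured by Ebbinghaus or Bjork – but it shows why four spaced revisits leave far more behind after a month than one massed session:

```python
import math

def retention(days_elapsed, stability):
    """Ebbinghaus-style exponential forgetting: R = e^(-t/S)."""
    return math.exp(-days_elapsed / stability)

def spaced_reviews(review_days, horizon, stability=5.0, boost=2.0):
    """Retention at `horizon` after spaced reviews. In this toy model each
    review makes the memory more durable (multiplies its stability)."""
    last = 0
    for day in review_days:
        stability *= boost  # each revisit strengthens the trace
        last = day
    return retention(horizon - last, stability)

# Study once and leave it, vs revisit at expanding intervals:
massed = retention(30, 5.0)                 # one session, tested a month on
spaced = spaced_reviews([0, 2, 7, 21], 30)  # reviews on days 0, 2, 7 and 21
print(f"massed: {massed:.2f}, spaced: {spaced:.2f}")  # massed: 0.00, spaced: 0.89
```

The exact numbers are meaningless; the shape of the result isn’t. The massed session has all but vanished by day 30, while the spaced schedule, which costs barely any more teaching time, keeps most of the material retrievable.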

The only problem with this, as teachers, is the kids’ perpetual moan that they’ve “done this before”. As with all things pedagogical, if you explain why you’re doing what you’re doing, all should be well.

Of all the difficulties Bjork suggests, this is the only one analysed by Hattie in Visible Learning. He gives spaced versus massed learning an effect size of d = 0.71, which is high. Of more interest perhaps is the finding that spacing increases students’ rate of acquisition by d = 0.45 and retention by d = 0.51. This is on top of any other effects for strategies like feedback and direct instruction. Pretty cool, eh?


The new curriculum for Fruit Studies

Another desirable difficulty we can introduce is to get students to ‘generate’ information instead of just reading it. If I wanted you to learn the names of a load of fruit, I could ask you to simply read and recall their names, or I could give you a prompt such as ‘or____’ and ‘orange’ would immediately come to mind. This results in ‘retrieval-induced forgetting’: when retrieving information from memory, the retrieved memory will be strengthened. However, competing memories will be less accessible afterwards. This implies that remembering doesn’t only produce positive effects for the remembered facts or events; it might also lead to forgetting of other, related things in memory. Unsurprisingly, over the short term you would remember those items you had generated much better than those you hadn’t.


Another difficulty we might want to introduce is interleaving our curricula. This means that instead of delivering topics in the traditional termly blocks, we instead work out in advance the information we need students to learn over the duration of a course and mix it up so that in any given term they might study 6 or 7 different topics.

This is maybe more straightforward in a ‘skills-based’ subject like English but may look very daunting for teachers of maths or science. If you deliver your course in blocks, students’ performance will be much higher at the end of a term. But if you interleave your curriculum, their learning will be much deeper at the end of the course. Blocking leads to short-term gains, and they’re deceptively compelling; it feels right to teach this way.

But why is this? What happens in our brains when we ‘mass’ versus ‘interleave’ our learning? Bjork speculates that blocking gives us a false sense of security; we think we’re getting better. In contrast, interleaving creates anxiety: the feeling that things are unpredictable, and that therefore we need to take more care.
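The mechanics of the scheduling itself are simple enough to sketch. The snippet below is only an illustration (the topic names and the plain round-robin rule are my assumptions; a real scheme of work would weight and sequence topics more carefully), but it shows the structural difference between blocking a term’s lessons and interleaving them:

```python
from itertools import cycle, islice

def blocked(topics, lessons_per_topic):
    """Traditional blocking: finish one topic entirely before the next."""
    return [t for t in topics for _ in range(lessons_per_topic)]

def interleaved(topics, lessons_per_topic):
    """Round-robin interleaving: rotate through every topic in turn."""
    return list(islice(cycle(topics), len(topics) * lessons_per_topic))

topics = ["poetry", "rhetoric", "grammar"]
print(blocked(topics, 2))
# ['poetry', 'poetry', 'rhetoric', 'rhetoric', 'grammar', 'grammar']
print(interleaved(topics, 2))
# ['poetry', 'rhetoric', 'grammar', 'poetry', 'rhetoric', 'grammar']
```

Both schedules contain exactly the same lessons; only the ordering differs. That’s the whole trick – and the whole discomfort – of interleaving: no extra content, just a sequence that forces students to keep switching and retrieving.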


Possibly the most surprising difficulty is that of testing. Bjork refers to ‘the illusion of knowing’ (which is really just a more poetic way of describing counter-intuition). We think we know more than in fact we do. For instance, you may well have some pretty fixed ideas about testing. Which of these study patterns is more likely to result in long-term learning?

1. study study study study – test

2. study study study test – test

3. study study test test – test

4. study test test test – test

Most of us will pick 1. It just feels right, doesn’t it? Spaced repetitions of study are bound to result in better results, right? Wrong. The most successful pattern is in fact No. 4. Having just one study session, followed by three short testing sessions – and then a final assessment – will outperform any other pattern. Who knew?

But this doesn’t mean we need more summative assessment. What it suggests is that we should use testing as part of our teaching and learning repertoire. Until very recently, this was something that, quite literally, never occurred to me. Bjork’s advice is to make testing experiences low risk, frequent, and designed to include variation and distracting difficulties, such as providing competing alternative answers to trigger retrieval of information that might be tested at another opportunity.

Reducing feedback

Eh? What’s that? Isn’t feedback the king of all teacher interventions? Isn’t it the rock upon which Dylan Wiliam’s AfL mansion is built? Well, it turns out that in some cases feedback can be counterproductive. Here are a few:

– Providing feedback on success can be counterproductive
– Students become dependent on receiving feedback
– Waiting for feedback can slow down the pace of learning
– The desire for positive feedback can prevent risk taking and attempting more challenging tasks.

I don’t know about you, but this stuff makes my head reel.

The message is: don’t trust your gut. If it feels right, it’s probably wrong. Easy isn’t actually easier. Deliberately choose the harder, more difficult option. Learning isn’t easy. But as Hattie reminds us, “A teacher’s job is not to make work easy. It is to make it difficult.”

And here are my slides from the Wellington College Festival of Education where I presented these ideas:


Related posts

The problem with progress Part 1, Part 2 and Part 3

Easy vs Hard

And, if you’re into a spot of research, try this: Introducing Desirable Difficulties for Educational Applications in Science

Magic glasses and the Meares-Irlen syndrome

In case you missed it, I published a post on the dubious existence of dyslexia this weekend. A few people have been in touch via Twitter to tell me about the remarkable effect of Irlen lenses and that their miraculous success is clear evidence of the existence of dyslexia. Well, despite their undeniable impact on some people’s ability to read, I’m not so sure it has much of a bearing on whether we can agree that dyslexia definitely exists.

I have a good friend who wears plain, very pale yellow spectacles when reading. She is dyslexic and convinced that she’s unable to read any but the simplest of texts without them. With her glasses on, she can read even academic texts absolutely fluently. She’s tried many different colours, all of which, apparently, helped about equally; she plumped for yellow simply because she liked yellow.

These lenses (which are spectacles of colour-tinted glass), or coloured overlays (which are clear but colour-tinted plastic sheets), can sometimes, as in the case of my friend, have instant and stunning effects on the ease of reading. Sometimes the effect is small and sometimes there is no effect at all. Some assert that it is ‘dyslexics’ who are helped by these lenses or overlays. However, Wilkins et al (2001) report finding that around half of ‘normal’ students in their three samples experienced reading as easier, and did it better, through coloured overlays; some individuals improved by over thirty per cent. They found that “A substantial proportion of children reported symptoms of visual stress…” (ibid. p. 50) and it was particularly these children who improved most, and most reliably, when using their preferred colour overlay. Symptoms of ‘visual stress’ included letter movement, text blurring and uncomfortable brightness. Almost a third of those who noticed improvement were still voluntarily using their overlay at the end of the school year, eight months after being introduced to it.

Well and good, but Ritchie (2010) finds that “the evidence for the efficacy of coloured filters is insufficient to recommend the treatment.” He goes on to say:

The existence of visual stress as a diagnostic entity has also been questioned (Royal College of Opthalmologists, 2009). This thesis first describes the various theoretical perspectives behind the use of coloured filters, and provides an in-depth review of the current evidence. A combined crossover study and randomised controlled trial of the coloured filters used by the Irlen Institute, the major proponent of the treatment, is then described. This experiment, which set out to avoid the methodological problems observed in previous trials – most importantly, double-blinding was employed – failed to find any evidence of visual stress, or for the statistically or clinically significant benefit of coloured overlays for reading rate or comprehension on two separate reading tests, in a sample of 61 Primary School-age children with reading problems. This was despite 77% of the sample having been diagnosed with visual stress by an Irlen diagnostician and prescribed coloured overlays.

Clearly, something is going on, and when it works, it really works. But nobody seems to know why. Wilkins et al (2001) speculate that as ‘visual stress’ is reportedly more common among migraine and epilepsy sufferers they may all be due to a “hyperexciteable visual cortex”. Why not?

Scotopic sensitivity syndrome (or Meares-Irlen syndrome) is a syndrome of the visual system. As such it’s not specific to literacy, although it is capable, apparently, of dramatically affecting it. But for ‘dyslexia’ to have any meaning it must be a syndrome which is specific to literacy – not a syndrome relating to sight in general. So although the sometime success of Irlen lenses or coloured overlays at alleviating reading difficulty has some significance, it leaves the dyslexia debate approximately where it was before they came along.

Does that help?

Does dyslexia exist?

Schools are packed to the gunnels (whatever they are) with students diagnosed with dyslexia. And, of the hundreds of dyslexic students I’ve taught, many have languished helplessly in the doldrums of illiteracy while some seem suddenly to make rapid and remarkable progress. This year, two students who were presented to me as dyslexic have experienced very different trajectories.

One, let’s call him Ben, had spent Years 7 and 8 being taught English in very small groups of students identified as having ‘specific learning difficulties’. In Year 9 such students are put back into mainstream classes with the expectation that the work they’ve done in the previous 2 years will have equipped them to cope. Ben arrived in my class very worried about whether he was going to ‘look thick’ and with a very low estimation of his ability. He’s a quiet, hard-working chap, however, and wants to do well. I spent a fair bit of time working with Ben at the beginning of the year and, frankly, failed to see what the problem was: his reading was a little hesitant and his writing was inaccurate but full of good ideas and definitely showed signs of conscious crafting. One lesson, I was talking to him about his work and suggested some ways he could improve his spelling. The despondency of his response was heartbreaking; “I can’t spell, sir. I’m dyslexic.”

“Ben,” I told him, “that’s nonsense! Of course you can.” We spent some time going over doubling consonants, ‘i before e’ and a few other easy-to-implement gems and, before we knew it, his spelling had improved! We also did some work on various reading strategies like skimming and scanning and, guess what? His reading comprehension showed similar improvements. His confidence has grown massively and he’s now consistently producing C grade work. We’re now talking about what he needs to do to get an A in Year 11. If he carries on the way he has this year, he’s a shoo-in.

Then there’s Carrie. She has terrible attendance, her behaviour is awful and she produces little or no work. When I met her parents at parents’ evening, they told me that none of this was Carrie’s fault; she was dyslexic you see. I didn’t see. I pointed out that even though she might find English difficult that was no excuse for not trying. At that point we reached a bit of an impasse.

Things have got a little better because, frankly, I’m not prepared to accept the bare minimum of work that Carrie feels it’s acceptable to produce. Critique protocols have made quite an impact on her, and when she knows her work will be displayed publicly and will receive feedback she shows just what she’s capable of. And it’s not bad. Although she doesn’t work anywhere near as hard as Ben, her reading and writing have improved and she’s making what we might describe as ‘steady’ progress. But her attendance is slipping, she’s regularly excluded and there’s been talk of her having a ‘fresh start’. Through it all, her parents maintain that her dyslexia isn’t being catered for. I worry that she may not make it.

Professor Joe Elliott, at Durham University, struggles to find a difference between a child labelled ‘dyslexic’ and a child labelled ‘a poor reader’. In other words, there isn’t a special group of kids with a different intelligence who need special intervention to help them overcome their reading problem. There are simply too many ‘dyslexic’ children to make the term meaningful: once you get such a high number of kids labelled with a condition such as dyslexia (that’s around 375,000 in the UK), you’ve simply got to question whether there’s any real basis to the label.

But in a world where there seems to be an unquestioning acceptance of dyslexia’s existence this is not a popular view. The problem is caused, in part, by the casual, unthinking way in which we use the term. More often than not it’s used to describe any inexplicable deficit with reading, writing or spelling in an otherwise able student. We pass off the cause as something unknowable and neurological. As such, it’s no one’s fault, and teachers and students can shrug and pass the buck. So do dyslexics have problems not suffered by other poor readers? All sorts of symptoms have been put forward to justify the hypothesis, but it has never been proven. There is no scientific evidence that the syndrome exists.

And if “dyslexia” doesn’t refer to reading problems either – as the dyslexia establishment maintains – then it doesn’t refer to anything which has been scientifically established. Dyslexia is an emotionally loaded term; life tends to be worse for children who find reading difficult: compared with normal readers, they are more likely to have other problems – clumsiness, hyperactivity and poor short-term memory, for example – and having one such problem makes it more likely you’ll have another. Yet there is no evidence that these problems cause reading difficulties.

Poor short-term memory is a case in point. It’s the symptom most often quoted as distinguishing dyslexics from other poor readers, and those who have difficulty reading are more likely to suffer from it. Yet, however disabling poor short-term memory may be, evidence suggests it neither causes reading difficulties nor predicts the outcome of intervention. A study conducted by Torgesen in 2006 showed that out of 60 children with severe reading difficulties, only eight had poor short-term memories, while almost as many – seven – had very good short-term memories. And, crucially, the children with poor short-term memories benefited from help with their reading as much as the others.

But is there any compelling evidence for its existence? And do dyslexics have problems not suffered by other poor readers? Well, it’s worth noting that diagnosing and treating dyslexia is big business, and where this kind of commercial vested interest exists it’s always worth having a careful look at who’s saying what, and why. Perhaps surprisingly, there are almost as many theories on the causes and treatment of dyslexia as there are researchers, and the only constant appears to be the sometimes staggering inconsistencies which abound.

A quick dip into the literature on dyslexia illustrates the muddle:

The construct of learning disabilities has historically been difficult to define. (Fletcher 2003)

… the history of dyslexia is littered with theories that were once widely supported but now lie abandoned on the scrap heap … it is vital that we should continue to treat everything as questionable and to regard nothing as beyond dispute. Certainty is for tele-evangelists, not scientific researchers or teachers. (Ellis et al 1997 pp. 13-14) (their emphasis)

Definitions of dyslexia are notoriously varied and no single definition of dyslexia has succeeded in gaining a scientific acceptance which even approaches unanimity… Each researcher or clinician becomes attached to his or her own definition in a manner which is reminiscent of Humpty Dumpty in Lewis Carroll’s Through The Looking Glass – ‘When I use a word … it means just what I choose it to mean.’ Definitions … soon become muddied when the researcher or clinician is confronted with a variety of adult cases exhibiting highly heterogenous profiles. (Beaton et al 1997 p.2)


The diversity of theories concerning the biological underpinnings of dyslexia is impressive… It is clear there is some way to go before any consensus is reached regarding the biological basis of dyslexia … (ibid. pp. 4 – 5)

Students had individual clusters of the cognitive weaknesses usually associated with dyslexia, alongside clear strengths in some cases…They were also accompanied by widely varying individual configurations of literacy and other difficulties, so much so that the students themselves wondered if they were experiencing the same syndrome. The identification of dyslexia could not by itself predict the individual configurations, and the question of whether or not there was one distinctive syndrome became less important than the issue of learning to describe one’s particular situation to a world largely ignorant of these matters, eg “I am dyslexic and for me this means that I literally cannot write my own name, but I can read quite well and I am now using a word processor.” (Herrington 1995 pp. 6 – 7)

…the research literature provides no support for the notion that we need a scientific concept of dyslexia separate from other, more neutral, theoretical terms such as reading disabled, poor reader, less-skilled, etc. Yes, there is such a thing as dyslexia if by dyslexia we mean poor reading. But if this is what we mean, it appears that the term dyslexia no longer does the conceptual work that we thought it did. Indeed, whatever conceptual work the term is doing appears to be misleading. (Stanovich 1994 p. 588)

Over a decade ago … there was little evidence that poor readers of high and low IQ differed importantly in the primary processing mechanisms that were the cause of their reading failure. A further decade’s worth of empirical work on this issue has still failed to produce such evidence. (Stanovich & Stanovich 1997 p.3)

One of the fascinations of dyslexia for researchers is that, whatever one’s interest in human behaviour and performance, children with dyslexia will obligingly show interesting abnormalities in precisely that behaviour. (Nicolson & Fawcett 1999 p. 156)

This collection of syndromes masquerading under the umbrella of dyslexia has something of an unscientific scope; whatever symptoms or deficits researchers find are claimed as evidence of dyslexia. Everything is subsumed. The quote from Nicolson & Fawcett above says it all. Try substituting ‘dyslexia’ with ‘spina bifida’, or any other recognisable medical condition. If it were possible to say such a thing about spina bifida it would be clear that it was either a collection of syndromes which we were unable to distinguish from each other, or not a syndrome at all. No syndrome, however obliging, is going to show every symptom we look for. Although I’m no scientist, I think you’ll agree that this kind of thinking is very far from scientific.

So what is it?

Dys [Greek] means difficult, abnormal, impaired, and lexikos [also Greek] means pertaining to words.  So quite literally, dyslexia means difficulty with words (Catts & Kamhi, 2005). But despite this, definitions are many and various; some are so broad as to be almost meaningless, some are confused and imprecise, and some say next to nothing. There is little consensus.

One of the most widely accepted definitions, and the one used by the World Health Organisation is this:

Dyslexia is a disorder manifested by difficulty in learning to read despite conventional instruction, adequate intelligence and sociocultural opportunity. It is dependent upon fundamental cognitive disabilities which are frequently of constitutional origin.

But a little bit of unpicking reveals how little this actually says:

Dyslexia is a difficulty with reading which may only be diagnosed if there are no other obvious causes to hand (such as poor schooling, poor parenting, low IQ or social disadvantage). It might sometimes be caused by there being something wrong with the brain.

You see? This uncertainty simply defines dyslexia as an odd difficulty with reading, given an otherwise apparently normal educational and social history. This would make it impossible for a child from a socially deprived background to be ‘dyslexic’ at all.

The Dyslexia Institute (2013) defines dyslexia as:

…a specific type of learning difficulty that primarily affects the skills involved in accurate and fluent word reading and spelling. Characteristics of dyslexia include difficulties in areas such as phonological awareness, verbal memory and verbal processing speed.

They also say that dyslexia is “biological in origin”, which runs counter to most of the research, which concedes only that “a very small percentage of impaired readers may well be afflicted by basic cognitive deficits of biological origin” (Vellutino 2004).

The British Dyslexia Association (BDA Management Board 2007) says:

Dyslexia is a specific learning difficulty that mainly affects the development of literacy and language related skills. It is likely to be present at birth and to be life-long in its effects. It is characterised by difficulties with phonological processing, rapid naming, working memory, processing speed, and the automatic development of skills that may not match up to an individual’s other cognitive abilities.
It tends to be resistant to conventional teaching methods, but its effect can be mitigated by appropriately specific intervention, including the application of information technology and supportive counseling.

While this definition restricts itself to “literacy and language related skills”, it relates the difficulties to satisfyingly scientific-sounding ‘processing’ problems. But what, exactly, is being processed? What do our brains use as ‘information’, and what exactly are they doing when they ‘process’ it? ‘Processing’ is too vague a concept to be of much use and, as far as I can work out, cognitive science is in no position to assess it in a neurologically meaningful way. Basically, all this definition actually says is that the condition is made up of a hodge-podge of characteristics, some or all of which may or may not be present, and it repeats the comforting thought that a sufferer’s difficulties with literacy do not align with their intelligence. Which is fair enough: we’re all better than our limitations.

Elliott says the problem is that there is no uniform test for dyslexia:

Some tests look at memory, some at sounds and words, some at visual processing. The traditional route was to identify a child whose IQ was high, but whose reading level was low: that test is still being used in some places, although you could ask why look at a child’s IQ when deciding if they need special reading help? But the bottom line is that experts can’t agree precisely what set of problems make up the condition they call dyslexia: and if you can’t agree on what a condition is, how on earth can you test for it?

Maybe the problem is that diagnosis is really about trying to make students with poor literacy (and their parents) feel better about it. It’s much more convenient and comforting to blame the victim’s central nervous system. Occasionally, even internationally recognised experts let this one out of the bag:

…the term dyslexia assists parents and the child to make sense of occurrences they know to exist. They know the child has difficulty with reading and spelling; they need explanations which remove the sense of self-blame. (Pumfrey & Reason 1991 p. 69)

This is remarkably similar to the ‘it’s my hormones’ explanation of obesity. It may absolve us from responsibility, but it entirely fails to understand, or make any attempt to solve, the real problem which, as we all strongly suspect, has nothing to do with hormones.

And then sometimes researchers let another out of the bag as when Cooke says (2001 p. 49):

Miles (1995) has questioned whether there can be a single definition of dyslexia; she suggests instead that different people, and different groups, will want a definition to suit their own requirements. This is clearly correct …

This is shameless! If it’s OK just to pick our own personal definition to suit our own particular agenda, then we may as well give up. The condition we call ‘dyslexia’ has been researched for over a century, and it’s astonishing that such confusion still exists and that such woolly remarks are accepted in apparently serious, peer-reviewed scientific journals.

The fact is that no one seems to have a satisfying, meaningful definition of dyslexia that everyone else accepts. But the label continues to be slapped onto anyone with any kind of literacy problem. I’d argue that this is unhelpful and, ultimately, fraudulent.

But really, why all the fuss? As long as students get help for their unspecified ‘specific learning difficulty’ who cares what we call it?

Well, the dyslexia diagnosis industry has its casualties. For some students, being labelled as dyslexic does them more harm than good. Often, in my experience, it can be an excuse for not trying. Teachers may start to have lower expectations. We concentrate on the mechanics of reading and writing rather than purpose and flair, rules rather than writing. This is inevitable once we’ve attributed a student’s problems to a single, conceptually simple, innate and unalterable cause; classic soil in which to grow learned helplessness – and  not just in students. Once a diagnosis is made, other, simpler (but less lucrative) potential causes for poor performance are ignored. And what about those who don’t get diagnosed? Does that mean they’re simply stupid?

There are, I contend, two types of dyslexia: acquired and developmental.

Acquired dyslexia is the result of trauma to the brain occurring after literacy has been acquired. Some accident results in damage to the part of the brain which had learned literacy skills. Depending on the degree of damage, the skills will be correspondingly lost. The same applies to speech, of course. Many stroke victims have their speech centres damaged and show varying degrees of loss of the power of speech. This is horrible but makes perfect cognitive sense – if you damage the part of the brain which has learned to be responsible for a  particular skill then that skill will be correspondingly damaged. How could it be otherwise?

Developmental dyslexia is, in contrast, an utterly different animal. Here, there is assumed to be a congenital neurological deficit of some kind. This may be genetic but may also be the result of damage to the foetus during gestation. At any rate, developmental dyslexia is presumed to afflict those parts of the brain which will one day be expected to learn the skills of literacy: an innate defect in regions supposedly pre-wired for literacy and nothing else. Vellutino acknowledges that “a very small percentage of impaired readers may well be afflicted by basic cognitive deficits of biological origin, especially phonological deficits that lie at the root of their difficulties in learning to read” (2004 p. 30). But that’s it: a very small number. The rest are, by and large, victims of inadequate instruction.

Just because I’m not happy with this idea of developmental dyslexia doesn’t mean that I fail to recognise that lots of people have literacy difficulties and that these difficulties can be ‘cured’. But diagnosing dyslexia disempowers students and teachers alike. To accept otherwise is to descend into the damp and foetid cellars of educational pessimism, where learned helplessness grows like a fungus.

Right, I hear you cry: if it’s not dyslexia, what the hell is it? Well, there are many other explanations for peculiar difficulty with literacy, each more likely than a highly selective mis-wiring of the brain. Basically, though, I think most difficulties with language come down to the fact that “Reading and writing are not just cognitive activities – feelings run through them” (Barton 1994 p. 48). And Vellutino reports the following:

Results from recent intervention studies suggest that explanations of reading difficulties in most children must incorporate experiential and instructional deficits as possible causes of such difficulties, rather than focus exclusively on the types of cognitive and biological deficits that have predominated theory and research in this area of inquiry throughout the previous century. (2004 p3)

These points lead us, inexorably, to the Matthew effect.

The Matthew effect

For unto every one that hath shall be given, and he shall have abundance; but from him that hath not shall be taken away even that which he hath.

As I’ve written before, the Matthew effect is a huge factor in students’ literacy difficulties. Stanovich says that “… a strong bootstrapping mechanism that causes major individual differences in the development of reading skill is the volume of reading experience”. Daniel Rigney tells us that, “While good readers gain new skills very rapidly, and quickly move from learning to read to reading to learn, poor readers become increasingly frustrated with the act of reading, and try to avoid reading where possible.” The good reader may read several million words a year, whereas the poor reader reads only a few thousand (and probably hates every one) – as Robert MacFarlane says, “Every hour spent reading is an hour spent learning to write”, and we all know what practice makes! This is the Matthew effect: the rich get richer while the poor get poorer. The simple fact that less literate people read a great deal less than more literate people makes it more difficult for them to progress. Hirsch tells us that those “who possess intellectual capital when they first arrive at school have the mental scaffolding and Velcro to catch hold of what is going on, and they can turn the new knowledge into still more Velcro to gain still more knowledge”. It’s small wonder that this early advantage can never be overtaken.

Many of the symptoms that are said to identify dyslexics are now believed to be the consequence of reading difficulties, not their cause. Compared with children who read a lot, those who read little suffer educational and intellectual damage: their writing and spelling are poorer and they have less ability to organise themselves. And all poor readers are likely to suffer such problems whether they have been diagnosed as dyslexic or not. For instance, most poor readers suffer with sound awareness problems but beyond this, their conditions are so wide-ranging that it is impossible to identify any sub-group who, on the basis of their literacy difficulties, could usefully be called ‘dyslexic’.

Sadly, though, the more problems a student suffers, the more difficult it may be for them to resolve their literacy problems. Worse, the longer these problems remain unresolved, the further they will fall behind and the worse their plight becomes. For many, even if their reading improves, it can be next to impossible to catch up. Despite this, there are cases of apparently odd difficulty in acquiring and using literacy, but they almost always include some or all of the following steps:

– Relatively little (sometimes no) literacy activity in the home

– Very early failure in school, leading to general anxiety

– Literacy is experienced as impossibly difficult and humiliating

– Students become highly risk-averse, further draining motivation and the ability to learn or perform

– Students are diagnosed as ‘special needs’ and decide they are ‘thick’

What should we do?

Elliott says, “I can understand parents wanting to get this label, because there’s a human need for labels. But what parents believe is that the label will lead to an intervention, in much the same way that a diagnosis of a broken arm leads to effective treatment. And what I’d argue is that the intervention they receive when their child is labelled dyslexic isn’t effective – and furthermore, it’s very expensive and time-consuming, and it diverts resources away from what could be being done better to help all children with reading problems. In fact, reading isn’t something that requires a high level of intelligence. Amongst children who struggle to read, you find some with a high IQ, some in the middle and some with a low IQ.” And interestingly, researchers at York University have found that low ability students can be helped just as much with reading problems as able students, providing the right reading programme is implemented in the right way. If resources are thrown at a particular group of students suffering from a particular syndrome, what happens to students who haven’t paid the £300 or so needed to receive this label?

Maybe we should agree that either every child with poor reading ability is dyslexic, or none of them is.

Teachers are routinely faced with students with officially sanctioned diagnoses of dyslexia. What do you do, if you think, as I do, that it’s a load of old pony? You have three choices: you can challenge the diagnosis, reinforce it or ignore it. Even though I’m unconvinced, I can’t say with absolute certainty that dyslexia doesn’t exist. We all remain too ignorant as yet for dogmatism. For this reason, and also because the diagnosis may be helpful to the student (it’s certainly better than being regarded as unintelligent), I wouldn’t recommend a direct challenge to the diagnosis. Neither, though, do I recommend it be accepted – this will reinforce the disability fantasy and will lead to learned helplessness. So then, the third way: the ‘Mmmm…’ approach. When told a student is ‘dyslexic’ I say “Mmmm…” and then teach as if the diagnosis had never been made; I treat the student as completely ‘normal’. I dismiss dyslexia from my own mind and, hopefully, the student will feel it fade from theirs too. It is at this point that progress can be made.

I could be very wrong about this. Certainly lots of well-intentioned, knowledgeable people think so. But wrong or not, the best approach to dyslexia that I’ve come up with is not to indulge sufferers in the belief that they are doomed, cursed or otherwise blighted by a condition over which they have no control. We all have ‘specific learning difficulties’ of one form or another and they’re never an excuse for not trying. We should always encourage students to overcome their difficulties and provide them with the tools to cope with the curve balls their brains throw at them. For some, having a label may be helpful; for others it’s most definitely not. My own experience suggests that patience, compassion and high expectations are the very best that a teacher can offer any student. And this seems to work, more often than not.

If you’re interested, there’s a fascinating Channel 4 documentary called The Dyslexia Myth to watch as well.

And here is a fascinating unpicking of some recent research from Yale on the likelihood of children having a genetic form of dyslexia.

Related posts

Magic glasses and the Meares-Irlen Syndrome

The effect of ‘affect’ on learning and performance

The Matthew Effect – why literacy is so important

Specific reading disability (dyslexia): what have we learned in the past four decades? by Vellutino, Fletcher, Snowling and Scanlon

Julian Elliott also has a very good chapter on dyslexia in Bad Education: Debunking Myths in Education

Grit vs Flow – what’s better for learning?

At least it wasn’t Brain Gym!


Having just put up a new classroom display exhorting the benefits of ‘flow’ and using the idea in training materials, I have just had this thrust in front of my slack jawed face by my new bête noire, Alex Quigley! (NB: this is not true – Alex is a thoroughly decent chap, and a man I admire greatly.)

I’ve been fascinated by the idea of ‘flow’ since reading Mihály Csíkszentmihályi‘s book some years ago. The idea is that if you’re totally immersed in the experience of performing a task you will perform it to a higher standard. It has been billed as “the ultimate experience in harnessing the emotions in the service of performing and learning.” Who wouldn’t want to feel “a feeling of spontaneous joy, even rapture, while performing a task”? Sounds good, right? Maybe too good.

With arch educational myth-buster Tom Bennett’s warning against being an ideas magpie rattling round in my poor overburdened brain, the sense of wounded pride at being so easily gulled is an almost physical thing. I should have known better. As he says, “75% of the educational research … seems to believe that science, like Adam, sprung ex nihilo, and can be invented in a day.”

Cal Newport’s rather wonderful blog Study Hacks sets out the following very interesting advice for budding concert pianists to counter the feel-good molasses that is flow:

Avoid Flow. Do What Does Not Come Easy.

“The mistake most weak pianists make is playing, not practicing. If you walk into a music hall at a local university, you’ll hear people ‘playing’ by running through their pieces. This is a huge mistake. Strong pianists drill the most difficult parts of their music, rarely, if ever playing through their pieces in entirety.”

To Master a Skill, Master Something Harder.

“Strong pianists find clever ways to ‘complicate’ the difficult parts of their music. If we have problem playing something with clarity, we complicate by playing the passage with alternating accent patterns. If we have problems with speed, we confound the rhythms.”

Systematically Eliminate Weakness.

“Strong pianists know our weaknesses and use them to create strength. I have sharp ears, but I am not as in touch with the physical component of piano playing. So, I practice on a mute keyboard.”

Create Beauty, Don’t Avoid Ugliness.

“Weak pianists make music a reactive task, not a creative task. They start, and react to their performance, fixing problems as they go along. Strong pianists, on the other hand, have an image of what a perfect performance should be like that includes all of the relevant senses. Before we sit down, we know what the piece needs to feel, sound, and even look like in excruciating detail. In performance, weak pianists try to reactively move away from mistakes, while strong pianists move towards a perfect mental image.”

And this advice seems equally pertinent for teachers as well as our students. I love the idea that practice should seek to ‘create beauty’. And as me old ma always said, you ‘ave to suffer to be bootiful!

I made this point in a post on deliberate practice last year:

Hattie says in Visible Learning for Teachers, “Sometimes learning is not fun. Instead, it is just hard work; it is deliberate practice; it is simply doing some things many times over.”

This idea has been knocking around for quite a while. Way back in 1898 Bryan & Harter were apparently telling us that it takes 10 years to become an expert in whatever field you choose to pursue. This was picked up more recently by Malcolm Gladwell in his book Outliers and has since become something of an industry with books like Bounce and The Talent Code dominating best seller lists. The current thinking is that it takes 10,000 hours of deliberate practice to achieve mastery of anything. It’s worth noting here that practice does not mean rote learning or repetitive ‘skill and drill’.

Guess what? Turns out Gladwell’s 10,000 hours rule is guff too. But, fortunately (else my self-respect might be entirely shredded), Ericsson’s theory of deliberate practice still appears to hold up. Just to recap, deliberate practice is intentional, aimed at improving performance, pitched just beyond your current skill level, combined with immediate feedback, and repetitious. When these conditions are met, practice improves accuracy and speed of performance on cognitive, perceptual, and motor tasks.

Angela Duckworth (no relation to Vera) has looked at deliberate practice in relation to success at Spelling Bees and explored the concept of ‘grit’. She reports that,

With each year of additional preparation, spellers devoted an increasing proportion of their preparation time to deliberate practice, despite rating the experience of such activities as more effortful and less enjoyable than the alternative preparation activities. Grittier spellers engaged in deliberate practice more so than their less gritty counterparts, and hours of deliberate practice fully mediated the prospective association between grit and spelling performance. Contrary to our prediction, we did not find evidence that the inverse association between the trait of openness to experience and spelling performance was mediated by any of the three preparation activities measured in this study.

So what can we learn from all this?

Well, firstly, there’s no substitute for hard work. And, perhaps, without that feeling that what you’re doing is actually a bit of a slog you won’t ever achieve real mastery. And secondly, sometimes the hard work is checking your facts. Mea culpa. I’m not yet sure whether ‘flow’ is completely blown out of the water as a state to aspire to; possibly this might come down to the difference between learning and performance. Flow looks great, but grit results in learning; flow is the end, and grit is the means.

Is this a false dichotomy? Maybe. But I still have to rethink my display, and my presentation for TLA Berkhamsted!


With the help of Roo Stenning (see comment below) and Pete Jones (on graphic design) we have arrived at a Grand Unified Theory of the Grit/Flow cycle:

And for an even more coherent explanation of deliberate practice as it relates to teacher development, read this wonderful post from Alex Quigley

Related posts

Deliberating about practice

Easy vs Hard

Go with the flow: the 2 minute lesson plan

The problem with progress part 1: learning vs performance

The problem with progress Part 2: Designing a curriculum for learning

Can progress be both rapid and sustained?

We start out with the aim of making the important measurable and end up making only the measurable important.

Dylan Wiliam

Does slow and steady win the race?

‘Rapid and sustained progress’ is Ofsted’s key indicator for success. Schools across the land chase this chimera like demented puppies chasing their own tails. But just when you think you’ve gripped it firmly between your slavering jaws, the damn thing changes and slips away.

You see, the more I look into it, the more I’m convinced that progress cannot be both rapid and sustained. You cannot eat your cake and have it: we either focus on the long term goal of learning, or give in to the short term pressures of performance.

This last week has been a watershed. Over the past year or so I have become increasingly certain that making progress in lessons is a nonsense and any attempt to get students to demonstrate their progress is a meaningless pantomime that benefits no one. The past few days have seen any remaining doubts shattered.

The arguments laid out here should be adequate to convince even the most entrenched and wrongheaded champions of ‘progress in lessons’.

But there’s a further problem. Basically, slowing down the speed at which students learn increases long term retention and transfer of knowledge.  We know from the Hare and the Tortoise that travelling faster is not always better. And as in folklore, so in education; in our attempts to cover the curriculum we can sacrifice students’ learning. We’re all under increasing pressure to teach to the test and the idea of not cramming in the content is, frankly, a bit unnerving. We’re caught between a rock and a hard place: slow down and risk lack of coverage, or speed up and sacrifice depth of learning.

Relying on direct instruction would seem more efficient and predictable than messing around with enquiry and discovery learning and, unsurprisingly perhaps, this is borne out by research. In our efforts to make sure we cover the course, engaging students in time-consuming, cognitively demanding activities that nurture deep understanding appears an unaffordable luxury. In GCSE English courses, reading and analysing an entire book has become a relic of a half-forgotten and happier past. Breadth trumps depth. And the more pressure you’re under, the more you’re likely to skip.

The idea of pacing asks us to plan our programmes of study so that learning is chunked and topics are arranged coherently, with a clear sense of how long different elements will take to teach. Obviously, we also need to allow for some unpredictability depending on the particular mix of kids in front of us: as teachers we need to keep our expectations high but keep a weather eye on areas in which our students struggle. In this way we can arrive at the most efficient way of rigorously covering our content while still allowing time for the experimentation and inquiry which is so vital for long-term retention. This is something Maurice Holt and the Slow Education gang have been bandying about for some time, but I was fascinated to discover the work of cognitive psychologist Robert Bjork, which seems to bang the same drum.

Bjork describes conditions which slow the pace of learning but increase long-term performance as ‘desirable difficulties’. Now, in a world in which ‘rapid and sustained progress’ is sought, we might have a problem: rapid progress may well be the enemy of sustained progress. And as such, techniques which favour sustained progress at the expense of speed might well go unappreciated by a pitiless inspection regime.

But, as ever, we need to do what is right, hold our nerve and be ready to explain our thinking. Here’s an outline of some of the techniques we can use to concentrate on sustaining progress:

Variation – As we’re all aware, variety is the spice of life; a steady diet of the same-old same-old, no matter how delicious, is enough to put anyone off. So it should come as no surprise that using the same lesson structures will, eventually, start to pall. The research on variation in lesson design looks specifically at mixing up deep and surface learning strategies rather than trying to cram in as much deep learning as students can stomach. This may at first seem counter-intuitive; surely we’re better off prompting students to make profound connections between the things they know and challenging them to make increasingly abstract generalisations and hypotheses? No, apparently not. The theory suggests that getting students to remember facts and expand their knowledge base is just as important as getting them to creatively manipulate all the stuff they’re digesting.

The point is that if we are more interested in long-term retention and processing we need to provide students with a balanced diet of deep and superficial knowledge. This may be less exciting in the short term and, certainly, messing about with hexagons can look really impressive to an observer in a way that learning facts doesn’t, but we need to keep our eyes on the prize and remember that being able to perform spectacularly in a lesson is not the same as being able to perform well independently in an exam.

Spacing – The concept of designing your schemes of learning so that new concepts and important information are regularly revisited is nothing new. I first came across it several years ago and, ironically, promptly forgot about it. I was reminded of it when reading Nuthall’s essential The Hidden Lives of Learners and stumbled across his insight that new information has to be encountered on at least three different occasions in order to be retained. Bjork contends that spacing “is one of the most robust results in all of cognitive psychology and has been shown to be effective over a large range of stimuli and retention intervals from nonsense syllables to foreign language learning across many months.” And if we increase the spacing between reminding students about new information, this “enhances learning because it decreases accessibility of the to-be-learned information”. Or, in other words, the harder you have to work to recall something, the more likely you are to remember it in the future.

Here’s another clip of Bjork explaining the effects of spacing on retention:

So, we need to design our curriculum to cover and re-cover information. There are various competing theories on the optimum spacing of learning, but as long as we work out in advance when and how we’re going to revisit what we want students to retain, we should be OK. One piece of home-spun common sense is that we should ‘input less, output more’. What this means is that, having encountered some facts, we will learn far more if we try in some way to recreate this knowledge rather than just reviewing what we’ve learned. Writing this post is far better for my retention of all this cognitive psychology than simply reading it over and over. Although this is something every teacher knows instinctively, it’s nice to have some of our biases confirmed by the boffins.
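To make the planning idea concrete, here’s a toy sketch of my own (not from Bjork or Nuthall, and the parameter names and numbers are purely illustrative): an expanding review schedule that works out in advance when a topic will be revisited, with each gap longer than the last so that recall gets harder, and hopefully more durable, each time.

```python
# Toy illustration of planning spaced revisits in advance.
# Each topic is revisited a fixed number of times, with the gap
# between revisits expanding so recall effort increases each time.

def review_schedule(first_lesson, visits=3, first_gap=2, growth=2):
    """Return the lesson numbers at which a topic should be revisited.

    first_lesson: the lesson in which the topic is first taught
    visits: how many revisits to plan (Nuthall's insight suggests at
            least three encounters in total)
    first_gap: lessons to wait before the first revisit
    growth: factor by which each successive gap expands
    """
    schedule = []
    lesson, gap = first_lesson, first_gap
    for _ in range(visits):
        lesson += gap
        schedule.append(lesson)
        gap *= growth
    return schedule

# A topic introduced in lesson 1, revisited with expanding gaps:
print(review_schedule(1))  # [3, 7, 15]
```

Whether the gaps should double, or follow some other pattern, is exactly the kind of question the competing theories disagree on; the point is only that the revisits are decided before teaching begins, not left to chance.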

Interleaving vs blocking – If we accept that spacing works, then interleaving is a great way to design your scheme of learning. If we’re just hanging around for a few days waiting for the optimum time to have elapsed before reteaching what will we fill the intervening lessons with? Happily, interleaving provides the answer.

Traditionally we ‘block’ learning. This means that students exhaustively focus on one particular concept or type of problem until they are considered to have mastered it, then move on to another, related topic, and so on until they have studied all the components of a course in discrete blocks. Interleaving, on the other hand, involves doing a bit of everything at the same time, so that students might tackle several concepts or try to solve several different kinds of problem at once. Here’s the kicker: when students’ learning is ‘blocked’, they perform much better during lessons – it looks like they’re learning. But when they’ve finished studying all their blocks of knowledge and are tested at the end of a course, their scores decrease fairly dramatically. When teaching interleaves knowledge, students perform worse during lessons but their retention at the end of a course appears to be dramatically better.
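The difference is purely one of ordering, which a tiny sketch makes plain (my own illustration; the topic names are made up). The same material appears the same number of times in both sequences – only the arrangement changes:

```python
# Toy illustration of blocked vs interleaved practice sequences.
# Identical content, identical quantity; only the ordering differs.
from itertools import chain

def blocked(topics, reps):
    """AAA BBB CCC: exhaust one topic before moving to the next."""
    return list(chain.from_iterable([t] * reps for t in topics))

def interleaved(topics, reps):
    """ABC ABC ABC: a bit of everything in every session."""
    return list(chain.from_iterable(topics for _ in range(reps)))

topics = ["fractions", "ratio", "percentages"]
print(blocked(topics, 2))
# ['fractions', 'fractions', 'ratio', 'ratio', 'percentages', 'percentages']
print(interleaved(topics, 2))
# ['fractions', 'ratio', 'percentages', 'fractions', 'ratio', 'percentages']
```

The blocked sequence is the one that looks impressive in a single observed lesson; the interleaved one is the one the end-of-course research favours.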

The observant among you will have spotted a couple of problems: if we observe lessons looking for evidence of progress, we will encourage teachers to block learning so that students perform better at the time of the observation. But in a system which (increasingly) relies on terminal exams, teachers who interleave learning (and their students) should come out on top. If the research is accurate, this really is a no-brainer.

Here’s Bjork again:

Feedback – Apparently, delaying and reducing feedback promotes learning. But this can’t be right, can it? Surely feedback is the most effective thing a teacher can do? Well, yes it is, but sometimes less is more.

I recently signed up to Dr Will Thalheimer’s subscription service on feedback and he has this to say:

Feedback (in most learning situations) tends to be more effective if it is delayed. It works the same way as spaced repetitions. In general, the longer the delay the better, up to a point where the delay can reduce learning.

In addition, some research shows that reducing the frequency of feedback can actually increase learning. Giving feedback after only half of all activities had more impact on long-term retention than giving feedback on all of them. There are two reasons for this:

1. Frequent feedback makes students too dependent on external validation and prevents students from developing an ability to rely on their own judgement.

2. Feedback works by “facilitating next-response planning and retrieval. In this sense, frequent feedback might provide too much facilitation in the planning of the subsequent response, thereby reducing the participant’s need to perform memory retrieval operations thought to be critical for learning”.

I’d advise taking all this with a large pinch of common sense, but it’s worth considering whether the way we give feedback might be preventing students from becoming sufficiently resilient and independent.


It seems that many of the things we’re told to do in lessons because they’re great for demonstrating progress may actually be getting in the way of deep learning. If we accept that performance is not a reliable indicator of learning then we may have a problem. Most current educational thinking is all about checking students’ performance in lessons to judge their learning so that we know what to teach them next lesson. We’ve been labouring under the misapprehension that we need to check progress in lessons, otherwise we’ll have no idea what students have learned. But real learning takes time. As Nuthall points out, “learning is invisible and cannot be seen in the activities of the teacher or students”. The fact that learning and forgetting can happen simultaneously means that it is “impossible for teachers to judge what their students are learning without much more detail and individually differentiated data than they have available in the classroom.”

So, instead of setting up activities which test students’ current performance, using the results as evidence of progress to plan future lessons in the belief that we know what our students have learned, we should listen to cognitive psychologists about how the brain works and how learning happens, and design curricula and lessons accordingly.

Let’s focus on learning rather than performance, and let’s focus on progress which is sustained rather than rapid.

Next post: strategies for designing lessons based on what cognitive psychology tells us: lessons for learning

Related posts

Learning vs progress

What’s deep learning and how do you do it?

The problem with progress Part 1: learning vs performance

What’s more important? Learning or progress?


Take that progress! We want learning

We’ve known since the publication of Ofsted’s Moving English Forward in March last year that demonstrating progress is not the be-all and end-all of an inspector’s judgements, but just in case anyone was in any doubt, Kev Bartle has forensically scoured Ofsted’s Inspection Handbook and come to these damning conclusions.

He unequivocally states that, “There is no such thing as progress within lessons. There is only learning” before going on to say:

Even Ofsted (the big organisation but sadly not always the individual inspectors or inspection teams) realise that ‘progress’ is simply a numerical measurement of the distance between a start point and an end point and therefore CANNOT IN ITSELF BE OBSERVED IN LESSONS other than through assessing how much students have learned. ‘Progress in lessons’ is the very definition of a black box into which we, as teachers and leaders, need to shine a light.

As often seems to happen, I encounter new information when I’m ready to process it and yesterday I came across this (thanks to the prodding of the hugely knowledgeable Cristina Milos) from Robert Bjork:

Bjork says that learning and performance should be seen as distinct and should be dissociated in the minds of teachers. Performance is measurable, but learning must be inferred from performance: it cannot be observed directly. That is to say, performance is easy to observe whereas learning is not. You can tick a box to show that students’ performance has moved from x to y, but you sometimes can’t tell whether learning has taken place. There are many instances where learning occurs but performance in the short term doesn’t improve, and there are instances where performance improves but little learning seems to happen in the long term.

Learning is, as Wiliam said, “a liminal process, at the boundary between control and chaos”*. And the problem is compounded by the fact that current performance is an unreliable indicator of learning. Performance can be propped up by predictability and by cues that are present during the lesson but won’t be present when the information is needed later. This can make it seem that a student is making rapid progress when there may not actually be any learning happening.

This is the Monkey Dance, and it is a fairly accurate description of what goes on in far too many observed lessons. Teachers are primed to demonstrate their students’ performance and their observer can nod, smile and tick away to their embittered heart’s content. But there may be little or no learning taking place.

So clearly the problem is this: if we’re going to dissociate learning and performance (as we so obviously need to do), what strategies will promote learning? Well, very helpfully, in the final 30 seconds Bjork says the following:

When you introduce things like variability, spacing, reducing the feedback, interleaving things to be learned rather than blocking the things to be learned, that appears to slow down the learning process and poses challenges but enhances long-term retention and transfer.

Robert Bjork

Each of these ideas deserves its own blog post, and this is something I’ll beaver away at over half term. Any suggestions on excellent ways to embed pedagogy that promotes learning rather than progress will be very gratefully received.

As ever, Darren Mead got there first and makes the same points, but more amusingly, here:

Post script: You will of course have noticed that I’m using ‘progress’ and ‘performance’ interchangeably; I think this is because they’re the same thing, but please do feel free to dissent.

*Liminality is a fascinating subject and one worth reading more about. You could do worse than start here.

Related posts

Myths: what Ofsted want

What is learning?

Is there a right way to teach?