In his book The End of Average, Todd Rose begins with the story of a rash of mysterious crashes of United States Air Force planes in the late 1940s. After multiple inquiries led nowhere, researchers wondered whether pilots had simply gotten bigger since 1926, when the cockpit had been designed around average body sizes. Using the ten dimensions of size most relevant to flying, one of the researchers made a startling discovery: of the 4,063 pilots measured, not one airman fit within the average range on all ten dimensions. Even more surprising, when he used only three of those dimensions, fewer than 3.5 percent of pilots were average sized on all three. In other words, there is no such thing as an average pilot. As Rose puts it, “If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.” In an environment where split-second reaction times are demanded, a lever just out of reach can have deadly consequences. The answer was adjustable seating. Not only did it prevent deaths, but it opened the possibility for people who aren’t even close to “average” – like women – to become pilots.
The dimensions of a learner are even more multi-faceted and complex, and they diverge, we know, just as widely. Yet we continue to measure our children against averages that don’t fit anyone, to apply solutions based on averages, to focus on “best practice” gleaned, of course, through averages. Consider John Hattie’s widely touted list, a synthesis of now more than 1,200 meta-analyses of influences on learning, ranked according to their effects on student achievement. How is an effect size calculated? By dividing the observed change in average scores by the standard deviation of those scores. Hattie chooses 0.4 as the point at which an effect size is large enough to make a real difference to students. How did he arrive at that number? It is the average effect size across the thousands of interventions studied.
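To make that calculation concrete, here is a small worked example (the numbers are hypothetical, chosen only to illustrate the formula, not taken from Hattie’s data). Suppose an intervention raises a class’s average score from 60 to 66 on a test where scores have a standard deviation of 12. Then:

effect size = (post-test mean − pre-test mean) / standard deviation = (66 − 60) / 12 = 0.5

That clears Hattie’s 0.4 threshold. Notice, though, that the figure is built entirely out of group averages; it tells us nothing about how any individual child’s score changed.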
Our focus on “best practice” is like lavishing all our time on refining the fixed pilot seat, making it ever more precisely fitted to the average. The trouble is, no matter how effective our strategies are “on average,” they don’t necessarily (or even likely) fit the children in front of us in our classrooms. Perhaps it’s time to think in a different direction entirely. Who knows what possibilities might open.