The Wisdom of Crowds: Weighing an ox and learner-centered assessment

By Devin Vodicka

“The great Victorian polymath, Sir Francis Galton was at a country fair in 1906, so the story goes, and came across a competition where you had to guess the weight of an ox. Once the competition was over Galton, an explorer, meteorologist, scientist and statistician, took the 787 guesses and calculated the average, which came to 1,197 pounds. The actual weight of the ox was 1,198 pounds. In effect, the crowd had provided a near perfect answer.”

https://theconversation.com/how-to-unleash-the-wisdom-of-crowds-52774

This story, recounted in Surowiecki’s book “The Wisdom of Crowds,” illustrates an enduring truth: aggregated input from groups tends to be more accurate than the judgment of any individual. We see this tendency play out in society through complex systems such as the stock market and sports wagering. A compelling television example was the show Who Wants to Be a Millionaire, where the most popular answer from the polled studio audience was correct 91% of the time.

The logic behind this phenomenon makes perfect sense, and it reinforces the benefits of an inclusive approach to gathering input from diverse perspectives:

“The wisdom of crowds capitalizes on the fact that when people make errors, those errors aren’t always the same. Some people will tend to overestimate, and some to underestimate. When enough of these errors are averaged together, they cancel each other out, resulting in a more accurate estimate. That’s why the effect benefits from a large and diverse “crowd.” If people are similar in the sense that they tend to make the same errors, then their errors won’t cancel each other out.” 

https://www.npr.org/sections/13.7/2018/03/12/592868569/no-man-is-an-island-the-wisdom-of-deliberating-crowds
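The cancellation effect described above is easy to see in a minimal simulation. The model below is a hypothetical sketch, not data from Galton's actual fair: it assumes each of 787 guessers makes an independent, roughly symmetric error around the true weight, and then compares the crowd's averaged estimate against the typical individual error.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # pounds, the ox in Galton's 1906 anecdote
N_GUESSERS = 787    # number of guesses in the story

# Assumed error model: each person's guess is off by an independent,
# roughly symmetric amount (some overestimate, some underestimate).
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(N_GUESSERS)]

# The crowd's estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# Compare against how far off a typical individual guess is.
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd estimate: {crowd_estimate:.0f} lb (error {crowd_error:.1f} lb)")
print(f"Average individual error: {avg_individual_error:.1f} lb")
```

Under these assumptions the averaged estimate lands within a few pounds of the true weight, while a typical individual guess is off by over a hundred pounds. If instead every guesser shared the same bias (say, everyone overestimating), the errors would not cancel, which is exactly why the effect depends on a large and diverse crowd.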

What does guessing the weight of an ox have to do with education? Unfortunately, our educational systems often take a different approach than the one suggested above. Instead of valuing input and feedback from multiple perspectives, in education we tend to overvalue episodic, high-stakes assessments. Historically, a single test such as the SAT or ACT has had a huge influence on a student’s opportunities to transition to higher education. And while we are slowly expanding our accountability systems, we have too often focused on end-of-year standardized tests in language arts and mathematics.

This overemphasis on single measures is rooted in a belief that assessments must be designed for statistical reliability and construct validity. Given the massive investment in sampling and research that such design requires, this belief leads to a reliance on externally created, specialized instruments, which may also offer the benefit of comparability across communities.

And while I believe there is value in these statistically validated assessments, studies have shown that grade point averages tend to have greater predictive value than the SAT or ACT. As we know, grades are not highly reliable because individual teachers use different weighting scales and incorporate varied approaches into their grading practices. And yet the combination of multiple inputs over many years yields a cumulative GPA that is much like the collective wisdom of the crowd guessing the weight of the ox. A single, specific grade may include an error, but “When enough of these errors are averaged together, they cancel each other out, resulting in a more accurate estimate.”

In addition, unlike the weight of an ox, many of the things we say matter in education are difficult to measure. In fact, many of the most important elements of education–such as curiosity, empathy, collaboration, and contextualized problem-solving–are not binary outcomes and are never fully mastered. While I won’t go into detail here about those challenges, I recommend checking out “Measuring What Matters” for more information on how to approach them.

In summary, we’ve been conditioned to think about assessment through a summative lens with an overreliance on externally-created, high-reliability, high-stakes, episodic tests. The results of these tests have historically been most valuable for audiences other than the students with particular benefits for admissions officers and policy makers who use the information to rank, sort, and select. 

What if we stepped back and reframed entirely?  What if, instead of a focus on summative tests, we reframed around the challenge and opportunity of feedback to inform learning? 

What if the student was the most important audience for the feedback? 

This holistic review would result in a number of key shifts and one of the important considerations would be to embrace the idea that many diverse perspectives and inputs are most likely to accurately inform the learning process.  

Professor David Conley refers to the benefits of an abundance of input as “cumulative validity,” which, according to Tom Vander Ark, could render high-stakes episodic testing obsolete. This concept holds high appeal, particularly when we think of the knowledge, habits, and skills that comprise whole-learner outcomes and best support lifelong learning.

We may be surprised to learn that there are important lessons in the weighing of an ox over a hundred years ago. It turns out we can learn a lot from the “wisdom of crowds”: there are important perspectives that can inform learning that our assessment systems have been undervaluing.

When I engaged in a research project to determine which other forms of input may be helpful, I solicited input from students, teachers, administrators, families, and researchers. In addition to academic assessments, self-reflection, peer feedback, educator observations, and feedback from non-classroom-based “experts” are all valuable perspectives to inform the learner.

Let’s move away from the expectation that there is a singular authority with expertise, and let’s embrace the reality that it is the combination of many inputs (multiple measures!) from many perspectives that best informs the learner. Let’s be sure to align what we are measuring with what we say is important, and move away from assessment as a summative evaluation toward feedback as a means to inform learning.

Now is the time to make these changes. Leveraging the wisdom of crowds and appreciating multiple perspectives to inform learning is a necessary shift if we want to support learners who know who they are, thrive in community, and actively engage in the world as their best selves. 

Check out the book Learner-Centered Leadership: A Blueprint for Transformational Change in Learning Communities for more insights, reflections, and suggestions.

Use #LCLeadership to share your ideas

