Williams: Schools are more likely to do what’s easiest for them if no one’s watching. Why standardized tests are critically useful, especially now
Conor Williams | December 20, 2021
A clammy, sniffling toddler in the Washington, D.C. park near my house would have looked and sounded pretty normal — back in January 2020. But now, folks were giving the maskless toddler and her parents a wide berth as the two had an animated argument about their community’s right to know about those sniffles.
Did they really have to get a COVID test for the kid? Sure, she seemed sick, but maybe if they just made up some other reason to keep her home from child care for a day or two, she’d get better? Because if they did get that test, and if it were to come back positive, the child care center’s COVID policy would require them to keep her home for more than a week to quarantine.
It was a masterclass in the motivated reasoning that has prolonged the pandemic. They avoided getting key information, framed some consequences out of the picture — accurately diagnosing their child’s illness, infecting others, etc. — and then picked a course of action around what would be easiest for them. In our new normal, this is as horrifying as it is predictable. But, as anyone who’s ever groaned at their car’s “Check Engine” light or wondered if that mole on their elbow is growing knows, problems don’t evaporate just because we refuse to find out.
That’s why, as the pandemic finally allows schools to get back to safe, universal, uninterrupted in-person instruction, it’s important that they administer the full battery of annual federally mandated assessments. These tests make up a relatively small part of the assessment footprint in U.S. schools: annual math and English Language Arts tests in elementary and middle school (and once more in high school); one science test each in elementary, middle, and high school; and annual assessments of English learners’ progress learning English. And yet, they provide critical data points for measuring the depths of the pandemic’s effects on students’ learning.
“This is controversial, and not everybody loves it, but I think we have to assess where kids are,” former Secretary of Education Arne Duncan explained why this matters on a panel at the end of last summer. “Let’s figure out what their strengths and weaknesses are, where they are, and then hold ourselves accountable as educators: can we help accelerate them? Can we help them move? To somehow think that we can just guess, or just assume by looking at kids that we know where they are today, for me, that’s education malpractice.”
Sure, in general and as a concept, almost no one loves tests. But after two pandemic-disrupted school years, teachers, school leaders and policymakers are starving for better information about how pandemic learning models have — and haven’t — worked for different groups of students.
Notwithstanding the widespread consensus that the pandemic widened longstanding opportunity and achievement gaps in American schools, data on the details remain limited. As The 74 reported this year, existing evidence on the pandemic’s impacts is piecemeal: “The most disadvantaged children were also much less likely to take last spring’s assessments, suggesting that educators don’t yet have any idea how much learning loss their students have suffered.” This is particularly true for English learners (ELs). A recent report from WIDA — the English language proficiency testing consortium serving a majority of U.S. states — found that the number of ELs tested last school year was far below normal levels.
However tempting it may be to insist that schools should simply rush past measuring students’ learning in favor of accelerating instruction, there’s an incontrovertible need for a comprehensive picture of what the pandemic stole from students and schools.
To be sure, many teachers also want more information. On the online discussion forum I run for educators working with English learners during the pandemic, one of the most frequently asked questions is, “How can we get data on students’ English language progress right now?” This challenge has been exacerbated by the difficulty of getting valid and reliable data on language learning via virtual versions of these tests.
Some critics complain that the annual math and ELA assessments are not primarily designed to guide instruction. This is true. However, they are a critical way to focus policymakers’ and the public’s attention on the ways that U.S. public education remains systemically biased against persistently marginalized communities — families of color, English learners, low-income students and others. For instance, these assessments provide advocates for these children with a key data point to demonstrate how funding inequities affect their communities and schools.
Others warn that the tests are imperfect measures of achievement, growth and — in a deeper sense — what really matters in any kid’s education. This is also true, but, as above, only to a degree. Standardized tests can’t measure the totality of a child’s academic progress, the depth of their character or the beauty of their poetry. Still, these assessments provide a baseline of transparency about systemic inequities and a critical signal to schools: that the public wants to know whether all kids are meeting a series of core academic benchmarks, and that persistent, glaring, too-big-to-attribute-to-test-design racial and socioeconomic gaps in academic performance are unacceptable.
Finally, because these tests launched under the gaudy rhetoric accompanying No Child Left Behind, critics also insist — fairly enough — that reliance on these tests hasn’t closed those gaps. And yet, this begs all the questions. Whatever the limits of test-based transparency and accountability for U.S. schools, the approach has one major advantage: it provides a record of American educational inequities.
The pre-testing era in U.S. schools wasn’t a utopia of equitable allocation of resources and highly effective pedagogy. To the contrary: we have decades of experience showing that, absent data and accompanying pressure, the U.S. public education system defaults to reinforcing racial and socioeconomic inequities. Like the maskless family in my neighborhood, schools are more likely to do what’s easiest for them if no one’s watching.
Tests that document the persistence of these gaps don’t have to be perfect to be critically useful. As schools face an array of challenges, distractions, and pressures this fall, data from these assessments can provide critical leverage for focusing their attention — and policymakers’ — on prioritizing the needs of persistently marginalized students most harmed by the pandemic. Or, you know, we could just skip the tests again under some pretense and assume that we know the resources schools need and the priorities they should set in the coming years. It might not be equitable, efficient, or safe, but it would sure be (politically) convenient!