Every great teacher knows that her grading practices need to be fair. And any good policymaker should know that teachers expect the same of their own performance evaluations.
That's one of the reasons why, in 2014, the U.S. Department of Education offered states the option to delay using student growth data to evaluate teachers during the transition to new state standards and assessments. "These changes are incredibly important," Arne Duncan explained, "and educators should not have to make them in an atmosphere of worry."
Still, many wondered: Would the new test scores actually have led to inaccurate value-added measures for teachers?
To help settle this essentially theoretical question, Ben Backes and his colleagues examined value-added scores from before, during, and after states' transitions to new standards and assessments. The five-state study, conducted with support from CALDER, covered changes that occurred during the Common Core era and as far back as 2001.
The conclusion: assessment data from transition years would have produced essentially the same evaluation results for teachers as the older tests did.
The researchers analyzed the data from multiple angles, including the correlation of value-added scores from year to year, the likelihood that a teacher ranked near the top or bottom in one year would land in the same range the next year, and differences in teacher performance rankings by classroom type (advantaged or disadvantaged). All in all, transition-year math assessments performed about the same as those from other years in all five states; in reading, two states showed slightly more variability during transition periods.
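To make those checks concrete, here is a minimal sketch of the first two: year-to-year correlation of value-added scores and persistence in the top performance range. This is not the researchers' actual code, and the teacher IDs, years, and scores below are made up for illustration.

```python
# Rough sketch of two stability checks on hypothetical value-added data.
import pandas as pd

# Hypothetical data: one value-added score per teacher per year.
va = pd.DataFrame({
    "teacher_id": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "year":       [2013] * 5 + [2014] * 5,
    "va_score":   [0.12, -0.30, 0.05, 0.45, -0.10,
                   0.08, -0.25, 0.15, 0.40, -0.02],
})

# One row per teacher, one column per year.
wide = va.pivot(index="teacher_id", columns="year", values="va_score")

# Check 1: correlation of value-added scores across consecutive years.
corr = wide[2013].corr(wide[2014])
print(f"Year-to-year correlation: {corr:.2f}")

# Check 2: how often a teacher in the top fifth one year stays there the next.
def in_top_fifth(scores):
    return scores >= scores.quantile(0.8)

stayed = (in_top_fifth(wide[2013]) & in_top_fifth(wide[2014])).sum()
started = in_top_fifth(wide[2013]).sum()
print(f"Top-fifth teachers who stayed in the top fifth: {stayed} of {started}")
```

If transition-year tests were genuinely noisier, checks like these would show weaker correlations and less rank persistence in those years than in ordinary ones; the study found little such difference.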
It's good news, but it doesn't mean the one-year moratorium was a mistake. For teachers who were already skeptical of value-added in 2014, the message that transition-year data would likely be much the same as data from any other year probably wouldn't have soothed any anxieties. A smart political decision, even if there was no crisis to avert.