When we launched our own standards-based review of teacher prep last year, the primary complaint we heard from teacher educators was that we focused too much on inputs and not enough on outcomes. Indeed, leaders in the field such as Charlie Reed, the Cal State University Chancellor and a former teacher educator himself, suggested we follow in the footsteps of his system, with its capacity to use "value-added assessment for program evaluation."
The Department actually took the Chancellor up on his guidance, making it clear from the start that it wanted to use value-added assessment. Lo and behold, the same critics of our "inputs-based" approach decided that outcome measures were also unworkable. Beverly Young, Cal State's assistant vice chancellor for academic affairs and a representative on the committee working on the new regulations, cast doubt on whether the "research base" was adequate to support using value-added data for accountability.
Similarly, Michael Feuer of George Washington University wrote a blog post last year decrying the "fundamentally flawed design" of our review. Now chairing a National Academy of Education panel on the evaluation of teacher preparation programs, Feuer last week signed on to a letter to the Department stating flatly that the use of value-added data for accountability purposes would not "meet appropriate standards of validity."
For now, anyway, it does not appear that the field is ready to accept any system that would distinguish strong programs from weak ones. In our view, that's an indefensible position... and we're guessing that comes as no surprise either.
—Graham Drake and Arthur McKee