Less than a year ago, we were pretty disappointed by the findings from a study of Florida teacher preparation programs. It found few discernible differences in graduates' performance, regardless of differences among the programs and the candidates in them. We comforted ourselves that this study was just an opening salvo. Sure enough, another study, this one examining Washington state's 20 elementary teacher preparation programs, has been released by Dan Goldhaber and Stephanie Liddle. We're all cheered up.
This study finds that yes, there are meaningful differences in the value added of various teacher prep programs' graduates, differences that are, in fact, at least as important as the number of years of experience a teacher has. It also finds that the effectiveness of programs' graduates changes over time and that graduates of different programs may be more or less prepared to teach specific types of students (such as those with limited proficiency in English). Both findings suggest that if we could only identify what the stronger programs are doing right, other programs could adopt those practices.
What's missing from the handful of studies to date (a problem admirably highlighted by this one) is the capacity to sort out how much of the difference among programs can be attributed to their admissions selectivity and how much to the actual preparation they provide. Goldhaber and Liddle take a stab at this, but they are handicapped because there were simply too few candidates in the relevant subsample.
So where does this leave us? A count of recent studies on the value added of teacher prep would show two tally marks on the side of "no differences in graduate effectiveness" (from two Florida studies) and five tally marks on the side of "differences in graduate effectiveness" (from studies in Louisiana, New York, North Carolina, Tennessee, and now Washington). Thankfully, a flood of new and even better studies should soon emerge nationwide as the necessary state data systems come online.