Over the last few decades, the field of teacher education has heavily promoted the use of standardized assessments, and even their mandatory adoption by states, to judge how well a teacher candidate can deliver a lesson. The push has come with a promise that these "teacher performance assessments," or TPAs, fulfill three important and otherwise largely elusive functions:
1. They structure training on how to plan and deliver instruction far better than the home-grown efforts most programs rely on.
2. They gauge the quality of candidates and can therefore serve as a gatekeeper for entry into the profession.
3. They provide information that states can use in the aggregate to hold programs accountable for their training.
We concluded four years ago that the edTPA adequately fulfills the first function. TPAs are a vast improvement over the rubrics and observation forms most programs use to assess a "live" lesson.
A new study by three researchers at CALDER (Dan Goldhaber, James Cowan, and Roddy Theobald) provides evidence on the second function—whether TPAs actually identify the more capable candidates who deserve to be entrusted with a classroom of children. That evidence is long overdue, given the pressure that AACTE and others have put on states to adopt an instrument without any proof that it predicts teacher performance.
As was widely reported in the press last week, results from graduates of teacher prep programs in Washington state (one of the first states to adopt the edTPA) are mixed. Goldhaber et al. found that a passing score on the reading portion of the edTPA significantly predicts teacher effectiveness in reading, but scores on the math portion do not predict effectiveness in math.
Given that the edTPA is a lot of work for programs and costly to boot, is this enough bang for the buck? After all, the instructions for candidates run to 40 pages, and candidates are told they can be evaluated on nearly 700 different items. The process consumes the attention of teacher candidates and their teacher educators for a good share of the semester-long student teaching placement.
There's a strong argument that the complexity is merited if it keeps unqualified people out of the classroom—unless the same results could be had with far less time and investment. A recent study of the measures the District of Columbia Public Schools uses to screen teacher applicants indicates that one of the components with predictive validity is simply a 10-minute audition. A few more studies with similar results could make it difficult to justify blanketing the nation with edTPA requirements.
That leaves the third function: program accountability. According to Goldhaber, candidates' scores vary more within Washington's programs than across them. This result, which Goldhaber did not publish, means that all but the most egregiously low-performing programs are likely to have candidates whose scores span much of the range, so program-level averages reveal little about the quality of a program's training.
The bottom line to date in this still-unfolding story about TPAs: the edTPA 1) is a good organizing vehicle for training, 2) may produce scores that at least partially discriminate among candidates in terms of effectiveness—but through a process that appears to be unnecessarily cumbersome, and 3) may produce scores that cannot be used to hold programs accountable because they are insufficiently related to the quality of candidate training.