Identifying Effective Teachers Policy
Texas does not require that objective evidence of student learning be the preponderant criterion of its teacher evaluations.
The state allows local districts to use either a teacher evaluation instrument designed by the state (the Professional Development and Appraisal System) or an instrument designed by the district that the state approves. In either case, the teacher evaluation instrument must address a total of eight domains that range from professional communication and classroom management to improved academic student performance. The evaluation criteria must be based on observable, job-related behavior, including "the performance of teachers' students." In addition to classroom observations, evaluators must document teachers' contributions to improving student achievement. Each of the eight domains is scored independently, and a teacher rated unsatisfactory in one or more domains is placed on an intervention plan.
A four-tiered rating system is used: exceeds expectations, proficient, below expectations and unsatisfactory.
Texas has also received a conditional waiver from portions of the federal Elementary and Secondary Education Act (ESEA); the terms of the waiver require the state to include growth in student achievement as a significant factor in its evaluation framework. The state will need to address these stipulations in board rule or statute to maintain compliance with the waiver.
Texas is in the process of piloting T-TESS (Texas Teacher Evaluation and Support System), scheduled for implementation in 2016-2017. According to the T-TESS summative matrix, student growth counts for 20 percent and teacher observations and self-assessment results make up 80 percent of the final score.
Require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Although Texas requires some evidence of student achievement, it is not clear whether the state requires objective evidence of student achievement for all teacher evaluations. Texas should either require a common evaluation instrument in which evidence of student learning is the most significant criterion, or it should specifically require that student learning be the preponderant criterion in local evaluation processes. This can be accomplished by requiring objective evidence to count for at least half of the evaluation score or through other scoring mechanisms, such as a matrix, that ensure that nothing affects the overall score more. Whether state or locally developed, a teacher should not be able to receive an effective rating if found ineffective in the classroom.
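As a rough illustration of these two mechanisms, the sketch below uses hypothetical 1-4 ratings and a simple 50 percent weight; it is not the actual scoring logic of PDAS, T-TESS or any district instrument.

def weighted_score(student_learning, observation, learning_weight=0.5):
    # Approach 1: objective evidence of student learning counts for at least
    # half of the overall evaluation score (all inputs on a 1-4 scale).
    return learning_weight * student_learning + (1 - learning_weight) * observation

def matrix_rating(student_learning, observation):
    # Approach 2: a summative matrix in which nothing affects the overall
    # rating more than classroom effectiveness; a teacher found ineffective
    # with students cannot receive an effective overall rating.
    combined = round((student_learning + observation) / 2)
    return min(combined, student_learning)

# A teacher with strong observations but weak evidence of student learning
# cannot reach an effective rating under either mechanism.
print(weighted_score(student_learning=1, observation=4))  # 2.5 out of 4
print(matrix_rating(student_learning=1, observation=4))   # capped at 1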
Ensure that classroom observations specifically focus on and document the effectiveness of instruction.
Although Texas requires classroom observations as part of teacher evaluations, the state should articulate guidelines that focus classroom observations on the quality of instruction, as measured by student time on task, student grasp or mastery of the lesson objective and efficient use of class time.
Texas reiterated that according to the T-TESS summative matrix, student growth counts for 20 percent, and teacher observations and professional practices, including the attainment of goals and professional development results, make up 80 percent of the final score. The T-TESS rubric, although not official until the 2016-2017 school year, speaks to Goal 3-B as it relates to objective, instructional and student performance-focused practices.
Value-added analysis connects student data to teacher data to measure achievement and performance.
Value-added models are an important tool for measuring student achievement and school effectiveness. These models measure individual students' learning gains, controlling for students' previous knowledge. They can also control for students' background characteristics. In the area of teacher quality, value-added models offer a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
For example, at one time a school might have known only that its fifth-grade teacher, Mrs. Jones, consistently had students who did not score at grade level on standardized assessments of reading. With value-added analysis, the school can learn that Mrs. Jones' students were reading on a third-grade level when they entered her class, and that they were above a fourth-grade performance level at the end of the school year. While not yet reaching appropriate grade level, Mrs. Jones' students had made more than a year's progress in her class. Because of value-added data, the school can see that she is an effective teacher.
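The arithmetic behind this example can be sketched as follows; this is a deliberate simplification with made-up grade-equivalent scores, since real value-added models rely on regression controls for prior achievement and student background rather than a fixed one-year growth benchmark.

def value_added(students, expected_growth=1.0):
    # Average gain of a teacher's students, in grade-level equivalents,
    # relative to an expected year of growth.
    gains = [end - start for start, end in students]
    return sum(gains) / len(gains) - expected_growth

# Hypothetical scores: Mrs. Jones' students enter reading at roughly a
# third-grade level and finish above a fourth-grade level.
mrs_jones = [(3.0, 4.2), (2.8, 4.1), (3.1, 4.3)]
print(value_added(mrs_jones))  # positive, even though students remain below grade level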
Teachers should be judged primarily by their impact on students. While many factors should be considered in formally evaluating a teacher, nothing is more important than effectiveness in the classroom.
Unfortunately, districts have used many evaluation instruments, including some mandated by states, that are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom. It is often enough that teachers appear to be trying, not that they are necessarily succeeding.
Many evaluation instruments give as much weight, or more, to factors that lack any direct correlation with student performance—for example, taking professional development courses, assuming extra duties such as sponsoring a club or mentoring, and getting along well with colleagues. Some instruments stop short of holding teachers accountable for student progress. Teacher evaluation instruments should include factors that combine both human judgment and objective measures of student learning.
Evaluation of Effectiveness: Supporting Research
Reports strongly suggest that most current teacher evaluations are largely a meaningless process, failing to identify the strongest and weakest teachers. The New Teacher Project's report, "Hiring, Assignment, and Transfer in Chicago Public Schools," July 2007, at http://www.tntp.org/files/TNTPAnalysis-Chicago.pdf, found that the CPS teacher performance evaluation system at that time did not distinguish strong performers and was ineffective at identifying poor performers and dismissing them from Chicago schools. See also Brian Jacob and Lars Lefgren, "When Principals Rate Teachers," Education Next, Volume 6, No. 2, Spring 2006, pp. 59-69. Similar findings were reported for a larger sample in The New Teacher Project's The Widget Effect (2009), at http://widgeteffect.org/. See also MET Project, "Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project," Bill & Melinda Gates Foundation, Seattle, WA, 2010.
A Pacific Research Institute study found that in California, between 1990 and 1999, only 227 teacher dismissal cases reached the final phase of termination hearings. The authors write: "If all these cases occurred in one year, it would represent one-tenth of 1 percent of tenured teachers in the state. Yet, this number was spread out over an entire decade." In Los Angeles alone, over the same time period, only one teacher went through the dismissal process from start to finish. See Pamela A. Riley, et al., "Contract for Failure," Pacific Research Institute, 2002.
That the vast majority of districts report no teachers deserving of an unsatisfactory rating is difficult to square with what we know of most professions, which routinely include individuals who are not well suited to the job. Nor do these teacher ratings seem to correlate with school performance, suggesting that teacher evaluations are not a meaningful measure of teacher effectiveness. For more information on the reliability of many evaluation systems, particularly the binary systems used by the vast majority of school districts, see S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger, and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added," Brookings Brown Center Task Group on Teacher Quality, 2010.
There is growing evidence suggesting that standards-based teacher evaluations that include multiple measures of teacher effectiveness—both objective and subjective measures—correlate with teacher improvement and student achievement. For example, see T. Kane, E. Taylor, J. Tyler, and A. Wooten, "Evaluating Teacher Effectiveness," Education Next, Volume 11, No. 3, Summer 2011, pp. 55-60; E. Taylor and J. Tyler, "The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-Career Teachers," NBER Working Paper No. 16877, March 2011; as well as H. Heneman III, A. Milanowski, S. Kimball, and A. Odden, "CPRE Policy Brief: Standards-based Teacher Evaluation as a Foundation for Knowledge- and Skill-based Pay," Consortium for Policy Research in Education, March 2006.