How can we evaluate the effectiveness of pay-for-performance compensation systems if those systems are implemented only half-heartedly? That continues to be the prevailing question, as we review yet another expensive study reporting on the outcomes from another highly flawed performance pay experiment.
A recent Mathematica Policy Research study examines the experience of 66 schools housed in 10 districts that received Teacher Incentive Fund (TIF) grant money beginning in 2010. Through the grant, teachers were eligible for bonuses intended to be substantial, differentiated based on role or setting, and challenging to earn. Teachers working in the control group received a 1 percent raise each year.
So what happened? The TIF schools reported a small bump in student achievement, equaling roughly four additional weeks of learning over the three-year study period. Based on teacher surveys, researchers found little evidence of the negative outcomes that many worry could arise from a competitive pay system, such as increased dissatisfaction with the school environment or evaluation processes.
Still, this experiment continues what's now become a tradition of really poor implementation. A large majority of teachers (70 percent) qualified for bonuses, the schools shorted teachers in terms of the size of those bonuses (probably because they were handing out too many), and the districts oversaw an apparently inadequate communications plan that left nearly half of the teachers working in the schools unaware that there were any bonuses to be earned.
The small bump in student achievement … how do we explain that? It's possible that we would have seen greater improvement with better program implementation. It's also possible that performance pay initiatives of this type produce offsetting effects that undercut any meaningful gains in student achievement. The flaws in this study run too deep for us to know the answer.
In our ongoing tally of performance pay experiments, we're putting this one in the dud column.