Students are not widgets

We wrote the other day about the biannual teacher observation provision in S.B. 5 that, if implemented, would place a serious administrative strain on schools. Today, prompted by a Dispatch article, we want to expand our look at the other teacher evaluation policies being pushed by the governor and his education czar.

Gov. John Kasich wants teachers to be paid based on performance: They should earn more if they can prove that their students are learning.

But the tool at the heart of Kasich's merit-pay proposals is reliable with only 68 percent confidence. That's why the state plans an upgrade to make "value-added" results 95 percent reliable.

With 146,000 teachers in Ohio, even at 95 percent accuracy, if that figure can be believed, the remaining 5 percent means roughly 7,300 teacher evaluations would be based on inaccurate data. That would be bad enough if it were the only problem.
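As a back-of-the-envelope check (the assumption that the 95 percent figure applies uniformly to every teacher's rating is ours, purely for illustration):

```python
# If 95% of ratings are accurate, the other 5% of Ohio's 146,000 teachers
# would still be rated on bad data.
teachers = 146_000
error_rate = 1 - 0.95
print(round(teachers * error_rate))  # 7300
```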

But let's take a step back for a second. What is value-added assessment?

Value-added assessment assumes that changes in a student's test scores from one year to the next accurately reflect that student's progress in learning. By tracking those gains and linking them to particular schools and teachers, the resulting estimates can be used as indicators of teachers' and schools' effectiveness. Sounds good, right?
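Before going further, it may help to see a deliberately oversimplified sketch of the idea. The student scores and the "expected gain" below are invented for illustration only; Ohio's actual value-added model is a far more elaborate statistical system, not this toy calculation.

```python
# Toy illustration of a "value-added" estimate (hypothetical data).
# Real value-added models use multi-year statistical models, not a simple average.

# Each student: (last year's score, this year's score)
students = [(420, 445), (390, 400), (450, 480), (410, 415)]

# Naive prediction: absent any teacher effect, assume each student would
# gain the statewide average of, say, 20 points (an invented number).
expected_gain = 20

actual_gains = [this_year - last_year for last_year, this_year in students]
value_added = sum(g - expected_gain for g in actual_gains) / len(students)

print(f"Average gain: {sum(actual_gains) / len(actual_gains):.1f} points")
print(f"'Value added' over the expected gain: {value_added:+.1f} points")
```

Even in this toy version, the entire calculation hinges on knowing which teacher a student's gain belongs to, and that is exactly where things start to break down.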

In theory. In practice, many teachers do not teach subjects that are tested, and in many schools, as this terrific article points out, who is responsible for a given student's progress isn't so cut and dried either:

In the school where I work teachers are expected to teach reading “across the curriculum” meaning that all teachers are supposed to teach reading. Also, all teachers are supposed to teach writing “across the curriculum.” So, students would have to be tested in those areas as well. But if it is taught across the curriculum, how would we know to which teacher to attribute the child’s performance?

Indeed, how would we know?

Beyond these obvious attribution problems, value-added assessments also suffer from serious methodological problems, as this paper from the Economic Policy Institute brings to light:

there is broad agreement among statisticians, psychometricians, and economists that student test scores alone are not sufficiently reliable and valid indicators of teacher effectiveness to be used in high-stakes personnel decisions, even when the most sophisticated statistical applications such as value-added modeling are employed.

For a variety of reasons, analyses of VAM results have led researchers to doubt whether the methodology can accurately identify more and less effective teachers.

Oh.

Back to that Dispatch article:

Robert Sommers, Kasich's top education adviser, said he thinks Ohio's accountability system is ready for merit pay. Value-added has been used in Ohio only to rate schools, not teachers.

"As far as I'm concerned, it is a very, very solid system," he said. "It has had lots of years of maturation."

The governor's education czar is simply not correct. As it pertains to teacher evaluation, the system is not accurate enough, has demonstrable statistical problems, and requires deeper study.

Students are not widgets being processed on a production line by a single teacher. Modern education is a team effort, and attempts to isolate individual contributions to that team effort are going to require approaches far more robust than the ones currently on offer.