We ended last week with a post titled "The 'fun' begins soon", which looked at the imminent changes to education policy in Ohio. We planned to detail each of these issues over the next few weeks.
Little did we know that the 'fun' would begin that very weekend, in the form of the Cleveland Plain Dealer and NPR publishing a story on the changing landscape of teacher evaluations titled "Grading the Teachers: How Ohio is Measuring Teacher Quality by the Numbers".
It's a solid, long piece, well worth the time it takes to read. It covers some, though not all, of the problems with using value-added measurements to evaluate teachers:
Among some teachers, there’s confusion about how these measures are calculated and what they mean.
“We just know they have to do better than they did last year,” Beachwood fourth-grade teacher Alesha Trudell said.
Some of the confusion may be due to a lack of transparency around the value-added model.
The details of how the scores are calculated aren’t public. The Ohio Education Department will pay a North Carolina-based company, SAS Institute Inc., $2.3 million this year to do value-added calculations for teachers and schools. The company has released some information on its value-added model but declined to release key details about how Ohio teachers’ value-added scores are calculated.
The Education Department doesn’t have a copy of the full model and data rules either.
The department’s top research official, Matt Cohen, acknowledged that he can’t explain the details of exactly how Ohio’s value-added model works. He said that’s not a problem.
Evaluating a teacher on a secret formula isn't a practice that can be sustained, supported, or defended. The article further details a common theme we hear over and over again:
“It’s hard for me to think that my evaluation and possibly some day my pay could be in a 13-year-old’s hands who might be falling asleep during the test or might have other things on their mind,” said Zielke, the Columbus middle school teacher.
The article also analyzes several thousand value-added scores, and that analysis demonstrates what we have long reported: value-added is a poor indicator of teacher quality, with too many external factors affecting the score.
[…]
Teachers say they’ve seen their value-added scores drop when they’ve had larger classes. Or classes with more students who have special needs. Or more students who are struggling to read.
Teachers who switch from one grade to another are more likely to see their value-added ratings change than teachers who teach the same grade year after year, the StateImpact/Plain Dealer analysis shows. But their ratings went down at about the same rate as teachers who taught the same grade level from one year to the next and saw their ratings change.
What are we measuring here? Surely not teacher quality, but rather the socioeconomic circumstances of students and the budget conditions of their schools.
Teachers are intelligent people, and they are going to adapt to this knowledge in lots of unfortunate ways. It will become progressively harder for districts with poor students to recruit and retain the best teachers. But perhaps the most pernicious effect is captured at the end of the article:
But Plecnik is through. She’s quitting her job at the end of this school year to go back to school and train to be a counselor — in the community, not in schools.
Plecnik was already frustrated by the focus on testing, mandatory meetings and piles of paperwork. She developed medical problems from the stress of her job, she said. But receiving the news that despite her hard work and the praise of her students and peers the state thought she was Least Effective pushed her out the door.
“That’s when I said I can’t do it anymore,” she said. “For my own sanity, I had to leave.”
The Cleveland Plain Dealer and NPR then decided to add to this stress by publishing individual teachers' value-added scores - a matter we will address in our next post.