
Catastrophic failure of teacher evaluations in TN

If the ongoing teacher evaluation failure in Tennessee is any guide, Ohio has some rough waters ahead. Tennessee's recently passed system is very similar to Ohio's.

It requires that 50 percent of the evaluation be based on student achievement data: 35 percent on student growth as represented by the Tennessee Value-Added Assessment System (TVAAS) or a comparable measure, and the other 15 percent on additional measures of student achievement adopted by the State Board of Education and chosen through mutual agreement by the educator and evaluator. The remaining 50 percent of the evaluation is determined through qualitative measures such as teacher observations, personal conferences, and review of prior evaluations and work.
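To make the weighting concrete, here is a minimal sketch of how a composite score under this scheme might be computed. The function name and the common 1-to-5 scale are illustrative assumptions; the statute specifies the weights, not the scales.

```python
def composite_score(tvaas_growth, achievement_measure, qualitative):
    """Combine the three components of a Tennessee-style evaluation.

    All inputs are assumed to be on a common 1-5 scale (an illustrative
    assumption; the law fixes the weights, not the scoring scale).
    """
    return (0.35 * tvaas_growth           # student growth (TVAAS or comparable)
            + 0.15 * achievement_measure  # additional achievement measure
            + 0.50 * qualitative)         # observations, conferences, prior work

# Example: strong observations (5) but weak value-added scores (2, 2)
# still pull the composite down to the middle of the scale.
print(composite_score(2, 2, 5))  # 0.35*2 + 0.15*2 + 0.50*5 = 3.5
```

Note how, under these weights, the two halves of the system can tell opposite stories about the same teacher, which is exactly the disparity the state report found.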

Tennessee’s new way of evaluating classrooms “systematically failed” to identify bad teachers and provide them more training, according to a state report published Monday.

The Tennessee Department of Education found that instructors who got failing grades when measured by their students’ test scores tended to get much higher marks from principals who watched them in classrooms. State officials expected to see similar scores from both methods.

“Evaluators are telling teachers they exceed expectations in their observation feedback when in fact student outcomes paint a very different picture,” the report states. “This behavior skirts managerial responsibility.”

The education administration in Tennessee is pointing the finger at the in-classroom evaluations, but as one commenter on the article notes,

Perhaps what we are seeing with these disparities is not a need to retrain the evaluators, but rather confirmation of what many know but the Commissioner and other proponents of this hastily conceived evaluation system refuse to see -- the evaluation criteria mistakenly relies too much on TVAAS scores when they do not in fact accurately measure teacher effectiveness.

It has been stated over and over that using value-added measures at the individual teacher level is not appropriate: the estimates are subject to too much variation, instability, and error. Yet when these oft-warned-about problems manifest, they are ignored, excused, and other factors scapegoated instead.
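The instability claim is easy to illustrate. The sketch below is a toy simulation, not TVAAS's actual model; the noise level, cutpoints, and class size are all assumptions chosen for illustration. Even an exactly average teacher, measured with classroom-sized samples, can swing across rating categories from year to year on sampling noise alone.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # an exactly average teacher (assumed)
NOISE_SD = 0.2      # sampling error for roughly 25 students/year (assumed)

def yearly_estimate():
    # One year's value-added estimate: true effect plus sampling noise.
    return random.gauss(TRUE_EFFECT, NOISE_SD)

def rating(estimate):
    # Map estimates to a 1-5 scale using arbitrary illustrative cutpoints.
    cuts = [-0.3, -0.1, 0.1, 0.3]
    return 1 + sum(estimate > c for c in cuts)

# Five years of ratings for the same average teacher: the grade
# bounces around even though nothing about the teacher has changed.
print([rating(yearly_estimate()) for _ in range(5)])
```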

To make matters worse, the report (read it below) suggests that "school-wide value-added scores should be based on a one-year score rather than a three-year score. While it makes sense, where possible, to use three-year averages for individuals because of smaller sample sizes, school-wide scores can and should be based on one-year data."
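The sample-size logic in that passage is just the usual square-root-of-n shrinkage of sampling error: pooling three years of a teacher's students triples the sample, while a whole school already has a large sample in a single year. A quick sketch, with student counts and units assumed purely for illustration:

```python
import math

STUDENT_SD = 1.0  # spread of student-level growth scores (assumed units)

def standard_error(n_students, n_years=1):
    # Sampling error of a mean shrinks with the square root of the
    # total number of student observations pooled across years.
    return STUDENT_SD / math.sqrt(n_students * n_years)

# One teacher, ~25 students per year: averaging three years
# cuts the sampling error by about 42 percent.
print(standard_error(25, 1))   # ~0.200
print(standard_error(25, 3))   # ~0.115

# A whole school, ~500 students: one year of data is already
# tighter than three years of a single teacher's data.
print(standard_error(500, 1))  # ~0.045
```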

So how did the value-added scores stack up against the observations? With 1 being the lowest grade and 5 the highest, the value-added data placed far more teachers at the bottom of the scale than the observations did.

Are we really supposed to believe that a highly educated and trained workforce such as teachers is failing at a 24.6% rate (grades of 1 or 2)? Not even the most ardent corporate education reformer has claimed that kind of number! It becomes even more absurd when one looks at student achievement. It's hard to argue that a quarter of the workforce is substandard when student achievement is at record highs.

Instead, it seems more reasonable that a more modest 2%-3% of teachers are ineffective, and that the observations by professional, experienced evaluators are accurately capturing that situation.

Sadly, nowhere in the Tennessee report is there a call for further analysis of its value-added calculations.

Teacher Evaluation in Tennessee: A Report on Year 1 Implementation