
Gross miscalculation

On his Sociological Eye on Education blog for the Hechinger Report, Aaron Pallas writes that in April 2011, Carolyn Abbott, who teaches mathematics to seventh- and eighth-graders at the Anderson School, a citywide gifted-and-talented school in Manhattan, received startling news. Her score on the NYC Department of Education's value-added measurement indicated that only 32 percent of seventh-grade math teachers, and 0 percent of eighth-grade math teachers, scored worse than she did. By that calculation, she was the worst eighth-grade math teacher in the city, where she has taught since 2007.

Here's the math: After a year in her classroom, her seventh-grade students scored at the 98th percentile of city students on the 2009 state test. As eighth-graders, they were predicted to score at the 97th percentile. Yet their actual performance was at the 89th percentile of students across the city, a shortfall of eight percentile points that placed Abbott at the very bottom of roughly 1,300 eighth-grade mathematics teachers. Anderson is an unusual school: the material on the state eighth-grade math exam is taught there in the fifth or sixth grade. "I don't teach the curriculum they're being tested on," Abbott explained. "It feels like I'm being graded on somebody else's work." The math she teaches is more advanced, culminating in high-school-level work and New York State's Regents exam in Integrated Algebra. Of her students who took the Regents exam in January, all passed with flying colors, and more than a third achieved a perfect score of 100.
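To make the arithmetic concrete, here is a minimal sketch of the shortfall calculation described above. The actual DOE value-added model is a far more elaborate statistical regression; the simple subtraction and the function name here are illustrative assumptions only, not the model itself.

    # A toy illustration of the value-added shortfall described above.
    # NOTE: NYC's real model is a complex statistical regression; this
    # bare subtraction is an assumption made purely for illustration.

    def shortfall(predicted_percentile, actual_percentile):
        """Gap, in percentile points, between where students were
        predicted to score and where they actually scored."""
        return actual_percentile - predicted_percentile

    # Abbott's case, using the article's figures:
    # predicted 97th percentile, actual 89th percentile.
    print(shortfall(97, 89))  # -8: an eight-point shortfall

A gap this size, fed into the ranking, is what pushed Abbott to the bottom of the list even though her students remained among the city's highest scorers in absolute terms.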

This summer, the state will release a new iteration of the Teacher Data Reports. For Abbott, these will be a mere curiosity. She has decided to leave the classroom, and is entering the Ph.D. program in mathematics at the University of Wisconsin-Madison this fall.

[readon2 url="http://eyeoned.org/content/the-worst-eighth-grade-math-teacher-in-new-york-city_326/"]Read more...[/readon2]

Caution urged on high-stakes use of value-added measures

The American Mathematical Society just published a paper titled "Mathematical Intimidation: Driven by the Data" that discusses the problems with using value-added measures in high-stakes decision-making, such as teacher evaluation. It's quite a short read, and well worth the effort.

Many studies by reputable scholarly groups call for caution in using VAMs for high-stakes decisions about teachers.

A RAND research report: The estimates from VAM modeling of achievement will often be too imprecise to support some of the desired inferences [McCaffrey 2004, 96].

A policy paper from the Educational Testing Service’s Policy Information Center: VAM results should not serve as the sole or principal basis for making consequential decisions about teachers. There are many pitfalls to making causal attributions of teacher effectiveness on the basis of the kinds of data available from typical school districts. We still lack sufficient understanding of how seriously the different technical problems threaten the validity of such interpretations [Braun 2005, 17].

A report from a workshop of the National Academy of Education: Value-added methods involve complex statistical models applied to test data of varying quality. Accordingly, there are many technical challenges to ascertaining the degree to which the output of these models provides the desired estimates [Braun 2010].
[...]
Making policy decisions on the basis of value-added models has the potential to do even more harm than browbeating teachers. If we decide whether alternative certification is better than regular certification, whether nationally board certified teachers are better than randomly selected ones, whether small schools are better than large, or whether a new curriculum is better than an old by using a flawed measure of success, we almost surely will end up making bad decisions that affect education for decades to come.

This is insidious because, while people debate the use of value-added scores to judge teachers, almost no one questions the use of test scores and value-added models to judge policy. Even people who point out the limitations of VAM appear to be willing to use “student achievement” in the form of value-added scores to make such judgments. People recognize that tests are an imperfect measure of educational success, but when sophisticated mathematics is applied, they believe the imperfections go away by some mathematical magic. But this is not magic. What really happens is that the mathematics is used to disguise the problems and intimidate people into ignoring them—a modern, mathematical version of the Emperor’s New Clothes.

The entire short paper can be read below.

Mathematical Intimidation: Driven by the Data