Worth reading in its entirety.
And then there are all the peripheral contributions to understanding that this line of work has made, including (but not limited to):
- That experience does matter;
- That the quality of peers affects teacher performance;
- That teachers perform differently in different schools;
- And that students’ backgrounds explain more of the variation in their performance than school-related factors do.
Prior to the proliferation of growth models, most of these conclusions were already known to teachers and to education researchers, but research in this field has helped to validate and elaborate on them. That’s what good social science is supposed to do.
Conversely, however, what this body of research does not show is that it’s a good idea to use value-added and other growth model estimates as heavily weighted components in teacher evaluations or other personnel-related systems. There is, to my knowledge, not a shred of evidence that doing so will improve either teaching or learning, and anyone who says otherwise is misinformed.*
As has been discussed before, there is a big difference between demonstrating that teachers matter overall (that their test-based effects vary widely, and in a manner that is not just random) and being able to accurately identify the “good” and “bad” performers at the level of individual teachers. Frankly, to whatever degree the value-added literature provides tentative guidance on how these estimates might be used productively in actual policies, it suggests that, in most states and districts, this is being done in a disturbingly ill-advised manner.
[readon2 url="http://shankerblog.org/?p=4358&mid=5417"]Read entire article[/readon2]