
How "top charters" screen students

It's no secret that the vast majority of Ohio charter schools are rated F, but what of some of the "high performing" schools? It was with those in mind that we read with interest the article "The Dirty Dozen: How Charter Schools Influence Student Enrollment".

This commentary offers a classification of twelve different approaches that charter schools use to structure their student enrollment. These practices impact the likelihood of students enrolling with a given set of characteristics, be it higher (or lower) test scores, students with ‘expensive’ disabilities, English learners, students of color, or students in poverty.
[...]
Yet little attention has been paid to the mechanisms that generate these differences. One exception is an article in February of 2013, written by reporter Stephanie Simon of Reuters, which described a variety of ways that charter schools “get the students they want” (Simon, 2013):
  • Applications that are made available just a few hours a year.
  • Lengthy application forms, often printed only in English, that require student and parent essays, report cards, test scores, disciplinary records, teacher recommendations and medical records.
  • Demands that students present Social Security cards and birth certificates for their applications to be considered, even though such documents cannot be required under federal law.
  • Mandatory family interviews.
  • Assessment exams.
  • Academic prerequisites.
  • Requirements that applicants document any disabilities or special needs. The U.S. Department of Education considers this practice illegal on the college level but has not addressed the issue for K-12 schools.

We thought we would pick one charter school and test this hypothesis. We picked DAYTON EARLY COLLEGE ACADEMY, INC. (DECA), as it was elevated by the Fordham Foundation and recently testified on the budget as part of a "coalition of high performing charters".

Following introductions from Fordham’s Terry Ryan, Dayton Early College Academy’s Superintendent Judy Hennessey began to speak in front of the Subcommittee only to be interrupted by Committee Chair Senator Randy Gardner, “Senator [Peggy] Lehner has just commented you lead one of the best schools in the country.”

Jokingly, Judy Hennessey nodded and said, "Now we are striving for world class."

The application process

Here's DECA's application, which can also be downloaded here.

High School Application 2013-14

The first thing you will note is that the application form is 23 pages long, requiring hundreds of pieces of information to be entered, including report cards, test scores, disciplinary records, teacher recommendations and medical records. In fact, it incorporates virtually every mechanism the Reuters article identifies as commonly used to screen prospective students. This is a significant barrier that only the most determined parent is likely to scale.

The page where the applications can be downloaded clearly states, in bold, "Incomplete applications will not be considered."

A parent who completes such a detailed, lengthy application is likely to be engaged in their child's education to a greater degree than a parent who never applies at all.

Furthermore, as pointed out in the twelve approaches charters use to screen students, this application is in English only. No second-language form is available on the application webpage, making applications from families for whom English is a second language far less likely.

You will also see the following on page 5 of the application:

Documents needed for a complete application
 Student birth certificate
 Student social security card

"Demands that students present Social Security cards and birth certificates for their applications to be considered, even though such documents cannot be required under federal law" is one of the telltale screening mechanisms charters use.

The DECA application form also requests that applicants document any disabilities or special needs, another potential barrier spelled out in the article.

So we can plainly see that while DECA may produce above-average results for a charter school, it can do so because it has a highly selective application process that is likely to screen out lower-performing students.

The performance results

We were expecting a charter school whose leader professed to be aiming for "world class standards" to be rated Excellent with Distinction. DECA is not; indeed, it is not even rated Excellent. Instead it rates as "Effective," according to the latest data available from ODE.

Building IRN 009283
Building Name Dayton Early College Academy, Inc
District IRN 043844
District Name Dayton City
County Montgomery
Principal Judy Hennessey
Grade Span 7-12
Open/Closed Status (as of 9/18/2012) Open
Designation Effective
Number of standards met 14
Number of standards possible 17
Enrollment 411
Performance Index Score 2011-12 99.1
Performance Index Score 2010-11 100.5
Performance Index Score 2009-10 96.2
2012 Attendance AYP N/A
2012 Graduation AYP Not Met
2012 Reading AYP Met
2012 Math AYP Met
2012 Overall AYP Not Met
Four-Year "On-Time" Graduation Rate Numerator 2010-11 35

These aren't bad results; indeed, compared to the majority of F-rated charter schools they are positively giddy. But given the arduous application screening process and the "Effective" rating, it's a far cry from world-beating, and a very far cry from the world of traditional public schools, which have to accept every student from the district who walks through the door.

Gates Foundation Wastes More Money Pushing VAM

Makes it hard to trust the corporate ed reformers when they goose their stats as badly as this.

Any attempt to evaluate teachers that is spoken of repeatedly as being "scientific" is naturally going to provoke rebuttals that verge on technical geek-speak. The MET Project's "Ensuring Fair and Reliable Measures of Effective Teaching" brief does just that. MET was funded by the Bill & Melinda Gates Foundation.

At the center of the brief's claims are a couple of figures (“scatter diagrams” in statistical lingo) that show remarkable agreement in VAM scores for teachers in Language Arts and Math for two consecutive years. The dots form virtual straight lines. A teacher with a high VAM score one year can be relied on to have an equally high VAM score the next, so Figure 2 seems to say.

Not so. The scatter diagrams are not dots of teachers' VAM scores but of averages of groups of VAM scores. For some unexplained reason, the statisticians who analyzed the data for the MET Project report divided the 3,000 teachers into 20 groups of about 150 teachers each and plotted the average VAM scores for each group. Why?

And whatever the reason might be, why would one do such a thing when it has been known for more than 60 years now that correlating averages of groups grossly overstates the strength of the relationship between two variables? W.S. Robinson in 1950 named this the "ecological correlation fallacy." Please look it up in Wikipedia. The fallacy was used decades ago to argue that African-Americans were illiterate because the correlation of %-African-American and %-illiterate was extremely high when measured at the level of the 50 states. In truth, at the level of persons, the correlation is very much lower; we’re talking about differences as great as .90 for aggregates vs .20 for persons.

Just because the average of VAM scores for 150 teachers will agree with next year's VAM score average for the same 150 teachers gives us no confidence that an individual teacher's VAM score is reliable across years. In fact, such scores are not — a fact shown repeatedly in several studies.
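The aggregation effect described above is easy to reproduce with a small simulation. The numbers below are hypothetical, not the MET data: the 1-to-4 signal-to-noise split and the sort-then-group scheme are our assumptions, chosen to mimic the ~.20 vs ~.90 contrast the ecological fallacy produces.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_groups = 3000, 20

# Latent "true effectiveness" plus large year-specific noise, so the
# individual-level year-to-year correlation is modest (~0.2 by design).
true_effect = rng.normal(0.0, 1.0, n_teachers)
year1 = true_effect + rng.normal(0.0, 2.0, n_teachers)
year2 = true_effect + rng.normal(0.0, 2.0, n_teachers)

r_individual = np.corrcoef(year1, year2)[0, 1]

# Sort teachers by year-1 score, then average within 20 groups of 150:
# one plausible version of the aggregation behind the brief's figures.
order = np.argsort(year1)
g1 = year1[order].reshape(n_groups, -1).mean(axis=1)
g2 = year2[order].reshape(n_groups, -1).mean(axis=1)
r_grouped = np.corrcoef(g1, g2)[0, 1]

print(f"individual-level r: {r_individual:.2f}")  # modest, close to 0.2
print(f"group-average r:    {r_grouped:.2f}")     # grossly inflated by averaging
```

Averaging cancels out the year-specific noise within each group, so the group means trace a near-straight line even though individual scores bounce around from year to year.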

[readon2 url="http://ed2worlds.blogspot.com/2013/01/gates-foundation-wastes-more-money.html"]Continue reading...[/readon2]

Some Choice

ODE has just released its partial school report card. It doesn't contain any final grades, but it does tell us whether schools made adequate yearly progress, and the news isn't pretty for Ohio's charter school movement.

Of the 352 charter schools listed, 58.2% of them failed to meet their adequate yearly progress (AYP) metrics.

If a student attends a school in Allen, Warren, Erie, Hancock, Lake, Madison, or Tuscarawas counties, not a single charter school made adequate yearly progress. Indeed, of the 36 counties that have charter schools, 24 saw more than half of their charters fail to meet adequate yearly progress.

County Not Met AYP Met AYP
Allen 100.0% 0.0%
Warren 100.0% 0.0%
Erie 100.0% 0.0%
Hancock 100.0% 0.0%
Lake 100.0% 0.0%
Madison 100.0% 0.0%
Tuscarawas 100.0% 0.0%
Stark 83.3% 16.7%
Trumbull 75.0% 25.0%
Summit 73.3% 26.7%
Montgomery 72.4% 27.6%
Mahoning 71.4% 28.6%
Hamilton 67.9% 32.1%
Richland 66.7% 33.3%
Clark 66.7% 33.3%
Fairfield 66.7% 33.3%
Morrow 66.7% 33.3%
Franklin 65.3% 34.7%
Butler 60.0% 40.0%
Lucas 58.8% 41.2%
Lorain 54.5% 45.5%
Marion 50.0% 50.0%
Columbiana 50.0% 50.0%
Greene 50.0% 50.0%
Cuyahoga 42.0% 58.0%
Portage 40.0% 60.0%
Licking 25.0% 75.0%
Muskingum 25.0% 75.0%
Seneca 25.0% 75.0%
Champaign 0.0% 100.0%
Wayne 0.0% 100.0%
Scioto 0.0% 100.0%
Coshocton 0.0% 100.0%
Hardin 0.0% 100.0%
Jackson 0.0% 100.0%
Van Wert 0.0% 100.0%
Grand Total 58.2% 41.8%

There are a lot of students in a lot of schools, in a lot of counties, not being served by the "choices" they are being presented with.

New Gates Study on teacher evaluations

A new Gates study released today finds effective teacher evaluations require high standards, with multiple measures.

ABOUT THIS REPORT: This report is intended for policymakers and practitioners wanting to understand the implications of the Measures of Effective Teaching (MET) project’s interim analysis of classroom observations. Those wanting to explore all the technical aspects of the study and analysis also should read the companion research report, available at www.metproject.org.

Together, these two documents on classroom observations represent the second pair of publications from the MET project. In December 2010, the project released its initial analysis of measures of student perceptions and student achievement in Learning about Teaching: Initial Findings from the Measures of Effective Teaching Project. Two more reports are planned for mid-2012: one on the implications of assigning weights to different measures; another using random assignment to study the extent to which student assignment may affect teacher effectiveness results.

ABOUT THE MET PROJECT: The MET project is a research partnership of academics, teachers, and education organizations committed to investigating better ways to identify and develop effective teaching. Funding is provided by the Bill & Melinda Gates Foundation.

The report offers three takeaways.

High-quality classroom observations will require clear standards, certified raters, and multiple observations per teacher. Clear standards and high-quality training and certification of observers are fundamental to increasing inter-rater reliability. However, when measuring consistent aspects of a teacher’s practice, reliability will require more than inter-rater agreement on a single lesson. Because teaching practice varies from lesson to lesson, multiple observations will be necessary when high-stakes decisions are to be made. But how will school systems know when they have implemented a fair system? Ultimately, the most direct way is to periodically audit a representative sample of official observations, by having impartial observers perform additional observations. In our companion research report, we describe one approach to doing this.

Combining the three approaches (classroom observations, student feedback, and value-added student achievement gains) capitalizes on their strengths and offsets their weaknesses. For example, value-added is the best single predictor of a teacher’s student achievement gains in the future. But value-added is often not as reliable as some other measures and it does not point a teacher to specific areas needing improvement. Classroom observations provide a wealth of information that could support teachers in improving their practice. But, by themselves, these measures are not highly reliable, and they are only modestly related to student achievement gains. Student feedback promises greater reliability because it includes many more perspectives based on many more hours in the classroom, but not surprisingly, it is not as predictive of a teacher’s achievement gains with other students as value-added. Each shines in its own way, either in terms of predictive power, reliability, or diagnostic usefulness.

Combining new approaches to measuring effective teaching—while not perfect—significantly outperforms traditional measures. Providing better evidence should lead to better decisions. No measure is perfect. But if every personnel decision carries consequences—for teachers and students—then school systems should learn which measures are better aligned to the outcomes they value. Combining classroom observations with student feedback and student achievement gains on state tests did a better job than master’s degrees and years of experience in predicting which teachers would have large gains with another group of students. But the combined measure also predicted larger differences on a range of other outcomes, including more cognitively challenging assessments and student-reported effort and positive emotional attachment. We should refine these tools and continue to develop better ways to provide feedback to teachers. In the meantime, it makes sense to compare measures based on the criteria of predictive power, reliability, and diagnostic usefulness.
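As a rough illustration of what "combining measures" means mechanically, here is a minimal sketch of a weighted composite of the three measures. The weights are illustrative assumptions of ours, not the MET project's; its actual weighting analysis is the subject of a forthcoming report.

```python
# Illustrative sketch: combine three standardized teacher-effectiveness
# measures into one composite score. Weights are hypothetical, not MET's.

def composite_score(observation, student_survey, value_added,
                    weights=(0.25, 0.25, 0.50)):
    """Weighted average of three standardized measures (weights sum to 1)."""
    w_obs, w_survey, w_va = weights
    assert abs(w_obs + w_survey + w_va - 1.0) < 1e-9
    return w_obs * observation + w_survey * student_survey + w_va * value_added

# Example: a teacher with strong observation scores but modest value-added.
print(composite_score(observation=1.2, student_survey=0.8, value_added=0.1))
```

The design question the brief raises is exactly how to set those weights: weighting value-added heavily maximizes predictive power, while weighting observations more preserves diagnostic usefulness.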

MET Gathering Feedback Practitioner Brief

Proving SB5 unnecessary, public schools show significant gains

The freshly released 2010-2011 state report card has some great news, demonstrating that public schools in Ohio are not in some crisis, and that radical, extreme reforms are not needed for our students to receive a quality education.

The percentage of students scoring proficient on state tests increased on 21 of 26 indicators, with the strongest gains in third-grade math, eighth-grade math and 10th-grade writing. Overall, students met the state goal on 17 out of 26 indicators, one less than last year. The statewide average for all students’ test scores, known as the Performance Index, jumped 1.7 points to 95, the biggest gain since 2004-2005.

For 2010-2011, the number of districts ranked Excellent with Distinction or Excellent increased by 56 to 352. The number of schools in those same categories grew by 186 to 1,769.

76% of traditional public schools statewide have a B or better this year.

Value-Added results show whether students in grades 3-8 meet the expected one year of growth in reading and math. In 2010-2011, 79.5 percent of districts and 81.4 percent of schools met or exceeded expected Value-Added gains.

The Performance Index looks at the performance of every student, not just those who score proficient or higher. In 2010-11, 89.3 percent of districts and 71 percent of schools improved their Performance Index scores.

We'll be taking a closer look at these results and bringing you all the latest findings.

Is Gifted Education a Bright Idea?

I keep wondering whether it's the measurements that are the problem, rather than the measured outcomes. Maybe measuring progress is harder than looking at some test results. Either way, this interesting report adds another question mark to the idea of using high-stakes testing to make high-stakes decisions about teaching careers. If we can't adequately measure progress with the brightest students, taught by the best teachers, that doesn't say a lot for the whole ill-conceived enterprise.

A new working paper by the National Bureau of Economic Research, in Cambridge, Mass., evaluated the effectiveness of both in-class gifted programs and magnet schools for more than 8,000 middle school students in an unnamed Southwestern school district of more than 200,000 students.

The University of Houston researchers who conducted the study found that students in these programs were more likely than other students to do in-depth coursework with top teachers and high-performing peers. Yet students who barely met the 5th grade cutoff criteria to enter the gifted programs fared no better academically in 7th grade, after a year and a half in the program, than did similarly high-potential students who just missed qualifying for gifted identification.

"You're getting these better teachers; you're getting these higher-achieving students paired up with you," said Scott A. Imberman, an economics professor and a study coauthor. "To our surprise, what happened was very little."

Here's the paper.

Is Gifted Education a Bright Idea?