
ALEC's Extreme Legislative Agenda for 2014

We have written about ALEC and its anti-public-education agenda numerous times (see here and here). The group is set to meet in Washington, D.C., this week for its "States and Nation Policy Summit," one of the ways ALEC crafts and pushes its legislative agenda for the coming year. According to reports from PR Watch, here's what it has in store for public education.

Undermining Public Education and Lining the Pockets of For-Profit School Companies:

  • Two bills that exploit concern for young students at risk of not learning to read in order to enrich computer software companies, called the "Early Intervention Program Act" (PDF, p. 6) and the "K-1 Technology-Based Reading Intervention for English Learners Act" (PDF, p. 8). The former appears to be based on Utah's 2012 HB 513, which since its passage has enriched at least one ALEC corporation, Imagine Learning, to the tune of almost $2 million. Since Imagine did not report test scores from before and after the use of its software in the 2012-2013 school year, little is known about what benefits there may (or may not) have been to students enrolled in the new program, even as it diverted tax dollars from public schools to private corporations.
  • Another related bill, ALEC's "Student Achievement Backpack Act," also appears to be based on a Utah bill, 2013 SB 82, which provides access to student data in a "cloud-based" electronic portal format. According to Ed Week, it was inspired by a publication by Digital Learning Now!, a project of Jeb Bush's Foundation for Excellence in Education, which has ties to ALEC and is funded in part by Pearson, an international media company that bought out Connections Education, formerly a very active member of ALEC's Education Task Force.
  • Another school privatization bill, called the "Course Choice Program Act" (PDF, p. 17), which appears to be based on Louisiana's "course choice," or mini-voucher, program. That program, which began enrolling students in 2013, lets high school students take free online classes if their regular school does not offer them or has been rated a C, D, or F by the state. The Louisiana Supreme Court ruled its initial funding mechanism, drawn from the state's "Minimum Foundation Program," unconstitutional. The state's voucher program has also been challenged by the U.S. Justice Department on the grounds that it might promote segregation; a federal judge ruled this month that the federal government has the right to examine voucher assignments to make sure that's not happening, according to The Times-Picayune.

Of course, ALEC's agenda doesn't stop there; it also continues to include attacks on working people and their rights.

Undermining Workers' Rights:

  • Another bill to undermine unions, masquerading as "employee choice," called the "Public Employee Choice Act" (PDF, p. 6), is effectively "right to work" for public employees: it undermines collective bargaining by allowing workers to freeload off the benefits of union negotiations without paying the costs of union representation. The bill appears to be based on a 2014 Oregon ballot initiative, Initiative 9, whose euphemistic use of the word "choice" has been appealed to the Oregon Supreme Court. The "Public Employee Choice Act Committee" had taken in over $52,000 and spent over $36,000 as of November 25, according to campaign finance records filed with the Oregon Secretary of State, which doesn't track money raised and spent on dark-money "issue ads."
  • Further efforts to eliminate occupational licensing for any profession. Licensing helps ensure that people who want to call themselves doctors, long-haul truckers, accountants, or barbers meet basic standards of training and expertise, so that consumers are safe and get what they pay for. This extreme bill, called the "Private Certification Act" (PDF, p. 11), swims against the current of what most people want, which is to be treated by professionals who meet standards for competence and safety established by law through the democratic process.

For a full rundown of some of the policies you might see pursued in Ohio in the coming months and beyond, check out the link.

Performance Management and the Pony Express

From the Harvard Business Review, a look at how unreliable the kinds of performance measures being implemented in education are, and why business is abandoning the practice.

Microsoft has decided to dump the practice of rating individuals’ performance on a numerical scale – a decision I applauded in a recent post. I argued that such rating systems don’t accomplish the task managers expect from them, which is to accelerate the performance of their people. At best, they serve other goals: allocating compensation fairly, and aligning each individual’s goals with the values and strategies of the company.

However, even if these were sufficient goals, managers would still be frustrated by how poorly ratings-based Human Capital Management (HCM) systems achieve them. Here are the two intractable problems with today’s approach.

False Precision

All current HCM systems are based on the notion that a manager can be guided to become a reliable rater of another person’s strengths and skills. The assumption is that, if we give you just the right scale, and just the right words to anchor that scale, and if we tell you to look for certain behaviors, and to rate this person a “4” if you see these behaviors frequently, and a “3” if you see them less frequently, then, over time, you and your fellow managers will become reliable raters of other people’s performance. Indeed, your ratings will come to have such high inter-rater reliability (meaning that two managers would give the same employee’s performance the same rating) that the company will use your ratings to pinpoint low performers, promote top performers, and pay everyone.
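
The parenthetical definition of inter-rater reliability is worth unpacking, because it is a quantity you can actually compute. Below is a minimal sketch using Cohen's kappa, one common agreement statistic; the article does not name a specific measure, and the ratings in the example are made up purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)

    # Observed agreement: share of people both managers rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement implied by each rater's own distribution of scores.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())

    return (observed - expected) / (1 - expected)

# Hypothetical 1-4 ratings of the same eight employees by two managers.
manager_1 = [4, 3, 3, 2, 4, 1, 3, 2]
manager_2 = [3, 3, 4, 2, 2, 1, 4, 3]

print(f"kappa = {cohens_kappa(manager_1, manager_2):.2f}")  # values near 0 mean agreement little better than chance
```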

Unfortunately there is no evidence that this happens. Instead, an overwhelming amount of evidence shows that each of us is a horribly unreliable rater of another person’s strengths and skills. It appears that, when it comes to rating someone else, our own strengths, skills, and biases get in the way and we end up rating the person not on some wonderfully objective scale, but on our own scale. Our rating of the other person simply answers the question: “Does she have more or less of this strength or skill than I do?” If she does, her rating is high; if she doesn’t, it is low. Thus our rating is really a rating of us, not of her.

Some companies have tried to neutralize this effect by training the manager how to look for specific clues to the desired strength or skill. This may result in managers becoming more observant, but it doesn’t turn them into better raters. This inability to rate reliably is so entrenched that even when organizations spend millions of hours and dollars training up a roster of experts whose only job is rating, they still don’t get the reliability they seek.

As an example, over the last few years every US state has done precisely that. Each state created a cadre of experts to evaluate, in extraordinary detail, the performance of teachers. One would have expected variation, with some good teachers, some not so good, and some differently good reflected in a range of ratings from the experts. But as The New York Times reported earlier this year, the results of these ratings have revealed alarmingly little variation. These expert raters are simply not very reliable.

Scour the literature and you will discover similar studies all confirming our struggles with rating the strengths and skills of others. Our ratings of others certainly look precise. They look like objective data. But they aren’t. They offer precision, but it is a false precision. So when we decide to promote someone based upon their “4” rating, or when we say that a certain choice assignment is open only to those employees who scored an “exceeds expectations” rating, or when we pay someone based on these ratings, or suggest a particular training course based upon them, we are making decisions on bad data. Earlier this month, in a spirited defense of the forced curve, Jack Welch advocated rating people on lists of competencies so that you can, in his words, “let them know where they stand.” This is a worthy sentiment, but given how poor we are as raters, competency ratings will only ever serve to confuse people as to where they stand. As they say in the data world: “Garbage in, garbage out.”

Bad practice, streamlined

We know how great managers manage. They define very clearly the outcomes they want, and then they get to know the person in as much detail as possible to discover the best way to help this person achieve the outcomes. Whether you call this an individualized approach, a strengths-based approach, or just common sense, it’s what great managers do.

This is not what our current performance management systems do. They ignore the person and instead tell the manager to rate the person on a disembodied list of strengths and skills, often called competencies, and then to teach the person how to acquire the competencies she lacks. This is hard, and not just the rating part. The teaching part is supremely tricky — after all, what is the best way to help someone learn how to be a better “strategic thinker” or to display “learning agility?” In recognition of just how hard this is, current performance management systems attempt to streamline the process by supplying the manager with writing tips on how to phrase feedback about the person’s competencies, or lack thereof, and then by integrating the competency rating with the company’s Learning Management System so that it spits out a training course to fix a particular competency “gap.”

The problem with all of this is not just the lack of credible research proving that the best performers possess the entire list of competencies, or any showing that if you acquire competencies you lack, your performance improves – or even that, as I described above, managers are woefully inaccurate at rating the competencies of others. No, the chief problem with all of this is that it is not what the best managers actually do.

They don’t look past the real person to a list of theoretical competencies. Instead the person, with her unique mix of strengths and skills, is their singular focus. They know they can’t ignore the individual. After all, the person’s messy uniqueness is the very raw material they must mold, shape, and focus in order to create the performance they want. Cloaking it with a generic list of competencies is inherently counter-productive.

Some say that we need to rate people on their competencies because this creates “differentiation,” a necessary practice of great companies. Of course they are right in theory — companies need to be able to differentiate between their people. But the practice is outdated. Differentiation cannot mean rating people on a pre-set list of competencies. These competencies are, by definition, formulaic and so they will actually serve to limit differentiation. True differentiation means focusing on the individual — understanding the strengths of each individual, setting the right expectations for each individual, recognizing the individual, putting the right career plan together for the individual. This is what the best managers do today. They seek to understand, and capitalize on the whole individual. This is hard enough to do when you work with the person every day. It’s nigh on impossible when you are expected to peer through the filter of a formula.

Telegraph Trumps Pony Express

In 1850 it took the average piece of mail five weeks to travel from St. Joseph, Missouri to the California coast. This was frustrating, since in 1848 somebody had discovered gold in the California hills and the wild and crazy rush was on. America was moving west and needed a much more efficient, streamlined way to communicate with its West Coast, full of riches. The Pony Express was the answer. Four hundred horses. A hundred and fifty small wiry riders. Two hundred stations, and the innovation of lightweight, leather cantinas to carry the mail westward. It was a fantastically complicated arrangement requiring careful forethought, detailed planning, and not inconsiderable daring. And, having woven together this complicated system, the inventors managed to streamline the process so well that, on its very first journey, what was once a five-week trek turned into a ten-day sprint from St. Joe to Sacramento. Speeches were made, fireworks fired, a great innovation was celebrated.

And then, Baron Pavel Schilling destroyed it all.

He didn’t do it deliberately of course. But he did invent the telegraph. And with that one invention, that one concept, he created a new worldview, one that rendered obsolete the entire system that they had worked so hard to streamline.

Our current performance management systems are the Pony Express — worthy efforts to streamline a labor-intensive, time-consuming, and unnecessarily complicated process. Who is our Baron Schilling? Well let’s give that role to Microsoft’s Lisa Brummel, the executive who declared “no more ratings.”

And then there’s the biggest question. What’s the telegraph? A topic for the next post.

Dysfunction at Board of Ed by Design

Following up on our earlier piece, the AP reports:

Four members of the state’s school board have ties to businesses that have a stake in education funding and regulation, the Akron Beacon Journal reported Monday.

Two board members are lobbyists whose clients sometimes compete for education money from the state while another board member’s husband is a lobbyist for private schools.

A fourth board member is president of a private college whose school generates income from public education programs administered by the board, according to the newspaper, which worked with the NewsOutlet journalism program based at Youngstown State University.

Board members said they will abstain from votes when there is a potential conflict, police themselves and file required paperwork with the Ohio Ethics Commission.

The ethics commission says Ohio law prohibits state board members from receiving compensation for services they perform on a matter that is before the board they serve.

Members of the state Legislature cannot lobby while in office, but that rule doesn’t apply to board members, said Paul Nick, executive director of the Ohio Ethics Commission.

The reasons Ohio's State Board of Education has become so dysfunctional are becoming very apparent.

State Board Driven by Ideology

An eye-opening piece from the ABJ detailing how far the State Board of Education has been corrupted since voters approved its creation:

If Ohio had an all-elected state board of education as it did about 20 years ago, the current state superintendent probably wouldn’t have his job, and the school board president likely would have gotten the boot.

The reason is, the independent representative school board created by voters 60 years ago this month no longer exists.

In 1995, the legislature added eight more chairs to the 11 elected seats at the table, to be filled by the governor, and for all practical purposes, took the board out of the hands of voters and made Ohio one of only three states to have a hybrid membership.

The reason for the change: The elected 11 had endorsed a lawsuit called Nathan DeRolph vs. State of Ohio, alleging that the legislature and governor were not adequately funding public education. The governor and legislature were unhappy and changed the membership.

Now, the education of 1.8 million children is in the hands of a board that swings as far left or right as the ruling party wants it to go.

That change assured that in February this year, board president Debe Terhar, a tea party activist, held her post when she came under fire for a controversial Facebook post featuring Adolf Hitler regarding gun regulation. The majority of the elected board members voted to oust her, but the appointed members overruled them.

In March, the majority of elected members voted against hiring Gov. John Kasich’s chief education adviser, Richard Ross, as state superintendent. The appointed members put him over the top.

Today, the fact that two of 19 seats are empty — and have been for months — is of little concern because the majority represents the administration and has firm control. The board looks like this:

  • Eight of nine board committees are chaired by white men, although board gender is 9-8 male.
  • Seven of nine committees are chaired by appointees, although appointed members are outnumbered 10-7.
  • Of the seven appointees seated today, all are white and one is female.
  • The only African-American member, elected from Dayton, intends to resign by the end of the year to take a seat in city government. African-Americans account for 13 percent of Ohio’s population.
  • With the resignation of the Dayton representative, there is only one remaining member who lives in an urban district. Her vote represents about 6 percent of the 17 members, while urban districts account for about 25 percent of the Ohio student population.
  • 12 Republicans account for 70 percent of the current board’s voting power, compared with 36 percent of the state electorate registered as Republican.
  • Almost all appointees are significant Republican donors, organizers or fundraisers.
  • About a third of the members attended private schools or sent their kids to private schools. About 10 percent of the state’s students attend private schools.
  • Although the majority advocates for charter schools, which account for a little less than 10 percent of state enrollment, not one has a child in a charter school.
  • Home schoolers, who strongly oppose government intrusion into their business and represent about 2 percent of the student population, unified last year to elect one member from rural Northeast Ohio. Their representative has never had a relationship with public education and identified her primary mission as assuring that home schooling is left alone.

Continue reading...

Ohio's charter e-schools are draining funding

Two of Ohio's most prevalent charter e-schools, ECOT and the Ohio Virtual Academy (OVA), are draining resources at a drastic rate from traditional public schools that perform at much higher levels.

ECOT drained $88,370,050.21 for just 13,721.54 students in 2013. Its graduation rate is 35.3% in four years and 37.8% in five years, according to the state report card.

Between 2004 and 2013, ECOT received $545,863,933.98 from Ohio school districts. As Ohio E & A says, that is more than half a billion dollars for an extremely inadequate educational venture.

Ohio Virtual Academy (OVA) siphoned $72,764,774.45 in 2013 for just 11,822.98 students. Its graduation rate is an anemic 41.6% in four years and 38.6% in five years.

Between 2003 and 2013, OVA removed $388,613,423.52 from traditional public schools.
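
To put those totals in per-pupil terms, here is a rough back-of-the-envelope calculation using only the 2013 figures quoted above. The fractional student counts are presumably full-time-equivalent enrollments, and the per-pupil division is ours, not drawn from any of the sources cited.

```python
# Rough per-pupil funding for 2013, computed from the totals and (presumably
# full-time-equivalent) student counts quoted above. The division is ours,
# done only with the figures cited in this post.

schools = {
    "ECOT": (88_370_050.21, 13_721.54),
    "Ohio Virtual Academy": (72_764_774.45, 11_822.98),
}

for name, (funding_2013, fte_students) in schools.items():
    print(f"{name}: about ${funding_2013 / fte_students:,.0f} per FTE student in 2013")
```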

How much longer can we allow hundreds of millions of dollars of taxpayer money to be funneled to failing for-profit e-schools?

Studies show merit pay performance disaster

The US Department of Education has listed three new studies of teacher merit pay. All three looked at the NYC merit pay system and concluded that it has been a disaster for student performance.

Study 1: “Teacher Incentives and Student Achievement: Evidence from New York City Public Schools”

Here's what they found:

Study authors reported that the bonus program had statistically significant negative impacts on middle school achievement in math (author-reported effect size of –0.05) and English language arts (effect size of –0.03). In addition, the authors reported a statistically significant difference of –4.4 percentage points in high school graduation rates, reflecting lower graduation rates among students in intervention schools.

The study found that the teacher performance bonus program had no statistically significant impacts on elementary school achievement or teacher retention.
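
For scale: assuming the effect sizes quoted above are standardized mean differences, which is the usual convention in these evaluations though the summary does not say so explicitly, they can be read as follows.

```latex
% Standardized mean difference (Cohen's d-style), assumed here to be the
% measure behind the effect sizes quoted above:
\[
  d = \frac{\bar{x}_{\text{bonus schools}} - \bar{x}_{\text{comparison schools}}}{s_{\text{pooled}}}
\]
% On this reading, an effect size of -0.05 means students in bonus-program
% schools scored, on average, about 0.05 pooled standard deviations lower
% in math than students in comparison schools.
```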

Study 2: “Teacher Incentive Pay and Educational Outcomes: Evidence from the New York City Bonus Program”

Here's what they found:

The study found that the offer of a schoolwide teacher performance bonus program did not have a statistically significant effect on students’ reading achievement in either 2007–08 or 2008–09 or on mathematics achievement in 2007–08. For 2008–09, study authors reported a very small, but statistically significant, negative effect of the bonus program on mathematics achievement.

Study 3: “A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report”

Here's what they found:

The study found that the New York City Schoolwide Performance Bonus Program had no discernible impact on school Progress Report scores.

Merit pay doesn't work; every study that has looked into the issue has found the same troubling results. Why, then, do corporate reformers continue to pursue the idea?